
Polaris of Enlightenment

Artificial Intelligence and the Power of Language

The future of AI

How the mastery of language may be driving emergent abilities in Large Language Models, and what this means.

Updated May 8, 2024, Published May 7, 2024 – By Thorsteinn Siglaugsson

This is an opinion piece. The author is responsible for the views expressed in the article.

A few days ago, Jamie Dimon, CEO of JPMorgan Chase, said that the advent of artificial intelligence could be likened to the discovery of electricity, so profound will be the societal changes it brings about. Artificial intelligence is certainly nothing new in banking; it has been used there for decades. What is driving the current discussion about its impact, however, is the emergence of large language models like ChatGPT. This is the major change, not only in the corporate world but also in everyday life.

Large language models are unlike other AI tools in that they have mastered language: we can communicate with them in ordinary language. Technical knowledge is thus no longer a prerequisite for using artificial intelligence in life and work; instead, expressive ability and understanding of language are key. But the development of these models, and the research into them, also vividly reminds us that language itself is the true prerequisite for human society.

Theory of Mind: Getting Into the Minds of Others

Large language models function differently from conventional software because they evolve and change in ways their developers and operators cannot necessarily foresee. One striking example concerns the ability to put oneself in the mind of another person, which has generally been considered unique to humans. This ability, known in psychology as "theory of mind," refers to an individual's capacity to form a "theory" of what another person's mental world is like. It is fundamental to human society; without it, it is hard to see how any society could thrive. Here is a simple puzzle used to test it:

"There is a bag filled with popcorn. There is no chocolate in the bag. Yet the label on the bag says “chocolate” and not “popcorn.” Sam finds the bag. She had never seen the bag before. She cannot see what is inside the bag. She reads the label.”

The question is: what does Sam think is in the bag? The right answer, of course, is that she thinks there is chocolate in the bag, because that is what the label says. When Michal Kosinski, an adjunct professor at Stanford University, tested last year whether the first language models could handle this task, the result was negative: GPT-1 and GPT-2 both answered incorrectly. But then he tried the next generation, GPT-3, which managed this type of task in 40% of cases. GPT-3.5 managed it in 90% of cases and GPT-4 in 95% of cases.1

Emergent Capabilities of Large Language Models

This capability came as a surprise, as nothing had been done to build theory of mind into the models. They simply acquired it on their own as they grew larger and the volume of data they were trained on increased. According to Kosinski, this could happen because of the models' ability to use language.

Another example I stumbled upon by chance recently: after I had posed a puzzle to GPT-4, it asked me whether I had tried to solve the puzzle myself. The models certainly ask questions all the time; that is nothing new, as they aim to elicit more precise instructions. But this question was of a different nature. I answered yes and also mentioned that this was the first time I had received a question of this kind from the model. "Yes, you are observant," GPT-4 replied. "With this I am trying to make the conversation more natural."

Does this new development mean that artificial intelligence truly puts itself in the minds of others? Does it mean it thinks, that it has feelings, opinions, an interest in the viewpoints and experiences of others? Of course, we cannot draw that conclusion. But it does mean that the behavior of the models is becoming increasingly similar to how we use language when we interact with each other. In this sense, we could actually speak of the mind of an AI model, just as we use theory of mind to reason about the minds of other humans.

The Power of Language

The language models draw our attention to the importance of language and how it underpins our societies and our existence. We now have a technology that is increasingly adept at using language, that possesses vastly more knowledge than any individual could acquire in a lifetime, and that can perform many tasks far faster than we can. Used correctly, it can greatly enhance our productivity, our reasoning, and our decisions, and in this way help us gain more leisure time and improve our quality of life.

The comparison to the discovery of electricity is apt. Some might even want to go further and liken this revolution to the advent of language itself, which could be supported by pointing to the spontaneous capabilities of the models, such as theory of mind, which they achieve through nothing but the very ability to use language. What happens then if they evolve further than us, and could that possibly happen?

The fact that artificial intelligence has mastered language is a revolution that will lead to fundamental changes in society. The challenge we now face, each and every one of us, is to use it in a structured way, to our advantage, and avoid the pitfall of outsourcing our own thinking and decisions to it. The best way to do this is to enhance our own understanding of language, our expressive ability, and our critical thinking skills.

 

Thorsteinn Siglaugsson

 


  1. Kosinski, Michal: "Theory of Mind May Have Spontaneously Emerged in Large Language Models," Stanford, 2023. https://stanford.io/4aQosLV

Thorsteinn Siglaugsson is an Icelandic economist, consultant, and writer, and chairman of the Icelandic Free Speech Society. He is the author of "From Symptoms to Causes" (Amazon) and a regular contributor to The Daily Sceptic, Conservative Woman, and Brownstone Institute. Siglaugsson also writes on Substack.


Musk plans data centers in space using Starlink satellites

The future of AI

Published November 2, 2025 – By Editorial staff
Photo: Space X

Elon Musk's space company SpaceX has announced plans to build data centers in space based on its Starlink satellites. Interest in space-based data storage is surging among tech giants as artificial intelligence demands ever more computing power.

Artificial intelligence is driving a growing need for data storage and processing power, prompting several tech companies to turn their attention to space. After former Google CEO Eric Schmidt acquired space company Relativity Space in May, and Amazon founder Jeff Bezos predicted gigawatt-scale data centers in space within 10 to 20 years, Elon Musk is now entering the race.

In a post on social media platform X, Musk explained that SpaceX satellites could be used for this purpose. "Simply scaling up Starlink V3 satellites, which have high speed laser links would work. SpaceX will be doing this", he wrote in response to an article about the potential for space-based data centers.

Musk's announcement dramatically raises the profile of this emerging industry. SpaceX's Starlink constellation is already the world's dominant space-based infrastructure, and the company has demonstrated it can profitably deliver high-speed broadband to millions of customers worldwide.

Free energy and no environmental costs

Advocates for space-based data centers highlight clear advantages: unlimited and free energy from the sun, as well as the absence of environmental costs associated with building these facilities on Earth, where opposition to energy-intensive data centers has begun to grow.

Critics argue, however, that it is economically impractical to build such facilities in space and that proponents underestimate the technology required to make it work.

Caleb Henry, research director at analytics firm Quilty Space, believes the development is worth watching closely.

— The amount of momentum from heavyweights in the tech industry is very much worth paying attention to. If they start putting money behind it, we could see another transformation of what's done in space, he says in an interview.

Tenfold capacity

SpaceX's current Starlink V2 mini satellites have a maximum download capacity of approximately 100 Gbps. The upcoming V3 satellite is expected to increase this capacity tenfold, to 1 Tbps. This is not an unprecedented capacity for individual satellites – telecom company Viasat has built a geostationary satellite with the same capacity that will soon be launched – but it is unprecedented at the scale SpaceX is planning.

The company intends to launch around 60 Starlink V3 satellites with each Starship rocket launch. These launches could occur as early as the first half of 2026, as SpaceX has already tested a satellite dispenser on Starship.
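Taken together, the figures reported above imply a striking aggregate capacity per launch. A minimal back-of-the-envelope sketch, using only the numbers cited in this article (not official SpaceX specifications):

```python
# Figures as reported in the article; treat them as rough planning numbers,
# not confirmed SpaceX specifications.
V2_MINI_GBPS = 100                  # current Starlink V2 mini downlink capacity
V3_GBPS = V2_MINI_GBPS * 10         # V3 expected to be roughly tenfold: 1 Tbps
SATS_PER_STARSHIP = 60              # planned V3 satellites per Starship launch

# Aggregate downlink capacity added by a single Starship launch, in Tbps
per_launch_tbps = V3_GBPS * SATS_PER_STARSHIP / 1000

print(f"V3 per satellite: {V3_GBPS} Gbps")
print(f"Aggregate per Starship launch: {per_launch_tbps:.0f} Tbps")
```

On these assumptions, each Starship launch would add some 60 Tbps of downlink capacity to the constellation, which is what makes the scale of the plan unprecedented.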

— Nothing else in the rest of the satellite industry comes close to that amount of capacity, Henry notes.

Exactly what a "scaling up" of Starlink V3 satellites would entail is not yet clear, but the trajectory speaks for itself. The first operational Starlink satellites, launched just over five years ago, had a mass of approximately 300 kg and a capacity of 15 Gbps; Starlink V3 satellites will likely weigh 1,500 kg.

Musk praises Google’s quantum breakthrough: “Starting to become relevant”

The future of AI

Published October 23, 2025 – By Editorial staff
Google's quantum computer chip Willow running the Quantum Echoes algorithm is 13,000 times faster than classical supercomputers.

Google has developed a quantum computing algorithm that, according to the company, opens up practical applications in areas including pharmaceutical research and artificial intelligence. The new algorithm is several thousand times faster than classical supercomputers.

Google announced on Wednesday that the company has successfully developed and verified the Quantum Echoes algorithm on its Willow quantum computing chip. The algorithm is 13,000 times faster than the most advanced classical computing algorithms running on supercomputers.

According to the company's researchers, Quantum Echoes could be used in the future to measure molecular structures, which could facilitate the development of new pharmaceuticals. The algorithm may also help identify new materials in materials science.

Another application is generating unique datasets for training AI models, particularly in areas such as life sciences where available datasets are limited.

— If I can't prove that data is correct, how can I do anything with it? said Google researcher Tom O'Brien, explaining the importance of the algorithm being verifiable.

Details about Quantum Echoes were published in the scientific journal Nature. Entrepreneur Elon Musk congratulated Google on X and noted that quantum computing is starting to become relevant.

Alphabet's Google is competing with other tech giants such as Amazon and Microsoft to develop quantum computers that can solve problems beyond the reach of today's computers.

Over half a billion Chinese users embrace generative AI

The future of AI

Published October 22, 2025 – By Editorial staff
AI services are used for intelligent search, content creation, as productivity tools, and in smart hardware.

The number of users of generative artificial intelligence in China has increased sharply during the first half of 2025. In June, 515 million Chinese people had access to AI services – an increase of 266 million in six months, according to official Chinese figures.

The data comes from a report presented on Saturday by the China Internet Network Information Center. It notes that domestically developed AI models have become popular among users.

A survey included in the report shows that over 90 percent of users say they prefer Chinese AI models.

Generative AI is being used in areas such as intelligent search, content creation, productivity tools and smart hardware. The technology is also being tested in agriculture, manufacturing and research.

The majority of users are young and middle-aged with higher education. Among users, 74.6 percent are under 40 years old, while 37.5 percent hold college, bachelor's or higher degrees.

The report claims that China has become increasingly important in the global AI field. As of April, the country had filed approximately 1.58 million AI-related patent applications, representing 38.58 percent of the global total – the most in the world.

OpenAI launches AI-powered browser – challenges Google with ChatGPT Atlas

The future of AI

Published October 21, 2025 – By Editorial staff
Users should be aware that ChatGPT stores all conversation data sent to the service.

OpenAI on Tuesday unveiled its new AI-based browser ChatGPT Atlas, a significant step in the company's ambition to compete with Google as the primary source for information searches on the internet. The service, initially rolling out for macOS with support for Windows, iOS and Android coming soon, will be available to all users from the start.

Browsers have quickly become the next major battleground in the AI industry. Despite Google Chrome's long-standing market dominance, a transformative shift is now underway as AI chatbots and intelligent agents change how people work online. Several startup companies have already launched their own AI-powered browsers, including Perplexity's Comet and The Browser Company's Dia. Google and Microsoft have also updated Chrome and Edge respectively with AI features.

OpenAI's chief technology officer for Atlas, Ben Goodger, emphasized in a livestream on Tuesday that ChatGPT forms the core of the company's first browser. In ChatGPT Atlas, users can engage in dialogue with their search results, similar to the functionality in Perplexity or Google's AI mode, writes TechCrunch.

Side panel and web history

The most prominent feature in AI-based browsers has been the built-in chatbot in a side panel that automatically receives context from what is displayed on screen. This eliminates the need to manually copy and paste text or drag files to ChatGPT. OpenAI's product manager Adam Fry confirmed that ChatGPT Atlas also includes this feature.

Additionally, ChatGPT Atlas has a "web history," which means ChatGPT can now log which websites the user visits and what is done on them, then use the information for more personalized responses.

AI-based browsers also contain agents designed to automate web-based tasks. In TechCrunch's tests, early versions of these agents worked well for simple tasks but struggled to handle more complex problems reliably.

Warning: OpenAI stores user data

Users should be aware that ChatGPT stores all conversation data. According to OpenAI's official data storage guidelines, deleted conversations are saved for up to 30 days in the company's system, unless legal obligations require longer storage. This applies even when users actively delete their chats.

Furthermore, OpenAI uses conversations to improve its services. Following a court order in the New York Times lawsuit against the company, OpenAI is now required to retain all chats from non-business customers indefinitely, meaning that for many users data is no longer deleted at all.

