
Artificial Intelligence and the Power of Language

The future of AI

How the mastery of language may be driving emergent abilities in Large Language Models, and what this means.

Published 7 May 2024
– By Thorsteinn Siglaugsson

4 minute read
This is an opinion piece. The author is responsible for the views expressed in the article.

A few days ago Jamie Dimon, CEO of JPMorgan Chase, likened the advent of artificial intelligence to the discovery of electricity, so profound are the societal changes he expects it to bring about. Artificial intelligence is certainly nothing new in banking; it has been used there for decades. What is driving the discussion now, however, is the emergence of large language models like ChatGPT. This is the major change, not only in the corporate world but also in everyday life.

Large language models differ from other AI tools in that they have mastered language: we can communicate with them in ordinary language. Technical knowledge is thus no longer a prerequisite for using artificial intelligence in life and work; instead, expressive ability and understanding of language are key. But the development of these models, and research into them, also vividly reminds us that language itself is the true prerequisite for human society.

Theory of Mind: Getting Into the Minds of Others

Large language models work differently from conventional software in that they evolve and change without their developers and operators necessarily foreseeing those changes. One example is theory of mind. The ability to put oneself in the mind of another person has generally been considered unique to humans. This ability, known in psychology as “theory of mind,” is an individual’s capacity to form a “theory” of what another person’s mental world is like. It is fundamental to human society; without it, it is hard to see how any society could thrive. Here is a simple puzzle of this kind:

“There is a bag filled with popcorn. There is no chocolate in the bag. Yet the label on the bag says ‘chocolate’ and not ‘popcorn.’ Sam finds the bag. She had never seen the bag before. She cannot see what is inside the bag. She reads the label.”

The question is: what does Sam think is in the bag? The right answer, of course, is that she thinks there is chocolate in the bag, because that is what the label says. When Michal Kosinski, an adjunct professor at Stanford University, tested last year whether the first language models could handle this task, the result was negative. GPT-1 and GPT-2 both answered incorrectly. But then he tried the next generation, GPT-3, and it managed this type of task in 40% of cases. GPT-3.5 managed it in 90% of cases and GPT-4 in 95% of cases [1].

Emergent Capabilities of Large Language Models

This capability came as a surprise: nothing had been done to build theory of mind into the models. They simply acquired it on their own as they grew larger and as the volume of data they were trained on increased. That this could happen at all rests on the models’ ability to use language, says Kosinski.

Another example I recently stumbled upon myself by chance: after I had posed a puzzle to GPT-4, it asked whether I had tried to solve the puzzle myself. The models ask questions all the time, of course; that is nothing new, but those questions aim at getting more precise instructions. This question was of a different nature. I answered yes, and mentioned that this was the first time I had received a question of this kind from the model. “Yes, you are observant,” GPT-4 replied. “With this I am trying to make the conversation more natural.”

Does this development mean that the artificial intelligence truly puts itself in the minds of others? Does it mean it thinks, that it has feelings, opinions, an interest in the viewpoints and experiences of others? Of course we cannot draw that conclusion. But it does mean that the behavior of the models is becoming ever more similar to how we use language when we interact with each other. In this sense we could indeed speak of the mind of an AI model, just as we use theory of mind to make inferences about the minds of other humans.

The Power of Language

The language models draw our attention to the importance of language and to how it underpins our societies and our existence. We now have a technology that is increasingly adept at using language, that commands vastly more knowledge than any individual could acquire in a lifetime, and that can perform many tasks much faster than we can. Used correctly, it can greatly enhance our productivity, our reasoning, and our decisions, and thereby give us more leisure time and a better quality of life.

The comparison to the discovery of electricity is apt. Some might even want to go further and liken this revolution to the advent of language itself, which could be supported by pointing to the spontaneous capabilities of the models, such as theory of mind, which they achieve through nothing but the very ability to use language. What happens then if they evolve further than us, and could that possibly happen?

The fact that artificial intelligence has mastered language is a revolution that will lead to fundamental changes in society. The challenge we now face, each and every one of us, is to use it in a structured way, to our advantage, and avoid the pitfall of outsourcing our own thinking and decisions to it. The best way to do this is to enhance our own understanding of language, our expressive ability, and our critical thinking skills.

 

Thorsteinn Siglaugsson

 


  1. Kosinski, Michal: “Theory of Mind May Have Spontaneously Emerged in Large Language Models”, Stanford University, 2023. https://stanford.io/4aQosLV

Thorsteinn Siglaugsson is an Icelandic economist, consultant and writer, and chairman of the Icelandic Free Speech Society. He is the author of “From Symptoms to Causes” (Amazon) and a regular contributor to The Daily Sceptic, Conservative Woman and Brownstone Institute. He also writes on Substack.


OpenAI launches AI-powered browser – challenges Google with ChatGPT Atlas

The future of AI

Published 21 October 2025 at 22:48
– By Editorial Staff
2 minute read

OpenAI on Tuesday unveiled its new AI-based browser ChatGPT Atlas, a significant step in the company’s ambition to compete with Google as the primary source for information searches on the internet. The service, initially rolling out for macOS with support for Windows, iOS and Android coming soon, will be available to all users from the start.

Browsers have quickly become the next major battleground in the AI industry. Despite Google Chrome’s long-standing market dominance, a transformative shift is now underway as AI chatbots and intelligent agents change how people work online. Several startup companies have already launched their own AI-powered browsers, including Perplexity’s Comet and The Browser Company’s Dia. Google and Microsoft have also updated Chrome and Edge respectively with AI features.

OpenAI’s chief technology officer for Atlas, Ben Goodger, emphasized in a livestream on Tuesday that ChatGPT forms the core of the company’s first browser. In ChatGPT Atlas, users can engage in dialogue with their search results, similar to the functionality in Perplexity or Google’s AI Mode, writes TechCrunch.

Side panel and web history

The most prominent feature in AI-based browsers has been the built-in chatbot in a side panel that automatically receives context from what is displayed on screen. This eliminates the need to manually copy and paste text or drag files to ChatGPT. OpenAI’s product manager Adam Fry confirmed that ChatGPT Atlas also includes this feature.

Additionally, ChatGPT Atlas has a “web history,” which means ChatGPT can now log which websites the user visits and what is done on them, then use the information for more personalized responses.

AI-based browsers also contain agents designed to automate web-based tasks. In TechCrunch’s tests, early versions of these agents worked well for simple tasks but struggled to handle more complex problems reliably.

Warning: OpenAI stores user data

Users should be aware that ChatGPT stores all conversation data. According to OpenAI’s official data storage guidelines, deleted conversations are saved for up to 30 days in the company’s system, unless legal obligations require longer storage. This applies even when users actively delete their chats.

Furthermore, OpenAI uses conversations to improve its services. Following a court order in The New York Times’ lawsuit against the company, OpenAI is now required to retain all chats from non-business customers, meaning data is no longer deleted at all for many users.

AI boom strengthens the Swedish krona

The future of AI

Published 17 October 2025
– By Editorial Staff
1 minute read

The investment boom in artificial intelligence is beginning to make its mark on European currency markets for the first time, and according to analysts, the Swedish krona and the British pound are benefiting the most.

The United Kingdom and Sweden each received over $4 billion in private AI investments last year, placing them third and fourth respectively in the Stanford University AI Index of countries benefiting most from such investments, after the United States and China.

The Swedish krona is the strongest European currency against the weak dollar so far this year, with a rise of nearly 15%. The pound has risen 7%, reports Reuters.

Major American tech companies such as Microsoft, Meta, Google and Nvidia have announced significant investments in both countries. Microsoft has pledged £31 billion in British investments, while several tech companies are planning data centers in Sweden due to the country’s reliable electricity supply.

According to JPMorgan, the resilience of the Swedish krona and the pound can partly be explained by these countries’ standout performance in AI investments, although the effect remains relatively small so far.

Researcher: We risk losing ourselves to AI gods

The future of AI

Published 10 October 2025
– By Editorial Staff
"We sell our ability to be courageous and buy security from the machines", argues researcher Carl Öhman.
3 minute read

The upcoming book “Gods of Data” examines AI from a religion-critical perspective. Carl Öhman, a researcher at the Department of Government at Uppsala University in Sweden, argues that today’s AI systems can be compared to humanity’s relationship with gods, and investigates what would happen if power were handed over to AI completely.

As AI has developed, a tool initially used as a more personal version of Google has also taken a place as an advisor in the home. People increasingly turn to AI with more personal questions, from healthcare advice and psychology to relationship counseling.

Öhman argues that AI has begun to resemble a god, that is, a “kind of personified amalgamation of society’s collective knowledge and authority”, and in a research project he studies what humanity would lose if it allowed itself to be completely governed by technology – even a flawless one.

In a thought experiment, he describes how AI might affect, for example, an everyday couple who have started arguing over different values and who turn to an AI relationship counselor for help.

— They ask: ‘Hi, should we continue being together?’ The AI has access to all their data: their DNA, childhood photos, everything they’ve ever written and searched for, and so on, and it has been trained on millions of similar couples. It says: ‘With 98 percent probability this will end in catastrophe. You should break up today. In fact, I’ve already found replacement partners for you who are much better matches’, he says in the Research Podcast.

Buying security

Öhman argues that even if there are no rational reasons why the couple shouldn’t obey the AI and break up, one gets a feeling here of having lost something. And in this particular case, the couple would lose faith in themselves and their relationship.

— Love is always a risk. All interpersonal relationships carry the risk of being betrayed, of being hurt, of something going wrong. We can absolutely use technology to minimize that risk, perhaps even eliminate it entirely. The point is that something is then lost. We sell our ability to be brave and buy security from the machines, he says.

World daddy in AI form

The research project also examines other relationships where AI has taken an increasingly larger role, for example parenthood. Today there are a number of AI apps designed to help adults handle their relationship with their children. Among other things, this can involve AI giving personalized responses or trying to prevent conflicts from arising.

— Just as in the example of the young couple, something is lost here. In this particular chapter I use Sigmund Freud and his idea that belief in God is a kind of refusal to be an adult – that there is some kind of world daddy who ultimately always has the right answers. And here it becomes much the same: there is a world daddy in the form of AI who becomes the real parent in your relationship with your children, and you increasingly identify as a kind of child of the AI parent who has the final answers, he says.

Handing over power over ourselves

Öhman argues that it might feel nice to avoid getting your heart broken, or to prevent conflicts with your children, but that one must be aware that there is a price when AI is given power. When people talk about AI taking over, he notes, it is usually imagined as a violent event, where “the machines come and take our lives from us.”

— But the point of my book, and of this project, is that it is we who hand over power over our lives, our courage, our faith, and ultimately ourselves, he says.

Professor: We’re trading source criticism for speedy AI responses

The future of AI

Published 9 October 2025
– By Editorial Staff
AI has become a natural companion in our daily lives – but what happens if we stop thinking for ourselves and take the chatbot's answers as truth?
2 minute read

Professor Olof Sundin warns that generative AI undermines our fundamental ability to evaluate information. When sources disappear and answers are based on probability calculations, we risk losing our capacity for source criticism.

— What we see is a paradigm shift in how we traditionally search, evaluate and understand information, states Sundin, professor of library and information science at Lund University in southern Sweden.

When we google, we get links to sources whose credibility we can, if we wish, examine and assess. In language models like ChatGPT, users get a ready-made answer, but the sources often become invisible and are frequently absent altogether.

— The answer is based on probability calculations of the words you’re interested in, not on verifiable facts. These language models guess which words are likely to come next, explains Olof Sundin.

Without sources, transparency disappears and the responsibility for evaluating the information presented falls entirely on the user.

— It’s very difficult to evaluate knowledge without sources if you don’t know the subject, since it’s a source-critical task, he explains.

“More dependent on the systems”

Some AI systems have tried to meet this criticism through RAG (Retrieval-Augmented Generation), where the language model summarizes information from actual sources, but research shows a concerning pattern.

— Studies from, for example, the Pew Research Center show that users are less inclined to follow links than before. Fewer clicks on original sources, like blogs, newspapers and Wikipedia, threaten the digital knowledge ecosystem, argues Sundin.

— It has probably always been the case that we often search for answers and not sources. But when we get only answers and no sources, we become worse at source criticism and more dependent on the systems.

Research also shows that people themselves underestimate how much trust they actually have in AI answers.

— People often say they only trust AI when it comes to simple questions. But research shows that in everyday life they actually trust AI more than they think, the professor notes.

Vulnerable to influence

How language models are trained and moderated can make them vulnerable to influence, and Sundin urges all users to consider who decides how language models are actually trained, on which texts and for what purpose.

Generative AI also has a tendency to give incorrect answers that look “serious” and correct, which can damage trust in knowledge in society.

— When trust is eroded, there’s a risk that people start distrusting everything, and then they can reason that they might as well believe whatever they want, continues Olof Sundin.

The professor sees a great danger to two necessary prerequisites for being able to exercise democratic rights – critical thinking about sources and the ability to evaluate different voices.

— When the flow of knowledge and information becomes less transparent – when we don’t understand why we encounter what we encounter online – we risk losing that ability. This is an issue we must take seriously, before we let our ‘digital friends’ take over completely, he concludes.

Language models

AI services like ChatGPT are built on language models (such as GPT-4) that are trained on enormous amounts of text. The model predicts which word is likely to come next in a sentence, based on patterns in language usage.

It doesn't "know" what is actually true – it "guesses" what is correct based on probability calculations.
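The "guessing" described above can be made concrete with a toy sketch. This is not how a real large language model works internally (real models use neural networks with billions of parameters, not word-pair counts), and the mini-corpus and function names here are invented for illustration, but the core idea is the same: the next word is chosen by probability, not by knowing what is true.

```python
from collections import Counter, defaultdict

# A tiny invented corpus standing in for the enormous amounts of text
# a real model is trained on.
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Count which word follows which: a crude stand-in for learned patterns.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the most probable next word and its probability: a guess, not a fact."""
    counts = following[word]
    best = counts.most_common(1)[0][0]
    return best, counts[best] / sum(counts.values())

# Asked what follows "sat", the sketch answers ("on", 1.0), simply because
# "on" is the only continuation it has ever seen after "sat".
```

The sketch has no notion of sitting, cats, or truth; it only knows which words have tended to follow which. A real model does the same thing at vastly greater scale and sophistication.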

RAG (Retrieval-Augmented Generation)

RAG combines AI-generated responses with information retrieved from real sources, such as the top three links in a Google search.

The method provides better transparency than AI services that respond entirely without source references, but studies show that users nevertheless click less and less on the links to original sources.
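The fact box above can be sketched in a few lines. The retrieval step below is naive keyword overlap rather than a real search engine or vector database, and the documents and URLs are invented, but the flow is the one RAG systems follow: retrieve sources first, then build a prompt that carries the sources along so the answer can point back to them.

```python
# Invented mini-collection standing in for "the top three links in a Google search".
DOCUMENTS = {
    "https://example.org/rag": "RAG combines retrieval of real sources with generation.",
    "https://example.org/llm": "Language models predict the next word by probability.",
    "https://example.org/krona": "The krona rose against the dollar this year.",
}

def retrieve(query, k=3):
    """Rank documents by how many words they share with the query (a toy relevance score)."""
    words = set(query.lower().split())
    ranked = sorted(
        DOCUMENTS.items(),
        key=lambda item: len(words & set(item[1].lower().rstrip(".").split())),
        reverse=True,
    )
    return ranked[:k]

def build_prompt(query):
    """Attach the retrieved sources so the generated answer can cite them."""
    context = "\n".join(f"[{url}] {text}" for url, text in retrieve(query))
    return f"Answer using only these sources:\n{context}\n\nQuestion: {query}"
```

The transparency gain is visible in `build_prompt`: the source URLs travel with the answer, so a reader could in principle click through, which is exactly the step the studies above say users increasingly skip.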
