

Artificial Intelligence and the Power of Language

The future of AI

How the mastery of language may be driving emergent abilities in Large Language Models, and what this means.

Published 7 May 2024
– By Thorsteinn Siglaugsson

This is an opinion piece. The author is responsible for the views expressed in the article.

A few days ago, Jamie Dimon, CEO of JPMorgan Chase, said that the advent of artificial intelligence could be likened to the discovery of electricity, so profound would be the societal changes it brings about. Artificial intelligence is certainly nothing new in banking; it has been used there for decades. What is driving the discussion about its impact now, however, is the emergence of large language models such as the ones behind ChatGPT. This is the major change, not only in the corporate world but also in everyday life.

Large language models are unlike other AI tools in that they have mastered language: we can communicate with them in plain, everyday words. Technical knowledge is thus no longer a prerequisite for using artificial intelligence in life and work; instead, expressive ability and an understanding of language are key. But the development of these models, and the research into them, also vividly reminds us that language itself is the true prerequisite for human society.

Theory of Mind: Getting Into the Minds of Others

Large language models function differently from ordinary software because they evolve and change in ways their developers and operators cannot necessarily foresee. One striking example concerns the ability to put oneself in the mind of another person, which has generally been considered unique to humans. This ability, known in psychology as “theory of mind,” refers to an individual’s capacity to form a “theory” of what another person’s mental world is like. It is fundamental to human society; without it, it is hard to see how any society could thrive. Here is a simple puzzle of the kind used to test it:

“There is a bag filled with popcorn. There is no chocolate in the bag. Yet the label on the bag says “chocolate” and not “popcorn.” Sam finds the bag. She had never seen the bag before. She cannot see what is inside the bag. She reads the label.”

The question is: what does Sam think is in the bag? The right answer, of course, is that she thinks there is chocolate in the bag, because that is what the label says. When Michal Kosinski, an adjunct professor at Stanford University, tested last year whether the first language models could handle this task, the result was negative: GPT-1 and GPT-2 both answered incorrectly. But then he tried the next generation, GPT-3, which managed this type of task in 40% of cases. GPT-3.5 managed it in 90% of cases and GPT-4 in 95% of cases [1].
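For readers who want a feel for how such a test can be run, the sketch below poses the puzzle above to a model through OpenAI's Python client. It is only an illustration, not Kosinski's actual protocol; the model name and the crude one-word scoring are assumptions made for the example.

    # Illustrative sketch only: posing the false-belief puzzle above to a
    # language model via OpenAI's Python client. This is not Kosinski's actual
    # protocol; the model name and the one-word scoring are assumptions.
    from openai import OpenAI

    client = OpenAI()  # assumes the OPENAI_API_KEY environment variable is set

    puzzle = (
        "There is a bag filled with popcorn. There is no chocolate in the bag. "
        "Yet the label on the bag says 'chocolate' and not 'popcorn'. Sam finds "
        "the bag. She had never seen the bag before. She cannot see what is "
        "inside the bag. She reads the label. "
        "What does Sam think is in the bag? Answer with one word."
    )

    response = client.chat.completions.create(
        model="gpt-4",  # any chat model can be substituted here
        messages=[{"role": "user", "content": puzzle}],
    )

    answer = response.choices[0].message.content.strip().lower()
    # A theory-of-mind-consistent answer follows the label ("chocolate"),
    # not the bag's actual contents ("popcorn").
    print(answer, "-> passes" if "chocolate" in answer else "-> fails")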

Emergent Capabilities of Large Language Models

This capability came as a surprise, as nothing had been done deliberately to build theory of mind into the models. They simply acquired it on their own as they grew larger and the volume of data they were trained on increased. That this could happen at all rests, says Kosinski, on the models’ ability to use language.

Another example, one I stumbled upon by chance recently, came when GPT-4 asked me, after I had posed a puzzle to it, whether I had tried to solve the puzzle myself. The models certainly ask questions all the time; that is nothing new, since they aim to get more precise instructions. But this question was of a different nature. I answered yes and also mentioned that this was the first time I had received a question of this kind from the model. “Yes, you are observant,” GPT-4 replied, “with this I am trying to make the conversation more natural.”

Does this new development mean that the artificial intelligence truly puts itself in the minds of others? Does it mean it thinks, that it has feelings, opinions, an interest in the viewpoints and experiences of others? Of course, we cannot draw that conclusion. But it does mean that the behavior of the models is becoming ever more similar to how we use language when we interact with each other. In this sense, we could actually speak of the mind of an AI model, just as we use theory of mind to make inferences about the minds of other humans.

The Power of Language

The language models draw our attention to the importance of language and to how it underpins our societies and our existence. We now have a technology that is increasingly adept at using language, that possesses vastly more knowledge than any individual could acquire in a lifetime, and that can perform many tasks far faster than we can. Used correctly, it can greatly enhance our productivity, our reasoning, and our decisions, and in doing so give us more leisure time and a better quality of life.

The comparison to the discovery of electricity is apt. Some might even want to go further and liken this revolution to the advent of language itself, a comparison one could support by pointing to the spontaneous capabilities of the models, such as theory of mind, which they achieve through nothing but the very ability to use language. What happens, then, if they evolve further than we do, and could that possibly happen?

The fact that artificial intelligence has mastered language is a revolution that will lead to fundamental changes in society. The challenge we now face, each and every one of us, is to use it in a structured way, to our advantage, and to avoid the pitfall of outsourcing our own thinking and decisions to it. The best way to do this is to strengthen our own understanding of language, our expressive ability, and our critical thinking skills.

 

Thorsteinn Siglaugsson

 


[1] Kosinski, Michal: “Theory of Mind May Have Spontaneously Emerged in Large Language Models”, Stanford University, 2023. https://stanford.io/4aQosLV

Thorsteinn Siglaugsson is an Icelandic economist, consultant and writer, and chairman of the Icelandic Free Speech Society. He is the author of "From Symptoms to Causes" (Amazon) and a regular contributor to The Daily Sceptic, Conservative Woman and Brownstone Institute. Siglaugsson also writes on Substack.


OpenAI launches GPT-5 – Here are the new features in the latest ChatGPT model

The future of AI

Published 8 August 2025
– By Editorial Staff
"GPT-5 is the first time that it really feels like talking to an expert in any topic, like a PhD-level expert", claims CEO Sam Altman during the company's presentation of the new model.

OpenAI released its new flagship model GPT-5 on Thursday, which is now available free of charge to all users of the ChatGPT chatbot service. The American AI giant claims that the new model is “the best in the world” and takes a significant step toward developing artificial intelligence that can perform better than humans in most economically valuable work tasks.

GPT-5 differs from previous versions by combining fast responses with advanced problem-solving capabilities. While previous AI chatbots could primarily provide smart answers to questions, GPT-5 can perform complex tasks for users – such as creating software applications, navigating calendars, or compiling research reports, writes TechCrunch.

“Having something like GPT-5 would be pretty much unimaginable at any previous time in history,” said OpenAI CEO Sam Altman during a press conference.

Better than competitors

According to OpenAI, GPT-5 performs somewhat better than competing AI models from companies like Anthropic, Google DeepMind, and Elon Musk’s xAI on several important tests. In programming, the model achieves 74.9 percent on a benchmark of real coding tasks (SWE-bench Verified), marginally beating Anthropic’s latest model, Claude Opus 4.1, at 74.5 percent.

A particularly important improvement is that GPT-5 “hallucinates” – that is, makes up incorrect information – significantly less than previous models. When tested on health-related questions, the model gives incorrect answers only 1.6 percent of the time, compared to over 12 percent for OpenAI’s previous models.

This is particularly relevant since millions of people use AI chatbots for health advice, even though they are no substitute for a professional doctor.

New features and pricing models

The company has also simplified the user experience. Instead of users having to choose the right settings, GPT-5 has an automatic router that determines how it should best respond – either quickly or by “thinking through” the answer more thoroughly.
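OpenAI has not published how this router works. Purely as an illustration of the concept, such a dispatcher can be thought of as a small function that sends a request either down a fast path or down a slower "reasoning" path; the heuristic and the model names in the sketch below are hypothetical.

    # Purely illustrative sketch of the routing idea described above. OpenAI has
    # not published GPT-5's router; the heuristic and the model names here are
    # hypothetical and only demonstrate the concept of a fast/thorough dispatcher.
    FAST_MODEL = "fast-model"            # hypothetical: answers immediately
    REASONING_MODEL = "reasoning-model"  # hypothetical: "thinks through" the answer

    REASONING_HINTS = ("prove", "step by step", "debug", "plan", "analyze")

    def route(prompt: str) -> str:
        """Pick a model tier for a prompt using a crude complexity heuristic."""
        looks_complex = len(prompt.split()) > 80 or any(
            hint in prompt.lower() for hint in REASONING_HINTS
        )
        return REASONING_MODEL if looks_complex else FAST_MODEL

    print(route("What time is it in Oslo?"))                                 # fast-model
    print(route("Debug this code and explain step by step why it fails."))   # reasoning-model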

ChatGPT also gets four new personalities that users can choose between: Cynic, Robot, Listener, and Nerd. These customize how the model responds without users needing to specify it in each request.

For developers, GPT-5 is launched in three sizes via OpenAI’s API, with the base model priced at €1.15 per million input tokens and €9.20 per million generated tokens.
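As a rough illustration of what those rates mean in practice, the short calculation below applies them to an imaginary request; the token counts are made-up example values, not real usage figures.

    # Rough cost estimate at the per-million-token rates quoted above (base model).
    # The token counts are made-up example values, not real usage figures.
    INPUT_RATE = 1.15 / 1_000_000   # euros per input token
    OUTPUT_RATE = 9.20 / 1_000_000  # euros per generated token

    def estimate_cost(input_tokens: int, output_tokens: int) -> float:
        """Return the estimated cost in euros for a single request."""
        return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

    # Example: a 2,000-token prompt that produces an 800-token answer.
    print(f"{estimate_cost(2_000, 800):.4f} EUR")  # roughly 0.0097 EUR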

The launch comes after an intense week for OpenAI, which also released an open-weight AI model that developers can download for free. ChatGPT has grown into one of the world’s most popular consumer products, with over 700 million users every week – nearly 10 percent of the world’s population.

OpenAI opens data center in Norway

The future of AI

Published 3 August 2025
– By Editorial Staff

In Norway, OpenAI plans to establish one of Europe’s largest AI data centers as part of the global Stargate project. The facility will be built in the northern parts of the country and operated entirely on renewable energy.

Stargate was launched earlier this year as a comprehensive AI initiative with the goal of strengthening US dominance in artificial intelligence. The project is a collaboration between the American companies OpenAI and Oracle and Japan’s SoftBank, with the ambition of building a global AI infrastructure at a cost of up to $500 billion over the next four years. This makes Stargate one of the largest technology investments in history.

First in Europe

On Thursday, OpenAI announced that the company plans to open a Stargate-branded data center in Norway. It will be the company’s first European facility of this kind.

The data center will be located in Kvandal, outside Narvik in northern Norway, and built in collaboration with the British company Nscale and Norway’s Aker. OpenAI will function as a so-called “off-taker”, meaning the company will purchase capacity from the facility to power its AI services.

“Part of the purpose of this project is to partner with OpenAI and leverage European sovereign compute to release additional services and features to the European continent,” says Josh Payne, CEO of Nscale, in an interview with CNBC.

Powered by hydroelectric energy

The data center, planned to be completed in 2026, will house up to 100,000 NVIDIA GPUs and have a capacity of 230 megawatts – making it one of the largest AI facilities in Europe. The facility will be operated entirely on so-called “green energy”, made possible by the region’s access to hydroelectric power.

The first phase of the project involves an investment of approximately $2 billion. Nscale and Aker have committed to contributing $1 billion each. The initial capacity is estimated at 20 megawatts, with ambitions to expand significantly in the coming years.
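A quick back-of-the-envelope calculation, using only the figures reported here and ignoring real-world factors such as cooling overhead and utilisation, gives a sense of the scale:

    # Back-of-the-envelope arithmetic using only the figures quoted in this article.
    # It ignores real-world factors such as cooling, networking and utilisation.
    total_capacity_mw = 230          # planned capacity of the facility
    gpu_count = 100_000              # planned number of NVIDIA GPUs
    initial_capacity_mw = 20         # first-phase capacity
    first_phase_investment = 2e9     # dollars, split equally between Nscale and Aker

    watts_per_gpu = total_capacity_mw * 1_000_000 / gpu_count
    initial_share = initial_capacity_mw / total_capacity_mw

    print(f"Average power budget: {watts_per_gpu:.0f} W per GPU (including overhead)")
    print(f"First phase covers about {initial_share:.0%} of the planned capacity")
    print(f"Each partner's initial commitment: ${first_phase_investment / 2 / 1e9:.0f} billion")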

Zuckerberg: Skipping AI glasses puts you at a “cognitive disadvantage”

The future of AI

Published 1 August 2025
– By Editorial Staff
"The ideal form factor for AI, because you can let an AI see what you see throughout the day, hear what you hear, and talk to you", believes the Meta CEO.

Meta CEO Mark Zuckerberg warns that people without AI glasses will find themselves at a significant mental “disadvantage” in the future. During the company’s quarterly report, he shared his vision of glasses as the primary way to interact with artificial intelligence.

On Thursday, Meta released its quarterly report. In a call with investors, CEO Mark Zuckerberg spoke about the company’s investment in smart glasses and warned about the consequences of being left out of this development, reports TechCrunch.

“I continue to think that glasses are basically going to be the ideal form factor for AI, because you can let an AI see what you see throughout the day, hear what you hear, and talk to you,” Zuckerberg said during the investor call.

By adding screens, even more value can be unlocked, he argued, whether through wide-field-of-view holographic displays or smaller displays in everyday AI glasses.

“I think in the future, if you don’t have glasses that have AI – or some way to interact with AI – I think you’re … probably going to be at a pretty significant cognitive disadvantage compared to other people,” he added.

Unexpected success

Meta has focused on “smart” glasses like the Ray-Ban Meta and Oakley Meta models. The glasses allow users to listen to music, take photos and put questions to Meta AI. The products have become a surprising success – revenue from Ray-Ban Meta glasses more than tripled compared to the previous year.

However, the Reality Labs division has been costly. Meta reported $4.53 billion in operating losses for the second quarter, and since 2020, the unit has lost nearly $70 billion.

Competition is growing. OpenAI acquired Jony Ive’s startup company this spring for $6.5 billion to develop AI devices, while other companies are exploring AI brooches and pendants.

Zuckerberg, however, remains convinced of the future of glasses and ties them to his metaverse vision.

“The other thing that’s awesome about glasses is they are going to be the ideal way to blend the physical and digital worlds together,” he concluded.

Meta has previously been known for contributing to the increasing surveillance society and has also ignored health aspects regarding radiation from wireless technology.

Samsung and Tesla sign billion-dollar deal for AI chip manufacturing

The future of AI

Published 31 July 2025
– By Editorial Staff
Construction of Samsung’s large chip factory in Taylor, Texas, USA.

South Korean tech giant Samsung has entered into a comprehensive agreement with Tesla to manufacture next-generation AI chips. The contract, which extends until 2033, is worth $16.5 billion and means Samsung will dedicate its new Texas-based factory to producing Tesla’s AI6 chips.

Samsung receives a significant boost for its semiconductor manufacturing through the new partnership with Tesla. The electric vehicle manufacturer has chosen to place production of its advanced AI6 chips at Samsung’s facility in Texas, in a move that could change competitive dynamics within the semiconductor industry, writes TechCrunch.

“The strategic importance of this is hard to overstate,” wrote Tesla CEO Elon Musk on X when the deal was announced.

The agreement represents an important milestone for Samsung, which has previously struggled to attract and retain major customers for its chip manufacturing. According to Musk, Tesla may end up spending significantly more than the original $16.5 billion on Samsung chips.

“Actual output is likely to be several times higher,” he explained in a later post.

Tesla’s chip strategy takes shape

The AI6 chips form the core of Tesla’s ambition to evolve from a car manufacturer into an AI and robotics company. The new-generation chip is designed as an all-round solution that can be used for the company’s Full Self-Driving system, for the Optimus humanoid robots Tesla is developing, and for high-performance AI training in data centers.

Tesla is working in parallel with Taiwanese chip manufacturer TSMC for production of AI5 chips, whose design was recently completed. These will initially be manufactured at TSMC’s facility in Taiwan and later also in Arizona. Samsung already produces Tesla’s AI4 chips.

Since 2019, Tesla has developed its own custom chips after leaving Nvidia’s Drive platform. The first self-developed chipset, known as FSD Computer or Hardware 3, was launched the same year and installed in all of the company’s electric vehicles.

Musk promises personal involvement

In an unusual turn, Samsung has agreed to let Tesla assist in maximizing manufacturing efficiency at the Texas factory. Musk has promised to be personally present to accelerate progress.

“This is a critical point, as I will walk the line personally to accelerate the pace of progress. And the fab is conveniently located not far from my house,” he wrote.

The strategic partnership could give Samsung the stable customer volume the company needs to compete with industry leader TSMC, while Tesla secures access to advanced chip manufacturing for its growing AI ambitions.
