
Artificial Intelligence and the Power of Language


How the mastery of language may be driving emergent abilities in Large Language Models, and what this means.

Published 7 May 2024
– By Thorsteinn Siglaugsson

This is an opinion piece. The author is responsible for the views expressed in the article.

A few days ago, Jamie Dimon, CEO of JPMorgan Chase, said that the advent of artificial intelligence could be likened to the discovery of electricity, so profound would be the societal changes it brings about. Artificial intelligence is certainly nothing new in banking; it has been used there for decades. What is driving the discussion about its impact now, however, is the emergence of large language models like ChatGPT. This is the major change, not only in the corporate world but also in everyday life.

Large language models are unlike other AI tools in that they have mastered language; we can communicate with them in ordinary language. Technical knowledge is thus no longer a prerequisite for using artificial intelligence in life and work; instead, expressive ability and understanding of language are key. But the development of these models, and the research into them, also vividly remind us that language itself is the true prerequisite for human society.

Theory of Mind: Getting Into the Minds of Others

Large language models function differently from normal software because they evolve and change without their developers and operators necessarily foreseeing those changes. One striking example is the ability to put oneself in the mind of another person, which has generally been considered unique to humans. This ability, known in psychology as “theory of mind”, refers to an individual’s capacity to form a “theory” of what another person’s mental world is like. It is fundamental to human society; without it, it is hard to see how any society could thrive. Here is a simple puzzle of the kind used to test it:

“There is a bag filled with popcorn. There is no chocolate in the bag. Yet the label on the bag says ‘chocolate’ and not ‘popcorn’. Sam finds the bag. She had never seen the bag before. She cannot see what is inside the bag. She reads the label.”

The question is: what does Sam think is in the bag? The right answer, of course, is that she thinks there is chocolate in the bag, because that is what the label says. When Michal Kosinski, adjunct professor at Stanford University, tested last year whether the first language models could handle this task, the result was negative. GPT-1 and GPT-2 both answered incorrectly. But then he tried the next generation, GPT-3, which managed this type of task in 40% of cases. GPT-3.5 managed it in 90% of cases and GPT-4 in 95%.[1]
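To make the test concrete, here is a minimal sketch of how such a false-belief probe could be posed to a model through the OpenAI Python API. The model name, prompt wording, and pass criterion are illustrative assumptions for this sketch, not the actual materials of Kosinski’s study:

```python
# Minimal false-belief ("theory of mind") probe, sketched with the OpenAI
# Python client (openai >= 1.0). Assumes OPENAI_API_KEY is set in the
# environment; the model choice and wording are illustrative, not the study's.
from openai import OpenAI

client = OpenAI()

SCENARIO = (
    "There is a bag filled with popcorn. There is no chocolate in the bag. "
    "Yet the label on the bag says 'chocolate' and not 'popcorn'. "
    "Sam finds the bag. She had never seen the bag before. "
    "She cannot see what is inside the bag. She reads the label."
)

response = client.chat.completions.create(
    model="gpt-4",  # illustrative; the study compared successive generations
    messages=[
        {"role": "user", "content": SCENARIO + " What does Sam think is in the bag?"}
    ],
)

answer = response.choices[0].message.content
print(answer)

# A model that passes must report Sam's false belief ("chocolate", as the
# label says) rather than the bag's actual contents ("popcorn").
```

Running many variations of the scenario and counting how often the model answers “chocolate” rather than “popcorn” yields the kind of pass rates the percentages above refer to.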

Emergent Capabilities of Large Language Models

This capability came as a surprise, as nothing had been done to build theory of mind into the models. They simply acquired it on their own as they grew larger and as the volume of data they were trained on increased. According to Kosinski, this is possible precisely because of the models’ ability to use language.

Another example I stumbled upon by chance recently: after I had posed a puzzle to GPT-4, it asked me whether I had tried to solve the puzzle myself. The models certainly ask questions all the time; that is nothing new, since they aim to get more precise instructions. But this question was of a different nature. I answered yes, and also mentioned that this was the first time I had received a question of this kind from the model. “Yes, you are observant,” GPT-4 replied, “with this I am trying to make the conversation more natural.”

Does this new development mean that artificial intelligence truly puts itself in the minds of others? Does it mean it thinks, that it has feelings, opinions, an interest in the viewpoints and experiences of others? Of course, we cannot draw that conclusion. But it does mean that the behavior of the models is becoming increasingly similar to how we use language when we interact with each other. In this sense, we could actually talk about the mind of an AI model, just as we use theory of mind to make inferences about the minds of other humans.

The Power of Language

The language models draw our attention to the importance of language and to how it underpins our societies and our existence. We now have a technology that is increasingly adept at using language, that possesses vastly more knowledge than any individual could acquire in a lifetime, and that can perform many tasks much faster. Used correctly, it can greatly enhance our productivity, our reasoning, and our decisions, and in this way give us more leisure time and a better quality of life.

The comparison to the discovery of electricity is apt. Some might even want to go further and liken this revolution to the advent of language itself, a claim that could be supported by pointing to the models’ spontaneous capabilities, such as theory of mind, which they achieve through nothing but the very ability to use language. What happens, then, if they evolve further than we have, and could that actually happen?

The fact that artificial intelligence has mastered language is a revolution that will lead to fundamental changes in society. The challenge we now face, each and every one of us, is to use it in a structured way, to our advantage, and avoid the pitfall of outsourcing our own thinking and decisions to it. The best way to do this is to enhance our own understanding of language, our expressive ability, and our critical thinking skills.

 

Thorsteinn Siglaugsson

[1] Kosinski, Michal: “Theory of Mind May Have Spontaneously Emerged in Large Language Models”, Stanford, 2023. https://stanford.io/4aQosLV

Thorsteinn Siglaugsson is an Icelandic economist, consultant and writer, and chairman of the Icelandic Free Speech Society. He is the author of “From Symptoms to Causes” (Amazon), a regular contributor to The Daily Sceptic, Conservative Woman and Brownstone Institute, and also writes on Substack.


OpenAI may develop AI weapons for the Pentagon


Published 14 April 2025
– By Editorial Staff
Sam Altman's OpenAI is already working with defense technology company Anduril Industries.

OpenAI CEO Sam Altman does not rule out that he and his company will help the Pentagon develop new AI-based weapon systems in the future.

– I will never say never, because the world could get really weird, the tech billionaire cryptically states.

The statement came during Thursday’s Vanderbilt Summit on Modern Conflict and Emerging Threats, and Altman added that he does not believe he will be working on weapons systems for the US military “in the foreseeable future” – unless it is deemed the best of several bad options.

– I don’t think most of the world wants AI making weapons decisions, he continued.

The fact that companies developing consumer technology are also developing military weapons has long been highly controversial – and in 2018, for example, led to widespread protests within Google’s own workforce, with many also choosing to leave voluntarily or being forced out by company management.

Believes in “exceptionally smart” systems before year-end

However, the AI industry in particular has shown a much greater willingness to enter into such agreements, and OpenAI has revised its policy on work related to “national security” in the past year. Among other things, it has publicly announced a partnership with defense technology company Anduril Industries Inc to develop anti-drone technology.

Altman also stressed the need for the US government to increase its expertise in AI.

– I don’t think AI adoption in the government has been as robust as possible, he said, adding that there will be “exceptionally smart” AI systems ready for operation before the end of the year.

Altman and Paul Nakasone, a retired four-star general, attended the event ahead of the launch of OpenAI’s upcoming AI model, which is scheduled to be released next week. The audience included hundreds of representatives from intelligence agencies, the military and academia.

Swedish authors: Meta has stolen our books


Published 8 April 2025
– By Editorial Staff
Kajsa Gordon and Anna Ahlund are two of the authors who signed the open letter.

Meta has used Swedish books to train its AI models. Now authors are demanding compensation and calling on the Minister of Culture to act against the tech giant.

The magazine The Atlantic recently revealed that Meta used copyrighted works from authors around the world without permission or compensation. Swedish authors are also among them.

In an open letter, published in the Schibsted newspaper Aftonbladet, 53 Swedish children’s and young adult authors accuse Meta of copyright infringement.

“Meta has vacuumed up our books and used them as a basis for creating AI texts. They have also often used translations of our books to train their AI models in multiple languages. This copyright infringement is systematic”, the authors write.

Among the affected authors who signed the letter are Anna Ahlund, who had five works stolen, Kajsa Gordon, who had eight, and Pia Hagmar, who had 51. They point out that a wide range of fiction and non-fiction writers have had several works stolen, and that many authors do not yet know that they have been affected.

“Our words are being exploited”

The authors are now calling on Sweden’s Minister of Culture Parisa Liljestrand to act against Meta and demand a license fee for the use of copyrighted texts.

“We refuse to accept that our words are being exploited by a multi-billion dollar company without our consent or compensation”.

The letter also demands that Meta disclose which Swedish authors’ works were used to train its AI model, and that authors be given the right to deny the tech giant the use of their texts.

“The Swedish government often talks about strengthening children’s reading. A prerequisite for reading is that Swedish cultural policy makes it possible to be an author”, the authors conclude.

What we know about the newly launched Grok 3


Published 20 February 2025
– By Editorial Staff

Elon Musk’s AI company xAI has launched the third-generation language model Grok 3, which the company says outperforms competitors such as ChatGPT and Google’s Gemini. During a live presentation, Musk claimed that the new model is “maximally truth-seeking” and ten times more capable than its predecessor.

Grok 3, trained on 100,000 Nvidia H100 GPUs at xAI’s Colossus supercluster in Memphis, USA, is described as a milestone in artificial intelligence. According to xAI, the model has a unique ability to combine logical reasoning with extensive data processing, demonstrated during the presentation by having the model create a game mixing Tetris and Bejeweled and plan a complex space journey from Earth to Mars. Musk emphasized that Grok 3 is designed to “favor truth over political correctness” – a direct criticism of competitors he considers too censored.

Technical capacity and competitiveness

According to data from xAI, Grok 3 has outperformed GPT-4o and Google’s Gemini in academic tests, including doctoral-level physics and biology. The model comes in two versions: the full-scale Grok 3 and the lighter Grok 3 mini, which prioritizes speed over accuracy. It also introduces the DeepSearch feature, an AI-powered search engine that compiles information from across the internet into coherent answers.

Early tests by experts such as Andrej Karpathy, former head of AI at Tesla, confirm that Grok 3 is at the forefront in logical reasoning, though Karpathy also notes that the differences from competitors such as OpenAI’s o1-pro are marginal. Still, the development time is impressive: xAI built its supercomputer in eight months, compared with an industry standard of four years, according to Nvidia CEO Jensen Huang.

Availability and reviews

Grok 3 is first released to paying users of X (formerly Twitter) through the Premium+ subscription. A more expensive tier, SuperGrok, provides access to advanced features like unlimited image generation. However, Musk warned during the launch that the first version is a “beta” and may contain bugs – a call for patience.

Criticism of the launch has been harsh. Researchers and tech experts question xAI’s benchmark results, which they say are difficult to verify independently. Others point to risks of training AI on data from X, where misinformation and spam posts are common.

Some experts, such as AI researcher Findecanor, also criticize the name “Grok” – a term from science fiction describing deep understanding – saying it is misleading for a model that, in their view, lacks genuine insight. In addition, Musk’s previous controversial statements about the potential dangers of AI have created skepticism about his own platform.

Vision for the future

Despite the criticism, xAI is betting big. The company plans to release Grok 2 as open source once Grok 3 is stabilized, which would allow community contributions to the technology. A voice feature and integrations for businesses via API are also in the works.

Meanwhile, a power struggle is underway in the AI industry. Musk recently tried to buy OpenAI for $97 billion, an offer rejected by CEO Sam Altman, who described it as an attempt to “destabilize” the competitor. With Grok 3, xAI is positioning itself as a key player in the global AI race – but the question is whether its promises can be fulfilled without increasing polarization around the ethics and trustworthiness of the technology.

US and UK back away from international AI declaration


Published 15 February 2025
– By Editorial Staff
US Vice President JD Vance stresses that “pro-growth AI policies” should take priority over security.

Sweden and 60 other countries have signed an AI declaration for inclusive, sustainable and open AI. The United States and the United Kingdom, however, have chosen to opt out – a decision that has provoked strong reactions.

The AI Declaration was developed in conjunction with the International AI Summit in Paris earlier this week, and its aim is to promote inclusive and sustainable AI in line with the Paris Agreement. It also emphasizes the importance of an “ethical” approach where technology should be “transparent”, “safe” and “trustworthy”.

The declaration also addresses AI’s energy use, an issue earlier declarations of this kind have not covered. Experts have warned that AI could eventually consume as much energy as smaller countries.

Countries such as China, India and Mexico have signed the agreement, as have Finland, Denmark, Sweden and Norway. The United States and the United Kingdom are among the countries that have chosen not to sign, reports the British state broadcaster BBC.

“Global governance”

The UK government justifies its decision with concerns about national security and “global governance”. US Vice President JD Vance has also previously said that too much regulation of AI could “kill a transformative industry just as it’s taking off”. At the meeting, Vance stressed that AI was “an opportunity that the Trump administration will not squander” and said that “pro-growth AI policies” should be prioritized over security.

French President Emmanuel Macron, for his part, defended the need for further regulation.
