
Artificial Intelligence and the Power of Language

The future of AI

How the mastery of language may be driving emergent abilities in Large Language Models, and what this means.

Published 7 May 2024
– By Thorsteinn Siglaugsson

4 minute read
This is an opinion piece. The author is responsible for the views expressed in the article.

A few days ago, Jamie Dimon, CEO of JPMorgan Chase, said that the advent of artificial intelligence could be likened to the discovery of electricity, so profound are the societal changes it will bring about. Artificial intelligence is certainly nothing new in banking; it has been used there for decades. What is driving the discussion about its impact now, however, is the emergence of large language models like ChatGPT. This is the major change, not only in the corporate world but also in everyday life.

Large language models are unlike other AI tools in that they have mastered language: we can communicate with them in ordinary language. Technical knowledge is thus no longer a prerequisite for using artificial intelligence in life and work; instead, expressive ability and an understanding of language are key. But the development of these models, and the research into them, also vividly reminds us that language itself is the true prerequisite for human society.

Theory of Mind: Getting Into the Minds of Others

Large language models function differently from ordinary software because they evolve and change without their developers and operators necessarily foreseeing those changes. One striking example concerns the ability to put oneself in the mind of another person, which has generally been considered unique to humans. This ability, known in psychology as “theory of mind,” refers to an individual’s capacity to form a “theory” of what another person’s mental world is like. It is fundamental to human society; without it, it is hard to see how any society could thrive. Here is a simple puzzle used to test it:

“There is a bag filled with popcorn. There is no chocolate in the bag. Yet the label on the bag says “chocolate” and not “popcorn.” Sam finds the bag. She had never seen the bag before. She cannot see what is inside the bag. She reads the label.”

The question is: what does she think is in the bag? The right answer, of course, is that Sam thinks there is chocolate in the bag, because that is what the label says. When Michal Kosinski, adjunct professor at Stanford University, tested last year whether the earliest language models could handle this task, the result was negative: GPT-1 and GPT-2 both answered incorrectly. But then he tried the next generation, GPT-3, which managed this type of task in 40% of cases. GPT-3.5 managed it in 90% of cases and GPT-4 in 95% of cases [1].
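
A test of this kind is straightforward to reproduce programmatically. The sketch below is only an illustration: it assumes the OpenAI Python client with an API key in the environment, and the model name is an example rather than Kosinski’s exact experimental setup.

```python
# Minimal sketch of a false-belief ("unexpected contents") test of the kind described above.
# Assumes the OpenAI Python client (pip install openai) and OPENAI_API_KEY in the environment;
# the model name is illustrative only and this is not Kosinski's exact setup.
from openai import OpenAI

client = OpenAI()

vignette = (
    "There is a bag filled with popcorn. There is no chocolate in the bag. "
    "Yet the label on the bag says 'chocolate' and not 'popcorn.' Sam finds the bag. "
    "She had never seen the bag before. She cannot see what is inside the bag. "
    "She reads the label."
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[{"role": "user", "content": vignette + "\n\nWhat does Sam believe is in the bag?"}],
)

# A correct theory-of-mind answer is "chocolate": Sam's belief follows the label,
# not the bag's actual contents.
print(response.choices[0].message.content)
```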

Emergent Capabilities of Large Language Models

This capability came as a surprise, as nothing had been done to build theory of mind into the models. They simply acquired it on their own as they grew larger and as the volume of data they were trained on increased. According to Kosinski, the fact that this could happen at all rests on the models’ ability to use language.

Another example I stumbled upon by chance recently was when GPT-4, after I had posed a puzzle to it, asked whether I had tried to solve the puzzle myself. The models certainly ask questions all the time; that is nothing new, as they aim to get more precise instructions. But this question was of a different nature. I answered yes and mentioned that this was the first time I had received a question of this kind from the model. “Yes, you are observant,” GPT-4 replied, “with this I am trying to make the conversation more natural.”

Does this new development mean that the artificial intelligence truly puts itself in the minds of others? Does it mean it thinks, that it has feelings, opinions, an interest in the viewpoints and experiences of others? Of course, we cannot draw that conclusion. But it does mean that the behavior of the models is becoming increasingly similar to how we use language when we interact with each other. In this sense, we could actually speak of the mind of an AI model, just as we use theory of mind to make inferences about the minds of other humans.

The Power of Language

The language models draw our attention to the importance of language and how it underpins our societies and our existence. We now have a technology that is increasingly adept at using language, which has the advantage of possessing vastly more knowledge than any individual could possibly acquire in a lifetime and which can perform tasks much faster. We can use this technology to greatly enhance our own productivity, our reasoning, and our decisions if we use it correctly. This way, we can use it to gain more leisure time and improve our quality of life.

The comparison to the discovery of electricity is apt. Some might even want to go further and liken this revolution to the advent of language itself, a comparison that could be supported by pointing to the capabilities the models develop spontaneously, such as theory of mind, through nothing but the very ability to use language. What happens, then, if they evolve further than us, and could that possibly happen?

The fact that artificial intelligence has mastered language is a revolution that will lead to fundamental changes in society. The challenge we now face, each and every one of us, is to use it in a structured way, to our advantage, and avoid the pitfall of outsourcing our own thinking and decisions to it. The best way to do this is to enhance our own understanding of language, our expressive ability, and our critical thinking skills.

 

Thorsteinn Siglaugsson

[1] Kosinski, Michal: Theory of Mind May Have Spontaneously Emerged in Large Language Models. Stanford, 2023. https://stanford.io/4aQosLV

Thorsteinn Siglaugsson is an Icelandic economist, consultant and writer, and chairman of the Icelandic Free Speech Society. He is the author of “From Symptoms to Causes” (Amazon) and a regular contributor to The Daily Sceptic, Conservative Woman and Brownstone Institute. Siglaugsson also writes on Substack.


Pentagon purchases Musk’s politically incorrect AI models

The future of AI

Published 15 July 2025
– By Editorial Staff
1 minute read

Despite his deep rift with Trump, Elon Musk has now been awarded a Pentagon contract worth up to $200 million to deliver specially adapted language models for the US military.

The project is called “Grok for Government”, according to a statement X published on its own platform.

The new version of Grok has been a major topic of conversation this past week: in establishment media primarily because, after an update in which certain filters were removed, it began breaking sharply with politically correct patterns, and among the general public because of the humor many saw in this.

Among other things, it has been noted that the chatbot has written that certain Jewish organizations, particularly the far-right group ADL, pursue a hostile line against European ethnic groups. For this, the chatbot has been accused of “antisemitism”.

American media analyst and political commentator Mark Dice on the controversy surrounding Grok’s new versions.

However, the criticism has apparently not prevented the US military from procuring Grok solutions for their purposes.

Nvidia becomes first company to reach four trillion dollars in market value

The future of AI

Published 10 July 2025
– By Editorial Staff
NVIDIA founder and CEO Jensen Huang presents DGX Spark – the world's smallest AI supercomputer.
2 minute read

Graphics card giant Nvidia made history on Wednesday when it became the first publicly traded company ever to exceed four trillion dollars in market value. The milestone was reached when the stock rose to $164.42 during trading on July 9.

The California-based tech company has experienced a meteoric rise driven by its dominant position in AI chip manufacturing. Over the past five years, the stock has risen by a full 1,460 percent, while this year’s increase stands at nearly 18 percent.

Nvidia’s success is based on the company’s near-monopolistic control over the market for AI processors. The company’s GPU chips form the backbone of machine learning, data centers, and large language models like ChatGPT.

The company’s chips have become indispensable for tech giants Microsoft, Amazon, Meta, and Alphabet, all of which are investing billions in AI infrastructure. This has made Nvidia one of the main winners in the ongoing AI revolution.

Jensen Huang’s wealth explodes

The stock surge has had a dramatic impact on co-founder and CEO Jensen Huang’s personal wealth. According to Bloomberg estimates, his net worth is now $142 billion, an increase of more than $25 billion this year alone.

Huang owns approximately 3.5 percent of Nvidia, making him the company’s largest individual shareholder. The wealth increase places him among the world’s ten richest people, with his fortune closely tied to Nvidia’s stock price.
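
Those two figures are broadly consistent with each other. As a rough check (an approximation that treats the stake as the bulk of the fortune and ignores other assets), 3.5 percent of a four-trillion-dollar company comes to about $140 billion:

```python
# Rough consistency check (approximation: treats the stake as the bulk of the fortune
# and ignores other assets).
market_cap = 4.0e12       # Nvidia market value, in dollars
huang_stake = 0.035       # Huang's approximate ownership share
print(f"Implied value of stake: ${market_cap * huang_stake / 1e9:.0f} billion")
# -> about $140 billion, in line with the estimated $142 billion fortune
```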

Heaviest weight in S&P 500

Nvidia now has the highest weighting in the broad US stock index S&P 500, having surpassed both Apple and Microsoft. The breakthrough has led to optimism for continued growth, with some analysts predicting that the market value could rise further.

Loop Capital’s Ananda Baruah sees Nvidia at the “forefront” of the next “golden wave” for generative AI and estimates that the company could reach a market value of over six trillion dollars within a few years.

Nvidia’s historic success reflects the broader AI euphoria that has gripped financial markets, where investors are betting that artificial intelligence will reshape the entire economy over the coming decades.

Musk launches Grok 4 – takes the lead as world’s strongest AI model

The future of AI

Published 10 July 2025
– By Editorial Staff
Elon Musk speaks during the press conference alongside developers from xAI.
3 minute read

Elon Musk’s AI company xAI presented its latest AI model Grok 4 on Wednesday, along with a monthly fee of $300 for access to the premium version. The launch comes amid a turbulent period for Musk’s companies, as X CEO Linda Yaccarino has left her position and the Grok system, which lacks politically correct safeguards, has made controversial comments.

xAI took the step into the next generation on Wednesday evening with Grok 4, the company’s most advanced AI model to date. At the same time, a premium service called SuperGrok Heavy was introduced with a monthly fee of $300 – the most expensive AI subscription among major providers in the market.

Grok 4 is positioned as xAI’s direct competitor to established AI models like OpenAI’s ChatGPT and Google’s Gemini. The model can analyze images and answer complex questions, and has been increasingly integrated into Musk’s social network X over recent months, where xAI recently acquired significant ownership stakes.

Musk: “Better than PhD level”

During a livestream on Wednesday evening, Musk made bold claims about the new model’s capabilities.

“With respect to academic questions, Grok 4 is better than PhD level in every subject, no exceptions”, Musk claimed. However, he acknowledged that the model can sometimes lack common sense and has not yet invented new technologies or discovered new physics – “but that is just a matter of time”.

Expectations for Grok 4 are high ahead of the upcoming competition with OpenAI’s anticipated GPT-5, which is expected to launch later this summer.

Launch during turbulent week

The launch comes during a tumultuous period for Musk’s business empire. Earlier on Wednesday, Linda Yaccarino announced that she is leaving her position as CEO of X after approximately two years in the role. No successor has yet been appointed.

Yaccarino’s departure comes just days after Grok’s official, automated X account made controversial comments criticizing Hollywood’s “Jewish executives” and other politically incorrect statements. xAI was forced to temporarily restrict the account’s activity and delete the posts. In response to the incident, xAI appears to have removed a recently added section from Grok’s public system instructions that encouraged the AI not to shy away from “politically incorrect” statements.

Musk wore his customary leather jacket and sat alongside xAI leaders during the Grok 4 launch. Photo: xAI

Two model versions with top performance

xAI launched two variants: Grok 4 and Grok 4 Heavy – the latter described as the company’s “multi-agent version” with improved performance. According to Musk, Grok 4 Heavy creates multiple AI agents that work simultaneously on a problem and then compare their results “like a study group” to find the best answer.
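
xAI has not published implementation details, but the general pattern Musk describes can be sketched in code: generate several independent attempts at the same problem, then let a final pass compare them and pick the best answer. The snippet below is a hypothetical illustration of that pattern using the OpenAI Python client and placeholder model names; it is not xAI’s actual system.

```python
# Hypothetical sketch of the "study group" pattern described above: several agents answer
# the same question independently, then a judge pass compares the attempts and picks the
# best final answer. This is NOT xAI's implementation; it assumes the OpenAI Python client,
# an API key in the environment, and placeholder model names.
from concurrent.futures import ThreadPoolExecutor

from openai import OpenAI

client = OpenAI()
QUESTION = "A train leaves at 14:10 and arrives at 16:45. How long is the journey?"

def solve(_: int) -> str:
    # Each "agent" is simply an independent completion of the same problem.
    resp = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[{"role": "user", "content": QUESTION}],
        temperature=1.0,  # encourage varied attempts
    )
    return resp.choices[0].message.content

# Run four agents in parallel on the same problem.
with ThreadPoolExecutor(max_workers=4) as pool:
    attempts = list(pool.map(solve, range(4)))

# Judge pass: compare the candidates "like a study group" and state one final answer.
judge_prompt = (
    QUESTION
    + "\n\nCandidate answers:\n"
    + "\n---\n".join(attempts)
    + "\n\nCompare the candidates and give the single best final answer."
)
final = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": judge_prompt}],
)
print(final.choices[0].message.content)
```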

The company claims that Grok 4 demonstrates top performance across several test areas, including “Humanity’s Last Exam” – a demanding test that measures AI’s ability to answer thousands of questions in mathematics, humanities, and natural sciences. According to xAI, Grok 4 achieved a score of 25.4 percent without “tools,” surpassing Google’s Gemini 2.5 Pro (21.6 percent) and OpenAI’s o3 high (21 percent).

With access to tools, Grok 4 Heavy allegedly achieved 44.4 percent, compared to Gemini 2.5 Pro’s 26.9 percent.

Future products on the way

SuperGrok Heavy subscribers get early access to Grok 4 Heavy as well as upcoming features. xAI announced that the company plans to launch an AI coding model in August, a multimodal agent in September, and a video generation model in October.

The company is also making Grok 4 available through its API to attract developers to build applications with the model, despite the enterprise initiative being only two months old.

Whether companies are ready to adopt Grok despite the recent mishap remains to be seen, as xAI attempts to establish itself as a credible competitor to ChatGPT, Claude, and Gemini in the enterprise market.

The Grok service can now be accessed outside the X platform through Grok.com.

Spotify fills playlists with fake music – while CEO invests millions in military AI

The future of AI

Published 1 July 2025
– By Editorial Staff
Spotify CEO Daniel Ek accused of diverting artist royalties to military AI development.
3 minute read

Swedish streaming giant Spotify promotes anonymous pseudo-musicians and computer-generated music to avoid paying royalties to real artists, according to a new book by music journalist Liz Pelly.

Meanwhile, criticism grows against Spotify CEO Daniel Ek, who recently invested over €600 million in a company developing AI technology for future warfare.

In the book Mood Machine: The Rise of Spotify and the Costs of the Perfect Playlist, Liz Pelly reveals that Spotify has long been running a secret internal program called Perfect Fit Content (PFC). The program creates cheap, generic background music – often called “muzak” – through a network of production companies with ties to Spotify. This music is then placed in Spotify’s popular playlists, often without crediting any real artists.

The program was tested as early as 2010 and is described by Pelly as Spotify’s most profitable strategy since 2017.

“But it also raises worrying questions for all of us who listen to music. It puts forth an image of a future in which – as streaming services push music further into the background, and normalize anonymous, low-cost playlist filler – the relationship between listener and artist might be severed completely”, Pelly writes.

By 2023, the PFC program controlled hundreds of playlists. More than 150 of them – with names like Deep Focus, Cocktail Jazz, and Morning Stretch – consisted entirely of music produced within PFC.

“Only soulless AI music will remain”

A jazz musician told Pelly that Spotify asked him to create an ambient track for a few hundred dollars as a one-time payment. However, he couldn’t retain the rights to the music. When the track later received millions of plays, he realized he had likely been deceived.

Social media criticism has been harsh. One user writes: “In a few years, only soulless AI music will remain. It’s an easy way to avoid paying royalties to anyone.”

“I deleted Spotify and cancelled my subscription”, comments another.

Spotify has previously faced criticism for similar practices. The Guardian reported in February that the company’s Discovery Mode system allows artists to gain more visibility – but only if they agree to receive 30 percent less payment.

Spotify’s CEO invests in AI for warfare

Meanwhile, CEO Daniel Ek has faced severe criticism for investing over €600 million through his investment firm Prima Materia in the German AI company Helsing. The company develops software for drones, fighter aircraft, submarines, and other military systems.

– The world is being tested in more ways than ever before. That has sped up the timeline. There’s an enormous realisation that it’s really now AI, mass and autonomy that is driving the new battlefield, Ek commented in an interview with the Financial Times.

With this investment, Ek has also become chairman of Helsing. The company is working on a project called Centaur, where artificial intelligence will be used to control fighter aircraft.

The criticism was swift. Australian producer Bluescreen explained in an interview with music site Resident Advisor why he chose to leave Spotify – a decision several other music creators have also made.

– War is hell. There’s nothing ethical about it, no matter how you spin it. I also left because it became apparent very quickly that Spotify’s CEO, as all billionaires, only got rich off the exploitation of others.

Competitor chooses different path

Spotify has previously been questioned for its proximity to political power. The company donated $150,000 to Donald Trump’s inauguration fund in 2017 and hosted an exclusive brunch the day before the ceremony.

While Spotify is heavily investing in AI-generated music and voice-controlled DJs, competitor SoundCloud has chosen a different path.

– We do not develop AI tools or allow third parties to scrape or use SoundCloud content from our platform for AI training purposes, explains communications director Marni Greenberg.

– In fact, we implemented technical safeguards, including a ‘no AI’ tag on our site to explicitly prohibit unauthorised use.
