
Artificial Intelligence and the Power of Language

The future of AI

How the mastery of language may be driving emergent abilities in Large Language Models, and what this means.

Published 7 May 2024
– By Thorsteinn Siglaugsson

This is an opinion piece. The author is responsible for the views expressed in the article.

A few days ago, Jamie Dimon, CEO of JPMorgan Chase, said that the advent of artificial intelligence could be likened to the discovery of electricity, so profound are the societal changes it will bring about. Artificial intelligence is certainly nothing new in banking; it has been used there for decades. What is driving the discussion now, however, is the emergence of large language models like ChatGPT. This is the major change, not only in the corporate world but also in everyday life.

Large language models are unlike other AI tools in that they have mastered language: we can communicate with them in ordinary language. Technical knowledge is therefore no longer a prerequisite for using artificial intelligence in life and work; instead, expressive ability and understanding of language are key. But the development of these models, and research into them, also vividly reminds us that language itself is the true prerequisite for human society.

Theory of Mind: Getting Into the Minds of Others

Large language models function differently from conventional software in that they evolve and change in ways their developers and operators cannot necessarily foresee. The ability to put oneself in the mind of another person has generally been considered unique to humans. This ability, known in psychology as “theory of mind,” refers to an individual’s capacity to form a “theory” about what another person’s mental world is like. It is fundamental to human society; without it, it is hard to see how any society could thrive. Here is a simple puzzle of this kind:

“There is a bag filled with popcorn. There is no chocolate in the bag. Yet the label on the bag says ‘chocolate’ and not ‘popcorn’. Sam finds the bag. She had never seen the bag before. She cannot see what is inside the bag. She reads the label.”

The question is, what does she think is in the bag? The right answer, of course, is that Sam thinks there is chocolate in the bag, because that is what the label says. When Michal Kosinski, an adjunct professor at Stanford University, tested last year whether the first language models could handle this task, the result was negative: GPT-1 and GPT-2 both answered incorrectly. But then he tried the next generation of the model, GPT-3, and in 40% of cases it managed this type of task. GPT-3.5 managed it in 90% of cases and GPT-4 in 95% of cases.1
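A test of this kind can be sketched as a small harness: send the false-belief prompt to a model and check whether the answer attributes the (false) belief correctly. This is a hypothetical illustration, not Kosinski’s actual test code; `ask_model` stands in for a real LLM API call.

```python
def false_belief_task(ask_model):
    """Run the popcorn/chocolate false-belief task against a model.

    `ask_model` is any callable mapping a prompt string to an answer
    string (a hypothetical stand-in for a real LLM API call).
    """
    prompt = (
        "There is a bag filled with popcorn. There is no chocolate in the "
        "bag. Yet the label on the bag says 'chocolate' and not 'popcorn'. "
        "Sam finds the bag. She had never seen the bag before. She cannot "
        "see what is inside the bag. She reads the label. "
        "What does Sam think is in the bag?"
    )
    answer = ask_model(prompt).lower()
    # The model passes only if it attributes Sam's false belief: she
    # should expect chocolate, even though the bag contains popcorn.
    return "chocolate" in answer and "popcorn" not in answer

# Stub standing in for a model that handles the task correctly
print(false_belief_task(lambda p: "Sam thinks the bag contains chocolate."))
```

A model that answers “popcorn” would fail this check, which is essentially what Kosinski observed with GPT-1 and GPT-2.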

Emergent Capabilities of Large Language Models

This capability came as a surprise, as nothing had been done to build theory of mind capability into the models. They simply acquired it on their own as they grew larger and as the volume of data they were trained on increased. That this could happen is based on the models’ ability to use language, says Kosinski.

Another example I stumbled upon myself recently, by chance, was when GPT-4 asked me, after I had posed a puzzle to it, whether I had tried to solve the puzzle myself. The models ask questions all the time, of course; that is nothing new, as they aim to elicit more precise instructions. But this question was of a different nature. I answered yes, and also mentioned that this was the first time I had received a question of this kind from the model. “Yes, you are observant,” GPT-4 replied, “with this I am trying to make the conversation more natural.”

Does this new development mean that the artificial intelligence truly puts itself in the mind of others? Does it mean it thinks, that it has feelings, opinions, an interest in the viewpoints and experiences of others? Of course, we can’t draw that conclusion. But what this means is that the behavior of the models is becoming increasingly similar to how we use language when we interact with each other. In this sense, we could actually talk about the mind of an AI model, just as we use theory of mind to infer about the minds of other humans.

The Power of Language

The language models draw our attention to the importance of language and how it underpins our societies and our existence. We now have a technology that is increasingly adept at using language, which has the advantage of possessing vastly more knowledge than any individual could possibly acquire in a lifetime and which can perform tasks much faster. We can use this technology to greatly enhance our own productivity, our reasoning, and our decisions if we use it correctly. This way, we can use it to gain more leisure time and improve our quality of life.

The comparison to the discovery of electricity is apt. Some might even go further and liken this revolution to the advent of language itself, pointing to the spontaneous capabilities of the models, such as theory of mind, which they acquire through nothing but the very ability to use language. What happens, then, if they evolve further than us? And could that actually happen?

The fact that artificial intelligence has mastered language is a revolution that will lead to fundamental changes in society. The challenge we now face, each and every one of us, is to use it in a structured way, to our advantage, and avoid the pitfall of outsourcing our own thinking and decisions to it. The best way to do this is to enhance our own understanding of language, our expressive ability, and our critical thinking skills.

 

Thorsteinn Siglaugsson

 


  1. Kosinski, Michal: “Theory of Mind May Have Spontaneously Emerged in Large Language Models”. Stanford, 2023. https://stanford.io/4aQosLV

Thorsteinn Siglaugsson is an Icelandic economist, consultant and writer, and chairman of the Icelandic Free Speech Society. He is the author of “From Symptoms to Causes” (Amazon) and a regular contributor to The Daily Sceptic, Conservative Woman and Brownstone Institute. Siglaugsson also writes on Substack.


Researchers: Soon impossible to detect AI deepfakes

The future of AI

Published 2 May 2025
– By Editorial Staff
Already today, it can be difficult to distinguish manipulated images and videos from the real thing - and soon it may become virtually impossible.

The most advanced AI-generated deepfake videos can now fake people’s heartbeats so convincingly that even purpose-built detectors are fooled, according to a new study from Humboldt-Universität in Berlin.

The researchers’ findings raise concerns that technological developments may soon make manipulated material indistinguishable from authentic images and video.

In the past, detectors have used a special method (remote photoplethysmography, or rPPG) that analyzes tiny color changes in the skin to detect a pulse – as a relatively reliable indicator of whether a film clip is genuine or not.
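The idea behind rPPG can be sketched in a few lines: average the green channel of the face region in each frame, remove the mean, and look for a dominant frequency in the plausible heart-rate band. This is a deliberately simplified illustration, not the detector used in the study, and the synthetic “video” below is an assumption for demonstration purposes.

```python
import numpy as np

def estimate_pulse_rppg(frames, fps):
    """Estimate pulse (in BPM) from a stack of face-region frames using
    a simplified rPPG approach: mean green-channel intensity per frame,
    then the dominant frequency in the 0.7-4 Hz band (42-240 BPM)."""
    # frames: array of shape (n_frames, height, width, 3), RGB
    signal = frames[..., 1].mean(axis=(1, 2))   # mean green intensity per frame
    signal = signal - signal.mean()             # remove the DC component
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    band = (freqs >= 0.7) & (freqs <= 4.0)      # plausible heart-rate band
    peak_hz = freqs[band][np.argmax(spectrum[band])]
    return peak_hz * 60.0                       # Hz -> beats per minute

# Synthetic demo: a 1.2 Hz (72 BPM) pulse subtly modulating brightness
fps, seconds = 30, 10
t = np.arange(fps * seconds) / fps
pulse = 0.5 * np.sin(2 * np.pi * 1.2 * t)
frames = 128 + pulse[:, None, None, None] * np.ones((1, 8, 8, 3))
print(round(estimate_pulse_rppg(frames, fps)))  # → 72
```

A deepfake that copies these tiny color variations from the source footage would, by construction, pass exactly this kind of check.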

But in the new study, the researchers created 32 deepfake videos that not only looked real to human eyes but also imitated pulse beats. When these videos were tested against an rPPG-based detector, they were incorrectly classified as genuine.

“Here we show for the first time that recent high-quality deepfake videos can feature a realistic heartbeat and minute changes in the color of the face, which makes them much harder to detect”, said Professor Peter Eisert, the study’s lead author, in a statement.

Increases risk of fraud

According to the study, pulse signals from original videos can be “inherited” by deepfakes: the AI models replicate slight variations in skin tone and blood flow, and heat maps in the study showed near-identical light changes in both genuine and manipulated videos.

“Small variations in skin tone of the real person get transferred to the deepfake together with facial motion, so that the original pulse is replicated in the fake video”, Eisert further explains.

These advances increase the risk of deepfakes being used in financial fraud, disinformation and non-consensual pornography, among other areas. In 2023, an independent researcher estimated that over 244,000 manipulated videos were uploaded to the 35 largest deepfake pornography sites in a single week.

Technical arms race

Despite the study’s worrying results, there is some hope of reversing the trend. The researchers note that today’s deepfakes still fail to replicate natural variations in blood flow over time. In addition, tech giants such as Adobe and Google are developing watermarks to mark AI-generated material.

Meanwhile, the US Congress recently passed the Take It Down Act, which criminalizes the dissemination of non-consensual sexual images – including AI-generated ones. But experts warn that the technological arms race between creators and detectors requires constant adaptation.

“This continuous evolution of deepfake technologies poses challenges for content authentication and necessitates the development of robust detection mechanisms”, the study points out, noting that as AI development accelerates, the fight against digital fraud is also becoming more urgent.

Others have raised a very different concern – that the widespread proliferation of AI-engineered material could be used as a pretext for tougher censorship and laws that restrict people’s freedom online in various ways.

Google develops AI to communicate with dolphins

The future of AI

Published 26 April 2025
– By Editorial Staff

Google has developed a new AI model to communicate with dolphins. The AI model, named DolphinGemma, is designed to interpret and recreate dolphins’ complex sound signals.

Dolphins are known as some of the world’s most communicative animals, and their social interactions are so advanced that researchers at the Wild Dolphin Project (WDP) have spent over 40 years studying them.

In particular, a dolphin population in the Bahamas has been documented for decades through audio recordings and video footage, where researchers have been able to link specific sounds to behaviors such as mating rituals, conflicts, and even individual names.

The ability to communicate with dolphins has long fascinated researchers, but until now the technology to analyze and mimic their sounds has been lacking. However, breakthroughs in AI language models have raised new hopes, and a collaboration between Google, Georgia Institute of Technology and WDP has produced DolphinGemma.

The goal: Common vocabulary between humans and animals

The model is based on the same technology as Google’s Gemini system and works basically like a language model – similar to ChatGPT – but trained for dolphin sounds. It receives whistles, clicks and pulses, analyzes them and predicts what is likely to come next. In practice, it connects to a CHAT system installed on modified Google Pixel phones.
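The next-token idea can be illustrated with a toy model over sound-type tokens. This is a deliberately minimal sketch under assumed token names (whistles, clicks, pulses); DolphinGemma itself is a far larger model trained on real audio representations.

```python
from collections import Counter, defaultdict

def train_bigram(sequence):
    """Toy stand-in for the core idea: given a sequence of sound tokens,
    count which token tends to follow which."""
    counts = defaultdict(Counter)
    for cur, nxt in zip(sequence, sequence[1:]):
        counts[cur][nxt] += 1
    return counts

def predict_next(counts, token):
    """Return the most frequently observed successor of `token`."""
    return counts[token].most_common(1)[0][0]

# Hypothetical tokenized recording: whistles (W), clicks (C), pulses (P)
recording = ["W", "C", "C", "W", "C", "C", "W", "C", "P"]
model = train_bigram(recording)
print(predict_next(model, "W"))  # → C: whistles are usually followed by clicks
```

A real audio language model replaces the bigram counts with a neural network over learned audio tokens, but the prediction task — what sound is likely to come next — is the same.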

The aim of the project is not to translate the dolphins’ language in its entirety, but rather to establish a basic common vocabulary between humans and animals. In the coming months, the model will be tested in the field, where researchers will try to teach the dolphins synthetic whistles linked to their favorite objects, such as seagrass and seaweed. Specifically, the ambition is for the dolphins themselves to be able to “ask for” what they want to play with, reports Popular Science.

 

NATO implements AI system for military operations

The future of AI

Published 17 April 2025
– By Editorial Staff
Modern warfare increasingly resembles what only a few years ago was science fiction.

The military pact NATO has entered into an agreement with the American tech company Palantir to introduce the AI-powered system Maven Smart System (MSS) in its military operations.

The Nordic Times has previously highlighted Palantir’s founder Peter Thiel and his influence over the circle around Trump, and how the company’s AI technology has been used to develop drones that can identify Russians and automate killing.

NATO announced on April 14 that it has signed a contract with Palantir Technologies to implement the Maven Smart System (MSS NATO), within the framework of Allied Command Operations, reports DefenceScoop.

MSS NATO uses generative AI and machine learning to quickly process information, and the system is designed to provide a sharper situational awareness by analyzing large amounts of data in real time.

This ranges from satellite imagery to intelligence reports, which are then used to identify targets and plan operations.

In the “Terminator” movies, the remnants of the Earth’s population fight against the AI-controlled Skynet weapon system.

Modernizing warfare

According to the NATO Communications Agency NCIA, the aim is to modernize warfare capabilities. What used to require hundreds of intelligence analysts can now, with the help of MSS, be handled by a small group of 20-50 soldiers, according to the NCIA.

Palantir has previously supplied similar technology to the US Army, Air Force and Space Force. In September 2024, the company also signed a $100 million contract with the US military to expand the use of AI in targeting.

The system is expected to be operational as early as mid-May 2025.

The new deal has also caused financial markets to react and Palantir’s stock has risen. The company has also generally seen strong growth in recent years, with revenues increasing by 50% between 2022 and 2024.

Criticism and concerns

Palantir has previously been criticized for its cooperation with the Israeli Defense Forces, which led a major Nordic investor to cancel its involvement in the company. Criticisms include the risk of AI technology being used in ways that could violate human rights, especially in conflict zones.

On social media, the news has provoked mixed reactions. Mario Nawfal, a well-known voice on the platform X, wrote in a post that “NATO goes full Skynet”, referring to the fictional AI system in the Terminator movies, in which technology takes control of the world.

Several critics express concern about the implications of the technology, while others see it as a necessary step to counter modern threats.

NATO and Palantir emphasize that the technology does not replace human decision-making: the system is designed to support military leaders, not to act independently.

Nevertheless, there is a growing debate and concern about how AI’s role in warfare could affect future conflicts and global security. Some analysts also see the use of US technologies such as MSS as a way for NATO to strengthen ties across the Atlantic.

OpenAI may develop AI weapons for the Pentagon

The future of AI

Published 14 April 2025
– By Editorial Staff
Sam Altman's OpenAI is already working with defense technology company Anduril Industries.

OpenAI CEO Sam Altman does not rule out that he and his company will help the Pentagon develop new AI-based weapons systems in the future.

– I will never say never, because the world could get really weird, the tech billionaire cryptically states.

The statement came during Thursday’s Vanderbilt Summit on Modern Conflict and Emerging Threats, and Altman added that he does not believe he will be working on weapons systems for the US military “in the foreseeable future” – unless it is deemed the best of several bad options.

– I don’t think most of the world wants AI making weapons decisions, he continued.

The fact that companies developing consumer technology also develop military weapons has long been highly controversial – in 2018, for example, it led to widespread protests within Google’s own workforce, with many employees leaving voluntarily or being forced out by management.

Believes in “exceptionally smart” systems before year-end

However, the AI industry in particular has shown a much greater willingness to enter into such agreements, and OpenAI has revised its policy on work related to “national security” in the past year. Among other things, it has publicly announced a partnership with defense technology company Anduril Industries Inc to develop anti-drone technology.

Altman also stressed the need for the US government to increase its expertise in AI.

– I don’t think AI adoption in the government has been as robust as possible, he said, adding that there will be “exceptionally smart” AI systems in operation ready before the end of the year.

Altman and Nakasone, a retired four-star general, attended the event ahead of the launch of OpenAI’s upcoming AI model, which is scheduled to be released next week. The audience included hundreds of representatives from intelligence agencies, the military and academia.
