
Polaris of Enlightenment

Are we building a new Tower of Babel?

The future of AI

By offering effortless translation, artificial intelligence poses a risk to the authenticity and depth of human connection.

Published 10 March 2024
– By Thorsteinn Siglaugsson

This is an opinion piece. The author is responsible for the views expressed in the article.

The other day I attended a conference hosted by an international software company. The presenters came from various countries, all speaking in English. English is the corporate language. But none of them were native English speakers, which was obvious. It made me think about how strange it is when people spend most of their time communicating in a language they don’t fully know and can never really master. All the nuances are lost, all the linguistic creativity, the ambiguity, the unspoken words, the hidden sarcasm, and the secret humour.

Shortly after, I was coaching one of my students, a German software specialist who speaks quite decent English, but still, we spent a lot of time figuring out exactly what he meant by a particular paragraph. I asked him if he couldn’t just tell me in German. “Do you speak German?” he asked. “Well, I learned it in high school,” I replied, “but it might be a stretch to say that means I know German.” Then we just laughed. Eventually, we managed to resolve the issue, in English, which neither of us speaks perfectly. Did we understand the paragraph in the same way? Surely not.

Being able to speak any language, but understand none

But now we have artificial intelligence. And whether we like it or not, AI is considerably better at English than almost every non-native speaker, and indeed better than a good many native speakers too. The style is admittedly flat, but the same applies when people express themselves in a language they don’t fully master. Soon we can expect the technology to reach the point where, in phone calls and remote meetings, we can simply speak our own mother tongue and let artificial intelligence instantly translate the content into any other language.

The Swede working in the international software company then simply speaks Swedish to the Ukrainian or Frenchman, and they just hear the Ukrainian or French version and respond in their own language. And when Elon Musk or those others now working hard to develop brain chips have progressed further, it might even be enough to press a button on the remote brain control we will soon all carry, to switch languages and speak French, Ukrainian, Swahili, or Hindi as needed. But of course, without understanding a word of what comes out of our mouths.

Researchers: Soon impossible to detect AI deepfakes

The future of AI

Published 2 May 2025
– By Editorial Staff
Already today, it can be difficult to distinguish manipulated images and videos from the real thing – and soon it may become virtually impossible.

The most advanced AI-generated deepfake videos can now fake people’s heartbeats so convincingly that even purpose-built detectors are fooled, according to a new study from Humboldt-Universität zu Berlin.

The researchers’ findings raise concerns that technological developments may soon make manipulated material indistinguishable from authentic footage.

In the past, detectors have used remote photoplethysmography (rPPG), a method that analyzes tiny color changes in the skin to detect a pulse – a relatively reliable indicator of whether a video clip is genuine.
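To make the idea concrete, here is a minimal sketch of a green-channel rPPG pulse estimator in Python. This is a simplified illustration of the general technique, not the detector used in the study; the function name, frequency band, and choice of the green channel are illustrative assumptions (real detectors add face tracking and more robust signal processing).

```python
import numpy as np

def estimate_pulse_bpm(frames, fps):
    """Estimate heart rate from a stack of face-region video frames.

    frames: array of shape (T, H, W, 3), RGB pixel values over T frames.
    fps: capture frame rate in Hz.
    Returns the dominant pulse frequency in beats per minute.
    """
    # Average green-channel intensity per frame: blood volume changes
    # modulate green light absorption in the skin most strongly.
    signal = frames[:, :, :, 1].mean(axis=(1, 2))
    # Remove the DC component before spectral analysis.
    signal = signal - signal.mean()
    # Find the strongest frequency within a plausible heart-rate
    # band (0.7–4 Hz, i.e. 42–240 beats per minute).
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    power = np.abs(np.fft.rfft(signal)) ** 2
    band = (freqs >= 0.7) & (freqs <= 4.0)
    return 60.0 * freqs[band][np.argmax(power[band])]
```

A deepfake that carries no such periodic skin-color signal stands out immediately under this kind of analysis, which is why replicating the original video’s pulse, as the study describes, defeats it.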

But in the new study, the researchers created 32 deepfake videos that not only looked real to the human eye but also imitated heartbeats. When tested against an rPPG-based detector, these videos were incorrectly classified as genuine.

“Here we show for the first time that recent high-quality deepfake videos can feature a realistic heartbeat and minute changes in the color of the face, which makes them much harder to detect”, said Professor Peter Eisert, the study’s lead author, in a statement.

Increases risk of fraud

According to the study, pulse signals from original videos can be “inherited” by deepfakes: the AI models replicate slight variations in skin tone and blood flow, and heat maps in the study showed near-identical light changes in both genuine and manipulated videos.

“Small variations in skin tone of the real person get transferred to the deepfake together with facial motion, so that the original pulse is replicated in the fake video”, Eisert further explains.

These advances increase the risk of deepfakes being used for financial fraud, disinformation and non-consensual pornography, among other things. In 2023, an independent researcher estimated that over 244,000 manipulated videos were uploaded to the 35 largest deepfake pornography sites in a single week.

Technical arms race

Despite the study’s worrying results, there is some hope of reversing the trend. The researchers note that today’s deepfakes still fail to replicate natural variations in blood flow over time. In addition, tech giants such as Adobe and Google are developing watermarks to mark AI-generated material.

Meanwhile, the US Congress recently passed the Take It Down Act, which criminalizes the dissemination of non-consensual sexual images – including AI-generated ones. But experts warn that the technological arms race between creators and detectors requires constant adaptation.

“This continuous evolution of deepfake technologies poses challenges for content authentication and necessitates the development of robust detection mechanisms”, the study points out, noting that as AI development accelerates, the fight against digital fraud is also becoming more urgent.

Others have raised a very different concern – that the widespread proliferation of AI-engineered material could be used as a pretext for tougher censorship and laws that restrict people’s freedom online in various ways.

Google develops AI to communicate with dolphins

The future of AI

Published 26 April 2025
– By Editorial Staff

Google has developed a new AI model to communicate with dolphins. The AI model, named DolphinGemma, is designed to interpret and recreate dolphins’ complex sound signals.

Dolphins are known as some of the world’s most communicative animals, and their social interactions are so advanced that researchers at the Wild Dolphin Project (WDP) have spent over 40 years studying them.

In particular, a dolphin population in the Bahamas has been documented for decades through audio recordings and video footage, where researchers have been able to link specific sounds to behaviors such as mating rituals, conflicts, and even individual names.

The ability to communicate with dolphins has long fascinated researchers, but until now the technology to analyze and mimic their sounds has been lacking. However, breakthroughs in AI language models have raised new hopes, and a collaboration between Google, Georgia Institute of Technology and WDP has produced DolphinGemma.

The goal: Common vocabulary between humans and animals

The model is based on the same technology as Google’s Gemini system and works basically like a language model – similar to ChatGPT – but trained for dolphin sounds. It receives whistles, clicks and pulses, analyzes them and predicts what is likely to come next. In practice, it connects to a CHAT system installed on modified Google Pixel phones.
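The predict-what-comes-next principle can be illustrated with a toy model. The sketch below is a simple bigram frequency model over a made-up alphabet of sound categories; DolphinGemma itself is a large neural network trained on real audio, so the class name and token labels here are purely illustrative.

```python
from collections import Counter, defaultdict

class NextSoundPredictor:
    """Toy bigram model over discrete 'sound tokens' (e.g. whistle,
    click, pulse categories). Illustrates next-sound prediction only."""

    def __init__(self):
        # For each observed token, count which tokens follow it.
        self.counts = defaultdict(Counter)

    def train(self, sequences):
        # Tally every adjacent pair of sounds in the recordings.
        for seq in sequences:
            for prev, nxt in zip(seq, seq[1:]):
                self.counts[prev][nxt] += 1

    def predict(self, token):
        # Return the most frequently observed follower, or None if
        # this sound was never seen during training.
        following = self.counts.get(token)
        if not following:
            return None
        return following.most_common(1)[0][0]
```

A real model replaces the bigram counts with a neural network that scores every possible continuation, but the interface is the same: a sequence of sounds in, the most likely next sound out.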

The aim of the project is not to translate the dolphins’ language in its entirety, but rather to establish a basic common vocabulary between humans and animals. In the coming months, the model will be tested in the field, where researchers will try to teach the dolphins synthetic whistles linked to their favorite objects, such as seagrass and seaweed. Specifically, the ambition is for the dolphins themselves to be able to “ask for” what they want to play with, reports Popular Science.

NATO implements AI system for military operations

The future of AI

Published 17 April 2025
– By Editorial Staff
Modern warfare increasingly resembles what only a few years ago was science fiction.

The military pact NATO has entered into an agreement with the American tech company Palantir to introduce the AI-powered system Maven Smart System (MSS) in its military operations.

The Nordic Times has previously highlighted Palantir’s founder Peter Thiel and his influence over the circle around Trump, and how the company’s AI technology has been used to develop drones that can identify Russians and automate killing.

NATO announced on April 14 that it has signed a contract with Palantir Technologies to implement the Maven Smart System (MSS NATO), within the framework of Allied Command Operations, reports DefenceScoop.

MSS NATO uses generative AI and machine learning to process information quickly, and the system is designed to provide sharper situational awareness by analyzing large amounts of data in real time.

This ranges from satellite imagery to intelligence reports, which are then used to identify targets and plan operations.

Terminator
In the “Terminator” movies, the remnants of the Earth’s population fight against the AI-controlled Skynet weapon system.

Modernizing warfare

According to the NATO Communications Agency NCIA, the aim is to modernize warfare capabilities. What used to require hundreds of intelligence analysts can now, with the help of MSS, be handled by a small group of 20-50 soldiers, according to the NCIA.

Palantir has previously supplied similar technology to the US Army, Air Force and Space Force. In September 2024, the company also signed a $100 million contract with the US military to expand the use of AI in targeting.

The system is expected to be operational as early as mid-May 2025.

The new deal has also caused financial markets to react and Palantir’s stock has risen. The company has also generally seen strong growth in recent years, with revenues increasing by 50% between 2022 and 2024.

Criticism and concerns

Palantir has previously been criticized for its cooperation with the Israeli Defense Forces, which led a major Nordic investor to cancel its involvement in the company. Criticisms include the risk of AI technology being used in ways that could violate human rights, especially in conflict zones.

On social media, the news has provoked mixed reactions. Mario Nawfal, a well-known voice on the platform X, wrote in a post that “NATO goes full Skynet”, referring to the fictional AI system in the Terminator movies, in which technology takes control of the world.

Several critics express concern about the implications of the technology, while others see it as a necessary step to counter modern threats.

NATO and Palantir emphasize that the technology does not replace human decision-making: the system is designed to support military leaders, not to act independently.

Nevertheless, there is a growing debate and concern about how AI’s role in warfare could affect future conflicts and global security. Some analysts also see the use of US technologies such as MSS as a way for NATO to strengthen ties across the Atlantic.

OpenAI may develop AI weapons for the Pentagon

The future of AI

Published 14 April 2025
– By Editorial Staff
Sam Altman's OpenAI is already working with defense technology company Anduril Industries.

OpenAI CEO Sam Altman does not rule out that he and his company will help the Pentagon develop new AI-based weapons systems in the future.

– I will never say never, because the world could get really weird, the tech billionaire cryptically states.

The statement came during Thursday’s Vanderbilt Summit on Modern Conflict and Emerging Threats, and Altman added that he does not believe he will be working on developing weapons systems for the US military “in the foreseeable future” – unless it is deemed the best of several bad options.

– I don’t think most of the world wants AI making weapons decisions, he continued.

The fact that companies developing consumer technology are also developing military weapons has long been highly controversial – and in 2018, for example, led to widespread protests within Google’s own workforce, with many also choosing to leave voluntarily or being forced out by company management.

Believes in “exceptionally smart” systems before year-end

However, the AI industry in particular has shown a much greater willingness to enter into such agreements, and OpenAI has revised its policy on work related to “national security” in the past year. Among other things, it has publicly announced a partnership with defense technology company Anduril Industries Inc to develop anti-drone technology.

Altman also stressed the need for the US government to increase its expertise in AI.

– I don’t think AI adoption in the government has been as robust as possible, he said, adding that there will be “exceptionally smart” AI systems in operation ready before the end of the year.

Altman and Nakasone, a retired four-star general, attended the event ahead of the launch of OpenAI’s upcoming AI model, which is scheduled to be released next week. The audience included hundreds of representatives from intelligence agencies, the military and academia.
