
Researchers: Soon impossible to detect AI deepfakes

The future of AI

Published 2 May 2025, 7:18
– By Editorial Staff
Even today it can be difficult to distinguish manipulated images and videos from the real thing – and soon it may become virtually impossible.

The most advanced AI-generated deepfake videos can now fake people’s heartbeats so convincingly that even purpose-built detectors are fooled, according to a new study from Humboldt-Universität zu Berlin.

The researchers’ findings raise concerns that technological developments may soon make manipulated material indistinguishable from authentic images and video.

In the past, detectors have used a special method (remote photoplethysmography, or rPPG) that analyzes tiny color changes in the skin to detect a pulse – as a relatively reliable indicator of whether a film clip is genuine or not.
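
The idea behind rPPG can be sketched in a few lines: average the skin color of the face in each frame, then look for a dominant frequency in the plausible heart-rate band. The sketch below is a simplified illustration of this principle, not the detector used in the study; the function name and the assumption that per-frame green-channel means are already extracted are ours.

```python
import numpy as np

def estimate_pulse_bpm(green_means, fps):
    """Estimate heart rate from the mean green-channel intensity of a
    face region, one value per video frame (a minimal rPPG sketch)."""
    signal = np.asarray(green_means, dtype=float)
    signal = signal - signal.mean()               # remove the DC offset
    # Window the signal to reduce spectral leakage, then take the FFT
    spectrum = np.abs(np.fft.rfft(signal * np.hanning(len(signal))))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    # Restrict to plausible human heart rates: 0.7-4 Hz (42-240 bpm)
    band = (freqs >= 0.7) & (freqs <= 4.0)
    peak_freq = freqs[band][np.argmax(spectrum[band])]
    return peak_freq * 60.0                       # Hz -> beats per minute
```

A genuine video of a living person shows a clear spectral peak in this band; the study’s point is that modern deepfakes now reproduce that peak too, so the check no longer separates real from fake.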

But in the new study, the researchers created 32 deepfake videos that not only looked real to human eyes but also imitated a pulse. When these videos were tested against an rPPG-based detector, they were incorrectly classified as genuine.

“Here we show for the first time that recent high-quality deepfake videos can feature a realistic heartbeat and minute changes in the color of the face, which makes them much harder to detect”, said Professor Peter Eisert, the study’s lead author, in a statement.

Increases risk of fraud

According to the study, pulse signals from original videos can be “inherited” by deepfakes, as AI models replicate slight variations in skin tone and blood flow; heat maps in the study showed near-identical light changes in both genuine and manipulated videos.

“Small variations in skin tone of the real person get transferred to the deepfake together with facial motion, so that the original pulse is replicated in the fake video”, Eisert explains.

These advances increase the risk of deepfakes being used in financial fraud, disinformation and non-consensual pornography, among other areas. In 2023, an independent researcher estimated that over 244,000 manipulated videos were uploaded to the 35 largest deepfake pornography sites in a single week.

Technical arms race

Despite the study’s worrying results, there is some hope of reversing the trend. The researchers note that today’s deepfakes still fail to replicate natural variations in blood flow over time. In addition, tech giants such as Adobe and Google are developing watermarks to mark AI-generated material.

Meanwhile, the US Congress recently passed the Take It Down Act, which criminalizes the dissemination of non-consensual sexual images – including AI-generated ones. But experts warn that the technological arms race between creators and detectors requires constant adaptation.

“This continuous evolution of deepfake technologies poses challenges for content authentication and necessitates the development of robust detection mechanisms”, the study points out, noting that as AI development accelerates, the fight against digital fraud is also becoming more urgent.

Others have raised a very different concern – that the widespread proliferation of AI-engineered material could be used as a pretext for tougher censorship and laws that restrict people’s freedom online in various ways.


Google develops AI to communicate with dolphins

The future of AI

Published 26 April 2025
– By Editorial Staff

Google has developed a new AI model to communicate with dolphins. The AI model, named DolphinGemma, is designed to interpret and recreate dolphins’ complex sound signals.

Dolphins are known as some of the world’s most communicative animals, and their social interactions are so advanced that researchers at the Wild Dolphin Project (WDP) have spent over 40 years studying them.

In particular, a dolphin population in the Bahamas has been documented for decades through audio recordings and video footage, where researchers have been able to link specific sounds to behaviors such as mating rituals, conflicts, and even individual names.

The ability to communicate with dolphins has long fascinated researchers, but until now the technology to analyze and mimic their sounds has been lacking. However, breakthroughs in AI language models have raised new hopes, and a collaboration between Google, the Georgia Institute of Technology and WDP has produced DolphinGemma.

The goal: Common vocabulary between humans and animals

The model is based on the same technology as Google’s Gemini system and works basically like a language model – similar to ChatGPT – but trained for dolphin sounds. It receives whistles, clicks and pulses, analyzes them and predicts what is likely to come next. In practice, it connects to a CHAT system installed on modified Google Pixel phones.
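
As a toy illustration of what “predicting what is likely to come next” means, imagine dolphin sounds discretized into labeled units and a simple count of which unit tends to follow which. This is not Google’s actual model or tokenization (neither is described in detail here); the function names and the sound labels are hypothetical.

```python
from collections import Counter, defaultdict

def train_bigram(sequences):
    """Count, for each sound token, how often each other token follows it
    (a toy stand-in for a language model's next-token prediction)."""
    counts = defaultdict(Counter)
    for seq in sequences:
        for cur, nxt in zip(seq, seq[1:]):
            counts[cur][nxt] += 1
    return counts

def predict_next(counts, token):
    """Return the most frequent follower of `token`, or None if unseen."""
    followers = counts.get(token)
    if not followers:
        return None
    return followers.most_common(1)[0][0]
```

A real model like DolphinGemma conditions on much longer context than a single preceding sound, but the training objective is the same in spirit: given what was heard so far, predict the likely continuation.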

The aim of the project is not to translate the dolphins’ language in its entirety, but rather to establish a basic common vocabulary between humans and animals. In the coming months, the model will be tested in the field, where researchers will try to teach the dolphins synthetic whistles linked to their favorite objects, such as seagrass and seaweed. Specifically, the ambition is for the dolphins themselves to be able to “ask for” what they want to play with, reports Popular Science.


NATO implements AI system for military operations

The future of AI

Published 17 April 2025
– By Editorial Staff
Modern warfare increasingly resembles what only a few years ago was science fiction.

The military pact NATO has entered into an agreement with the American tech company Palantir to introduce the AI-powered system Maven Smart System (MSS) in its military operations.

The Nordic Times has previously highlighted Palantir’s founder Peter Thiel and his influence over the circle around Trump, and how the company’s AI technology has been used to develop drones that can identify Russians and automate killing.

NATO announced on April 14 that it has signed a contract with Palantir Technologies to implement the Maven Smart System (MSS NATO), within the framework of Allied Command Operations, reports DefenceScoop.

MSS NATO uses generative AI and machine learning to quickly process information, and the system is designed to provide a sharper situational awareness by analyzing large amounts of data in real time.

This ranges from satellite imagery to intelligence reports, which are then used to identify targets and plan operations.

In the “Terminator” movies, the remnants of Earth’s population fight against the AI-controlled Skynet weapons system.

Modernizing warfare

According to the NATO Communications and Information Agency (NCIA), the aim is to modernize warfare capabilities. What used to require hundreds of intelligence analysts can now, with the help of MSS, be handled by a small group of 20–50 soldiers, according to the NCIA.

Palantir has previously supplied similar technology to the US Army, Air Force and Space Force. In September 2024, the company also signed a $100 million contract with the US military to expand the use of AI in targeting.

The system is expected to be operational as early as mid-May 2025.

The new deal has also caused financial markets to react and Palantir’s stock has risen. The company has also generally seen strong growth in recent years, with revenues increasing by 50% between 2022 and 2024.

Criticism and concerns

Palantir has previously been criticized for its cooperation with the Israeli Defense Forces, which led a major Nordic investor to cancel its involvement in the company. Criticisms include the risk of AI technology being used in ways that could violate human rights, especially in conflict zones.

On social media, the news has provoked mixed reactions. Mario Nawfal, a well-known voice on the platform X, wrote in a post that “NATO goes full Skynet”, referring to the fictional AI system in the Terminator movies, where technology takes control of the world.

Several critics express concerns about the implications of the technology, while others see it as a necessary step to counter modern threats.

NATO and Palantir emphasize that the technology does not replace human decision-making: the system is designed to support military leaders, not to act independently.

Nevertheless, there is a growing debate and concern about how AI’s role in warfare could affect future conflicts and global security. Some analysts also see the use of US technologies such as MSS as a way for NATO to strengthen ties across the Atlantic.

OpenAI may develop AI weapons for the Pentagon

The future of AI

Published 14 April 2025
– By Editorial Staff
Sam Altman's OpenAI is already working with defense technology company Anduril Industries.

OpenAI CEO Sam Altman does not rule out that he and his company will help the Pentagon develop new AI-based weapons systems in the future.

– I will never say never, because the world could get really weird, the tech billionaire said cryptically.

The statement came during Thursday’s Vanderbilt Summit on Modern Conflict and Emerging Threats, and Altman added that he does not believe he will be working on developing weapons systems for the US military “in the foreseeable future” – unless it is deemed the best of several bad options.

– I don’t think most of the world wants AI making weapons decisions, he continued.

The fact that companies developing consumer technology also develop military weapons has long been highly controversial – in 2018, for example, it led to widespread protests within Google’s own workforce, with many employees leaving voluntarily or being forced out by company management.

Believes in “exceptionally smart” systems before year-end

However, the AI industry in particular has shown a much greater willingness to enter into such agreements, and OpenAI has revised its policy on work related to “national security” in the past year. Among other things, it has publicly announced a partnership with defense technology company Anduril Industries Inc to develop anti-drone technology.

Altman also stressed the need for the US government to increase its expertise in AI.

– I don’t think AI adoption in the government has been as robust as possible, he said, adding that there will be “exceptionally smart” AI systems in operation ready before the end of the year.

Altman and Paul Nakasone, a retired four-star general, attended the event ahead of the launch of OpenAI’s upcoming AI model, which is scheduled to be released next week. The audience included hundreds of representatives from intelligence agencies, the military and academia.

Swedish authors: Meta has stolen our books

The future of AI

Published 8 April 2025
– By Editorial Staff
Kajsa Gordon and Anna Ahlund are two of the authors who signed the open letter.

Meta has used Swedish books to train its AI models. Now authors are demanding compensation and calling on the Minister of Culture to act against the tech giant.

The magazine The Atlantic recently revealed that Meta used copyrighted works from authors around the world without permission or compensation. Swedish authors are also among them.

In an open letter, published in the Schibsted newspaper Aftonbladet, 53 Swedish children’s and young adult authors accuse Meta of copyright infringement.

“Meta has vacuumed up our books and used them as a basis for creating AI texts. They have also often used translations of our books to train their AI models in multiple languages. This copyright infringement is systematic”, the authors write.

Among the affected authors who signed the letter are Anna Ahlund, who had five works stolen, Kajsa Gordon, who had eight works stolen, and Pia Hagmar, who had 51 works stolen. The authors point out that there is a wide range of authors in fiction and non-fiction who have had several works stolen, and that many authors do not yet know that they have been affected.

“Our words are being exploited”

The authors are now calling on Sweden’s Minister of Culture Parisa Liljestrand to act against Meta and demand a license fee for the use of copyrighted texts.

“We refuse to accept that our words are being exploited by a multi-billion dollar company without our consent or compensation.”

The letter also demands that Meta disclose which Swedish authors’ works were used to train its AI model, and that authors be given the right to deny the tech giant use of their texts.

“The Swedish government often talks about strengthening children’s reading. A prerequisite for reading is that Swedish cultural policy makes it possible to be an author”, the authors conclude.
