
New technology removes PFAS substances

Published 30 October 2023
– By Editorial Staff
PFAS destruction unit Eleanor.

An American start-up company has successfully tested a device that can purify water contaminated with the persistent PFAS compounds, according to a report in the tech news outlet GeekWire.

PFAS are synthetically produced substances found in everything from food packaging to hygiene products; some of these substances can be harmful to both health and the environment. Because PFAS compounds are so difficult to break down, they are sometimes referred to as “forever chemicals”. These substances can leak from a range of products and contaminate, among other things, drinking water. They have also been detected in breast milk, and it has previously been challenging to find effective methods to remove them.

The American start-up Aquagga has now developed a PFAS destruction unit, affectionately named “Eleanor”, which was recently tested at Fairbanks Airport in Alaska. There, they purified tens of thousands of liters of wastewater that had been contaminated with PFAS for 40 years.

– PFAS is incredibly tough to break down and deal with, Nigel Sharp, the company’s CEO, told GeekWire. So we’re very fortunate we’ve validated the technology works, and we’re now to the point of [going to] commercial scale and growth of the company.

The PFAS destruction unit incorporates technology developed at the University of Washington and the Colorado School of Mines. The device uses very high pressure and temperatures of up to about 300 degrees Celsius. Lye, an ingredient used in soap-making, is then added to create a caustic environment. This is intended to break down the PFAS compounds by severing the bond in the molecule’s head, chopping up its carbon backbone, and stripping off the fluorine atoms that run along the spine.

Skeletal structure of PFOS, a type of PFAS molecule. Photo: Leyo

The free fluoride is then combined with calcium or sodium to form less harmful compounds, ultimately similar to the fluoride used in many toothpastes.

– Testing shows that more than 99% of the PFAS are destroyed in treated water, says Sharp.

Elise Thomas, head of the environmental program at Fairbanks Airport, is pleased with the results and believes the system will attract more facilities struggling with PFAS contamination.

– It gives us hope, says Thomas – and gives us something to look forward to.

PFAS, or per- and polyfluoroalkyl substances, are synthetic chemicals found in products like food packaging and non-stick cookware. Often called “forever chemicals” due to their resistance to breakdown, they can contaminate drinking water and soil, and even appear in human tissues. Their presence raises health concerns, including hormonal disruption and an increased cancer risk. Eliminating or neutralizing them in contaminated areas has long been a challenge.


Imminent risk of grooming on “child-friendly” gaming platform Roblox

Published 21 April 2025
– By Editorial Staff
About 40 percent of Roblox users are estimated to be under 13 years old.

A new study by UK-based research firm Revealing Reality has highlighted serious safety concerns on the popular gaming platform Roblox, where children are at risk of exposure to sexual content and uncontrolled contact with adults.

The researchers describe the findings as “deeply disturbing” and point to a “troubling disconnect between Roblox’s child-friendly appearance and the reality”.

Researchers created several test accounts registered to fictional users aged 5, 9, 10, 13 and 40+. These interacted only with each other, without contact with external users, to map the safety of the platform. Despite Roblox’s recent updates, including parental controls, the study still revealed a number of serious flaws.

For example, the five-year-old’s account was able to communicate with adult users – and vice versa. This was despite Roblox claiming to have changed its settings to prevent this.

A 10-year-old’s account could easily enter environments with avatars in sexual positions and a virtual bathroom where users urinated and wore fetish accessories. Researchers also found that their test avatars heard sexual conversations between other players, as well as repeated slurping, kissing and grunting sounds when using the voice chat feature.

Roblox itself claims that all voice chat – which is available only to phone-verified accounts registered to users aged 13 and older – is AI-moderated in real time. Despite that, adult users could easily ask for a five-year-old’s Snapchat details.

“An industry challenge”

Matt Kaufman, chief safety officer at Roblox, defends the company in a statement, claiming that “trust and safety are at the core of everything we do” and that in 2024 the platform introduced “over 40 new safety enhancements”.

However, the company acknowledges that age verification for children under 13 “remains an industry challenge” and says it would like to see increased cooperation with various authorities.

In feedback collected by the UK’s The Guardian, several parents share other serious experiences, telling of their children being groomed by adult users or developing panic attacks after being forced into sexual content.

“Systematic failure to keep children safe”

Beeban Kidron, internet activist and member of the House of Lords, says the report shows a “systematic failure to keep children safe” and Damon De Ionno, head of research at Revealing Reality, criticizes Roblox’s new tools as inadequate.

– Children can still chat with strangers not on their friends list, and with 6 million experiences [on the platform], often with inaccurate descriptions and ratings, how can parents be expected to moderate?

Roblox, which describes itself as “the ultimate virtual universe”, has over 85 million daily users, of which around 40% are under 13. The platform has recently introduced restrictions on direct messaging to accounts under 13, but the study shows that significant risks remain.

The company encourages parents to use their own monitoring tools, while saying it is working to strengthen security. However, according to Kaufman, “industry-wide collaboration and government intervention” are needed to fully address the problems.

Exposing the lies that keep you trapped in surveillance culture

Debunking the biggest myths about data collection

Published 19 April 2025
– By Naomi Brockwell

Let’s be honest: data is useful. But we’re constantly told that in order to benefit from modern tech—and the insights that come with it—we have to give up our privacy. That useful data only comes from total access. That once your info is out there, you’ve lost control. That there’s no point in trying to protect it anymore.

These are myths. And they’re holding us back.

The truth is, you can benefit from data-driven tools without giving away everything. You can choose which companies to trust. You can protect one piece of information while sharing another. You can demand smarter systems that deliver insights without exploiting your identity.

Privacy isn’t about opting out of technology—it’s about choosing how you engage with it.

In this issue, we’re busting four of the most common myths about data collection. Because once you understand what’s possible, you’ll see how much power you still have.

Myth #1: “I gave data to one company, so my privacy is already gone”.

This one is everywhere. Once people sign up for a social media account or share info with a fitness app, they often throw up their hands and say, “Well, I guess my privacy’s already gone”.

But that’s not how privacy works.

Privacy is about choice. It’s about context. It’s about setting boundaries that make sense for you.

Just because you’ve shared data with one company doesn’t mean you’re giving blanket permission to every app, government agency, or ad network to track you forever.

You’re allowed to:

  • Share one piece of information and protect another.
  • Say yes to one service and no to others.
  • Change your mind, rotate your identifiers, and reduce future exposure.

Privacy isn’t all or nothing. And it’s never too late to take some power back.

Myth #2: “If I give a company data, they can do whatever they want with it”.

Not if you pick the right company.

Many businesses are committed to ethical data practices. Some explicitly state in their terms that they’ll never share your data, sell it, or use it outside the scope of the service you signed up for.

Look for platforms that don’t retain unnecessary data. There are more of them out there than you think.

Myth #3: “To get insights, a company needs to see my data”.

This one’s finally starting to crumble—thanks to game-changing tech like homomorphic encryption.

Yes, really: companies can now perform computations on encrypted data without ever decrypting it.

It’s already in use in financial services, research, and increasingly, consumer apps. It proves that privacy and data analysis can go hand in hand.

Imagine this: a health app computes your sleep averages, detects issues, and offers recommendations—without ever seeing your raw data. It stays encrypted the whole time.
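The idea can be sketched with a toy version of the Paillier cryptosystem, one well-known additively homomorphic scheme. This is an illustration only, not the scheme any particular app uses: the key sizes are far too small for real security, and the “sleep minutes” scenario is hypothetical.

```python
# Toy Paillier cryptosystem: additively homomorphic encryption.
# Illustrative sketch only -- real systems use vetted libraries and
# much larger keys.
import math
import random

p, q = 1789, 1931            # toy primes; real keys use ~1024-bit primes
n = p * q
n2 = n * n
g = n + 1
lam = math.lcm(p - 1, q - 1)
mu = pow(lam, -1, n)         # modular inverse of lambda mod n

def encrypt(m):
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return ((pow(c, lam, n2) - 1) // n) * mu % n

# A hypothetical health app: the server sums encrypted nightly sleep
# minutes without ever seeing the plaintext values.
nights = [431, 388, 455, 402]
ciphertexts = [encrypt(m) for m in nights]

total_ct = 1
for c in ciphertexts:
    total_ct = (total_ct * c) % n2   # ciphertext product = plaintext sum

assert decrypt(total_ct) == sum(nights)   # only the key holder can do this
print(decrypt(total_ct) / len(nights))    # average sleep: 419.0 minutes
```

The key property is the last loop: multiplying two ciphertexts yields an encryption of the sum of their plaintexts, so the server computes the total while holding only ciphertexts.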

We need to champion this kind of innovation. More research. More tools. More adoption. And more support for companies already doing it, because giving them our business signals that the investment was worth it and encourages other companies to jump on board.

Myth #4: “To prove who you are, you have to hand over sensitive data.”

You’ve heard this from banks, employers, and government forms: “We need your full ID to verify who you are”.

But here’s the problem: every time we hand over sensitive data, we increase our exposure to breaches and identity theft. It’s a bad system.

There’s a better way.
With zero-knowledge proofs, we can prove things like being over 18, or matching a record—without revealing our address, birthdate, or ID number.
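A minimal sketch of the idea is a Schnorr-style proof of knowledge: the prover convinces a verifier that it knows a secret exponent behind a public value, without revealing the secret itself. The parameters and the “credential” framing below are illustrative, not a production protocol.

```python
# Schnorr zero-knowledge identification (Fiat-Shamir variant):
# prove knowledge of a secret exponent x such that y = g^x mod p,
# without revealing x. Toy parameters -- real deployments use
# standardized groups and vetted libraries.
import hashlib
import random

p = 2**127 - 1                 # toy prime modulus (a Mersenne prime)
g = 3                          # generator, assumed suitable for the demo
q = p - 1                      # exponent arithmetic is done mod p - 1

x = random.randrange(2, q)     # prover's secret (think: a credential)
y = pow(g, x, p)               # public value registered with the verifier

def prove(x):
    k = random.randrange(2, q)         # one-time nonce
    t = pow(g, k, p)                   # commitment
    # Fiat-Shamir: derive the challenge by hashing the transcript
    c = int.from_bytes(hashlib.sha256(f"{g}{y}{t}".encode()).digest(), "big") % q
    s = (k + c * x) % q                # response; k masks x
    return t, s

def verify(t, s):
    c = int.from_bytes(hashlib.sha256(f"{g}{y}{t}".encode()).digest(), "big") % q
    return pow(g, s, p) == (t * pow(y, c, p)) % p

t, s = prove(x)
assert verify(t, s)            # verifier is convinced without learning x
```

The check works because g^s = g^(k + c·x) = t · y^c (mod p), yet the transcript (t, s) leaks nothing usable about x on its own. Real-world systems apply the same principle to statements like “this person is over 18”.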

The tech already exists. But companies and institutions are slow to adopt it or even recognize it as legitimate. This won’t change until we demand better.

Let’s push for a world where:

  • Our identity isn’t a honeypot for hackers.
  • We can verify ourselves without becoming vulnerable.
  • Privacy-first systems are the norm—not the exception.

Takeaways

The idea that we have to trade privacy for progress is a myth. You can have both. The tools exist. The choice is ours.

Privacy isn’t about hiding—it’s about control. You can choose to share specific data without giving up your rights or exposing everything.

Keep these in mind:

  • Pick tools that respect you. Look for platforms with strong privacy practices and transparent terms.
  • Use privacy-preserving tech. Homomorphic encryption and zero-knowledge proofs are real—and growing.
  • Don’t give up just because you shared once. Privacy is a spectrum. You can always take back control.
  • Talk about it. The more people realize they have options, the faster we change the norm.

Being informed doesn’t have to mean being exploited.
Let’s demand better.

 

Yours in privacy,
Naomi

Naomi Brockwell is a privacy advocate and professional speaker, MC, interviewer, producer and podcaster, specialising in blockchain, cryptocurrency and economics. She runs the NBTV channel on YouTube.

NATO implements AI system for military operations

The future of AI

Published 17 April 2025
– By Editorial Staff
Modern warfare increasingly resembles what only a few years ago was science fiction.

The military pact NATO has entered into an agreement with the American tech company Palantir to introduce the AI-powered system Maven Smart System (MSS) in its military operations.

The Nordic Times has previously highlighted Palantir’s founder Peter Thiel and his influence over the circle around Trump, and how the company’s AI technology has been used to develop drones that can identify Russians and automate killing.

NATO announced on April 14 that it has signed a contract with Palantir Technologies to implement the Maven Smart System (MSS NATO), within the framework of Allied Command Operations, reports DefenceScoop.

MSS NATO uses generative AI and machine learning to quickly process information, and the system is designed to provide sharper situational awareness by analyzing large amounts of data in real time.

This ranges from satellite imagery to intelligence reports, which are then used to identify targets and plan operations.

Terminator
In the “Terminator” movies, the remnants of the Earth’s population fight against the AI-controlled Skynet weapon system.

Modernizing warfare

According to the NATO Communications and Information Agency (NCIA), the aim is to modernize warfare capabilities. What used to require hundreds of intelligence analysts can now, with the help of MSS, be handled by a small group of 20-50 soldiers, according to the agency.

Palantir has previously supplied similar technology to the US Army, Air Force and Space Force. In September 2024, the company also signed a $100 million contract with the US military to expand the use of AI in targeting.

The system is expected to be operational as early as mid-May 2025.

The new deal has also caused financial markets to react and Palantir’s stock has risen. The company has also generally seen strong growth in recent years, with revenues increasing by 50% between 2022 and 2024.

Criticism and concerns

Palantir has previously been criticized for its cooperation with the Israeli Defense Forces, which led a major Nordic investor to cancel its involvement in the company. Criticisms include the risk of AI technology being used in ways that could violate human rights, especially in conflict zones.

On social media, the news has provoked mixed reactions. Mario Nawfal, a well-known voice on the platform X, wrote in a post that “NATO goes full Skynet”, referring to the fictional AI system in the Terminator movies, where technology takes control of the world.

Several critics express concerns about the implications of the technology, while others see it as a necessary step to counter modern threats.

NATO and Palantir stress that the technology does not replace human decision-making, emphasizing that the system is designed to support military leaders, not to act independently.

Nevertheless, there is a growing debate and concern about how AI’s role in warfare could affect future conflicts and global security. Some analysts also see the use of US technologies such as MSS as a way for NATO to strengthen ties across the Atlantic.

OpenAI may develop AI weapons for the Pentagon

The future of AI

Published 14 April 2025
– By Editorial Staff
Sam Altman's OpenAI is already working with defense technology company Anduril Industries.

OpenAI CEO Sam Altman does not rule out that he and his company will help the Pentagon develop new AI-based weapons systems in the future.

– I will never say never, because the world could get really weird, the tech billionaire cryptically states.

The statement came during Thursday’s Vanderbilt Summit on Modern Conflict and Emerging Threats, and Altman added that he does not believe he will be working on developing weapons systems for the US military “in the foreseeable future” – unless it is deemed the best of several bad options.

– I don’t think most of the world wants AI making weapons decisions, he continued.

The fact that companies developing consumer technology are also developing military weapons has long been highly controversial – and in 2018, for example, led to widespread protests within Google’s own workforce, with many also choosing to leave voluntarily or being forced out by company management.

Believes in “exceptionally smart” systems before year-end

However, the AI industry in particular has shown a much greater willingness to enter into such agreements, and OpenAI has revised its policy on work related to “national security” in the past year. Among other things, it has publicly announced a partnership with defense technology company Anduril Industries Inc to develop anti-drone technology.

Altman also stressed the need for the US government to increase its expertise in AI.

– I don’t think AI adoption in the government has been as robust as possible, he said, adding that there will be “exceptionally smart” AI systems in operation ready before the end of the year.

Altman and Paul Nakasone, a retired four-star general, attended the event ahead of the launch of OpenAI’s upcoming AI model, which is scheduled to be released next week. The audience included hundreds of representatives from intelligence agencies, the military and academia.