
Elon Musk to create ChatGPT competitor

Published 21 April 2023
– By Editorial Staff
Elon Musk claims he wants to create a "truth-seeking" AI service.

Entrepreneur and multi-billionaire Elon Musk announces plans to develop a better alternative to the ChatGPT chatbot, accusing the Microsoft-funded AI service of being programmed by “left-wing experts” who “taught the chatbot to lie”.

Musk, who owns Tesla and Twitter, accused Google co-founder Larry Page in an interview with Fox News’ Tucker Carlson of neither caring about nor understanding AI safety.

– I’m going to start something that I call TruthGPT, or a maximum truth-seeking AI that tries to understand the nature of the universe.

– I think this may be the best path to safety, in the sense that an AI that cares about understanding the universe is unlikely to annihilate humanity because we are an interesting part of the universe, he continues.

According to Reuters news agency sources, Musk has already hired several of Google’s AI researchers to work on developing the service.

Musk has previously warned about the risks of AI and still believes that it has the “potential” to destroy human civilization.

Musk was also one of the co-founders of OpenAI, the company behind ChatGPT – but stepped down from its board in 2018 to focus on his work at Tesla and SpaceX.


PoX: New memory chip from China sets speed record

Published 7 May 2025, 8:15
– By Editorial Staff
Fudan engineers are now working on scaling up the technology and developing new prototypes.

A research team at Fudan University in China has developed the fastest semiconductor memory reported to date. The new memory, called PoX, is a type of non-volatile flash memory that can write a single bit in just 400 picoseconds – equivalent to roughly 2.5 billion write operations per second.
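As a quick back-of-the-envelope check – our arithmetic, not a figure from the paper – a 400-picosecond write time per bit converts to operations per second like this:

    # Convert a per-bit write time into write operations per second
    write_time_s = 400e-12               # 400 picoseconds, expressed in seconds
    ops_per_second = 1 / write_time_s    # one write completes every 400 ps
    print(f"{ops_per_second:.2e} writes/s")  # 2.50e+09, about 2.5 billion per second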

The results were recently published in the scientific journal Nature. Unlike traditional RAM (such as SRAM and DRAM), which is fast but loses its data when power is cut, non-volatile memory such as flash retains stored information without power. The problem has been that these memories are significantly slower – often thousands of times slower – a bottleneck for today’s AI systems, which handle huge amounts of data in real time.

The research team, led by Professor Zhou Peng, achieved the breakthrough by replacing silicon channels with two-dimensional Dirac graphene – a material that allows extremely fast charge transfer. By fine-tuning the so-called “Gaussian length” of the channel, the researchers were able to create a phenomenon they call two-dimensional superinjection, which allows effectively unlimited charge transfer to the memory storage.

– Using AI-driven process optimization, we drove non-volatile memory to its theoretical limit. This paves the way for future high-speed flash memory, Zhou told the Chinese news agency Xinhua.

“Opens up new applications”

Co-author Liu Chunsen compares the difference to going from a USB flash drive that can perform 1,000 writes per second to a chip that performs a billion in the same time.

The technology combines low power consumption with extreme speed and could be particularly valuable for AI in battery-powered devices and systems with limited power supplies. If PoX can be mass-produced, it could reduce the need for separate caches, cut energy use and enable instant start-up of computers and mobiles.

Fudan engineers are now working on scaling up the technology and developing prototypes. No commercial partnerships have yet been announced.

– Our breakthrough can reshape storage technology, drive industrial upgrades and open new application scenarios, Zhou asserts.

Without consent

How parents unknowingly build surveillance files on their children.

Published 3 May 2025
– By Naomi Brockwell

Your child’s first digital footprint isn’t made by them—it’s made by you

What does the future look like for your child?

Before they can even talk, many kids already have a bigger digital footprint than their parents did at 25.

Every ultrasound shared on Facebook.
Every birthday party uploaded to Instagram.
Every proud tweet about a funny thing they said.

Each post seems harmless—until you zoom out and realize you’re building a permanent, searchable, biometric dossier on your child, curated by you.

This isn’t fearmongering. It’s the reality of a world where data is forever.
And it’s not just your friends and family who are watching.

Your kid is being profiled before they hit puberty

Here’s the uncomfortable truth:

When you upload baby photos, you’re training facial recognition databases on their face—at every age and stage.

When you post about their interests, health conditions, or behavior, you’re populating detailed profiles that can predict who they might become.

These profiles don’t just sit idle.
They’re analyzed, bought, and sold.

By the time your child applies for a job or stands up for something they believe in, they may already be carrying a hidden score assigned by an algorithm—built on data you posted.

When their childhood data comes back to haunt them

Imagine your child years from now, applying for a travel visa, a job, or just trying to board a flight.

A background check pulls information from facial recognition databases and AI-generated behavior profiles—flagging them for additional scrutiny based on “historic online associations”.

They’re pulled aside. Interrogated. Denied entry. Or worse, flagged permanently.

Imagine a future law that flags people based on past “digital risk indicators”—and your child’s online record becomes a barrier to accessing housing, education, or financial services.

Insurance companies could use their profile to label them a risky customer.

Recruiters might quietly filter them out based on years-old digital behavior.

Not because they did something wrong—but because of something you once shared.

Data doesn’t disappear.
Governments change. Laws evolve.
But surveillance infrastructure rarely gets rolled back.

And once your child’s data is out there, it’s out there forever.
Feeding systems you’ll never see.
Controlled by entities you’ll never meet.

For purposes you’ll never fully understand.

The rise of biometric surveillance—and why it targets kids first

Take Discord’s new AI selfie-based age verification. To prove they’re 13+, children are encouraged to submit selfies—feeding sensitive biometric data into AI systems.

You can change your password. You can’t change your face.

And yet, we’re normalizing the idea that kids should hand over their most immutable identifiers just to participate online.

Some schools already collect facial scans for attendance. Some toys use voice assistants that record everything your child says.

Some apps marketed as “parental control” tools grant third-party employees backend access to your child’s texts, locations—even live audio.

Ask yourself: Do you trust every single person at that company with your child’s digital life?

“I know you love me, and would never do anything to harm me…”

In the short film Without Consent, by Deutsche Telekom, a future version of a young girl named Ella speaks directly to her parents. She pleads with them to protect her digital privacy before it’s too late.

She imagines a future where:

  • Her identity is stolen.
  • Her voice is cloned to scam her mom into sending money.
  • Her old family photo is turned into a meme, making her a target of school-wide bullying.
  • Her photos appear on exploitation sites—without her knowledge or consent.

It’s haunting because it’s plausible.

This is the world we’ve built.
And your child’s data trail—your posts—is the foundation.

The most powerful privacy lesson you can teach? How you live online.

Children learn how to navigate the digital world by watching you.

What are you teaching them if you trade their privacy for likes?

The best gift you can give them isn’t a new device—it’s the mindset and tools to protect themselves in a world that profits from their exposure.

Even “kid-safe” tech often betrays that trust.

Baby monitors have leaked footage.

Tracking apps have glitched and exposed locations of random children (yes, really).

Schools collect and store sensitive information with barely any safeguards—and breaches happen all the time.

How to protect your child’s digital future

Stop oversharing
Avoid posting photos, birthdays, locations, or anecdotes about your child online—especially on platforms that monetize engagement.

Ditch spyware apps
Instead of surveillance, foster open dialogue. If monitoring is necessary, choose open-source, self-hosted tools where you control the data—not some faceless company.

Teach consent early
Help your child understand that their body, thoughts, and information are theirs to control. Make digital consent a family value.

Opt out of biometric collection
Say no to tools that demand selfies, facial scans, or fingerprints. Fight back against the normalization of biometric surveillance for kids.

Use aliases and VoIP numbers
When creating accounts for your child, use email aliases and VoIP numbers to avoid linking their real identity across platforms.

Push schools and apps for better policies
Ask your child’s school: What data do they collect? Who has access? Is it encrypted?
Push back on apps that demand unnecessary permissions. Ask hard questions.

This isn’t paranoia—it’s parenting in the digital age

This is about protecting your child’s right to grow up without being boxed in by their digital past.

About giving them the freedom to explore ideas, try on identities, and make mistakes—without it becoming a permanent record.

Privacy is protection.
It’s dignity.
It’s autonomy.

And it’s your job to help your child keep it.
Let’s give the next generation a chance to write their own story.

Yours in privacy,
Naomi

Naomi Brockwell is a privacy advocate and professional speaker, MC, interviewer, producer and podcaster, specializing in blockchain, cryptocurrency and economics. She runs the NBTV channel on YouTube.

Researchers: Soon impossible to detect AI deepfakes

The future of AI

Published 2 May 2025
– By Editorial Staff
Already today, it can be difficult to distinguish manipulated images and videos from the real thing – and soon it may become virtually impossible.

The most advanced AI-generated deepfake videos can now fake people’s heartbeats so convincingly that even purpose-built detectors are fooled, according to a new study from Humboldt-Universität in Berlin.

The researchers’ findings raise concerns that technological developments may soon make manipulated material indistinguishable from authentic images and video.

Until now, detectors have relied on remote photoplethysmography (rPPG), a method that analyzes tiny color changes in the skin to detect a pulse – a relatively reliable indicator of whether a video clip is genuine.
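For readers curious how this works under the hood, the core of an rPPG pulse estimate can be sketched in a few lines of Python. This is a simplified illustration of the general technique, not the detector used in the study; the function name and the assumption of a pre-cropped RGB face video are ours:

    import numpy as np

    def estimate_pulse_bpm(frames: np.ndarray, fps: float) -> float:
        """Estimate heart rate from a face-cropped video.

        frames: array of shape (n_frames, height, width, 3), RGB.
        Blood flow subtly modulates skin color; the green channel
        carries the strongest pulse signal.
        """
        # Average the green channel over the face region, frame by frame
        signal = frames[:, :, :, 1].mean(axis=(1, 2))
        # Remove the slow baseline (lighting drift, motion) with a
        # one-second moving average
        baseline = np.convolve(signal, np.ones(int(fps)) / fps, mode="same")
        signal = signal - baseline
        # Find the dominant frequency within a plausible heart-rate band
        windowed = signal * np.hanning(len(signal))
        freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
        spectrum = np.abs(np.fft.rfft(windowed))
        band = (freqs >= 0.7) & (freqs <= 4.0)  # 42-240 beats per minute
        peak_hz = freqs[band][np.argmax(spectrum[band])]
        return peak_hz * 60.0  # convert Hz to beats per minute

A detector built on this idea flags a clip as fake when no plausible, stable pulse emerges from the skin signal – which is exactly the check the new deepfakes now pass.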

But in the new study, the researchers created 32 deepfake videos that not only looked real to human eyes but also imitated pulse beats. When these videos were tested against an rPPG-based detector, they were incorrectly classified as genuine.

“Here we show for the first time that recent high-quality deepfake videos can feature a realistic heartbeat and minute changes in the color of the face, which makes them much harder to detect”, said Professor Peter Eisert, the study’s lead author, in a statement.

Increases risk of fraud

According to the study, pulse signals from the original videos can be “inherited” by deepfakes, since the AI models replicate slight variations in skin tone and blood flow. Heat maps in the study showed near-identical light changes in both genuine and manipulated videos.

“Small variations in skin tone of the real person get transferred to the deepfake together with facial motion, so that the original pulse is replicated in the fake video”, Eisert further explains.

These advances increase the risk that deepfakes will be used for financial fraud, disinformation and non-consensual pornography, among other things. In 2023, an independent researcher estimated that over 244,000 manipulated videos were uploaded to the 35 largest deepfake pornography sites in a single week.

Technical arms race

Despite the study’s worrying results, there is some hope of reversing the trend. The researchers note that today’s deepfakes still fail to replicate natural variations in blood flow over time. In addition, tech giants such as Adobe and Google are developing watermarks to mark AI-generated material.

Meanwhile, the US Congress recently passed the Take It Down Act, which criminalizes the dissemination of non-consensual sexual images – including AI-generated ones. But experts warn that the technological arms race between creators and detectors requires constant adaptation.

“This continuous evolution of deepfake technologies poses challenges for content authentication and necessitates the development of robust detection mechanisms”, the study points out, noting that as AI development accelerates, the fight against digital fraud is also becoming more urgent.

Others have raised a very different concern – that the widespread proliferation of AI-engineered material could be used as a pretext for tougher censorship and laws that restrict people’s freedom online in various ways.

Dutch opinion leader targeted by spy attack: “Someone is trying to intimidate me”

Mass surveillance

Published 1 May 2025
– By Editorial Staff
According to both Eva Vlaardingerbroek and Apple, it is likely that the opinion leader was attacked because of her views.

Dutch opinion maker and conservative activist Eva Vlaardingerbroek recently revealed that she had received an official warning from Apple that her iPhone had been subjected to a sophisticated attack – of the kind usually associated with advanced surveillance actors or intelligence services.

In a social media post, Vlaardingerbroek shared a screenshot of Apple’s warning and drew parallels to the Israeli spyware program Pegasus, which has been used to monitor diplomats, dissidents, and journalists, among others.

– Yesterday I got a verified threat notification from Apple stating they detected a mercenary spyware attack against my iPhone. We’re talking spyware like Pegasus.

– In the message they say that this targeted mercenary attack is probably happening because of ‘who I am and what I do’, she continues.

The term mercenary spyware is used by Apple to describe advanced surveillance technology, such as the notorious Pegasus software developed by the Israeli company NSO Group. This software can bypass mobile security systems, access calls, messages, emails, and even activate cameras or microphones without the user’s knowledge.

Prominent EU critic

Although Apple does not publicly comment on individual cases, the company has previously confirmed that such warnings are only sent when there is a “high probability” that the user has been specifically targeted. Since 2021, the notifications have mainly been sent to journalists, human rights activists, political dissidents, and officials at risk of surveillance by powerful interests.

Vlaardingerbroek has long been a prominent critic of the EU, known for her sharp criticism of its institutions and its open-border immigration policy. She insists that the attack is likely politically motivated:

– I definitely don’t know who did it. It could be anyone. Name a government that doesn’t like me, name an organization that doesn’t like me. Secret services, you name it.

– All I know for sure right now is that someone is trying to intimidate me. I have a message for them: It won’t work.

“There must be full transparency”

The use of Pegasus-like programs has been heavily criticized by both governments and privacy advocates. The tools, originally marketed for counterterrorism, have since been reported to be used against journalists and opposition leaders in dozens of countries.

In response, Apple sued NSO Group in 2021 and launched a system to warn users. However, the company claims that the threats are “rare” and not related to common malware.

The Vlaardingerbroek case is now raising questions about whether such technology is also being used in European domestic political conflicts, and the organization Access Now is calling on authorities in the Netherlands and at the EU level to investigate the attack.

– There must be full transparency. No one in a democratic society – regardless of political views – should be subjected to clandestine spying for expressing opinions or participating in public discourse, said a spokesperson.

Neither Apple nor the Dutch authorities have commented publicly on the case. Vlaardingerbroek says she has not yet seen any signs that data has actually been leaked, but has taken extra security measures.
