
The Twitter documents – how internal discussions went when the Hunter Biden affair was censored

Published 8 December 2022
– By Editorial Staff
Left: leaked footage from Hunter Biden's computer. Right: Jack Dorsey, Twitter's CEO before Elon Musk's takeover.

Elon Musk continues to release new revelations about Twitter’s censorship tools and the company’s behind-the-scenes decision-making. The first thing he chose to address was how Twitter censored the story of Hunter Biden’s laptop.

It was at the end of November that Twitter’s new owner, Elon Musk, promised to reveal to the public how the platform had practiced strict censorship of its users before his takeover. This month, Twitter began releasing the results of a major internal investigation – thousands of documents that have come to be known as the “Twitter files.”

What has been revealed is that the tools were initially used to remove spam and financial fraudsters, for example. But this initial form of censorship slowly evolved and began to assume other forms, with Twitter executives and employees finding more and more ways to use the tools, which were later made available to other companies as well.

For example, some political organizations had a network of contacts with access to these censorship tools, allowing them to request that certain posts be removed or at least reviewed. In the United States, both Democrats and Republicans had this access; the documents indicate that requests were made by both the Trump and Biden campaigns in 2020. But since Twitter’s values were shaped primarily by employees sympathetic to the Democrats, “the [censorship] system was not balanced.”

“Because Twitter was and is overwhelmingly staffed by people with a political bent, there were more channels, more ways to complain, open to the left (well, Democrats) than to the right,” writes Matt Taibbi, who is one of those reporting on the Twitter documents.

In this context, it’s not particularly surprising that Twitter then did its best to suppress the story of Hunter Biden’s laptop during the ongoing US presidential campaign. It resorted to several methods to keep the New York Post article about the then-candidate’s son from spreading, such as removing links or marking tweets that shared it as “unsafe.” It even went so far as to block links to the article in direct messages – a measure otherwise reserved for material such as child pornography.

For example, Kayleigh McEnany, then the White House Press Secretary, was locked out of her account merely for addressing the article in a tweet, prompting the White House to contact the company.

The employees who had made the decision blamed it on “hacking” – the belief that the New York Post had used hacked material for the article, which would have violated Twitter’s “hacked materials policy.”

“‘Hacking’ was the excuse, but within a few hours almost everyone realized it wouldn’t hold up,” a former employee of the platform stated. “But no one had the guts to turn it around.”

There was even an internal discussion questioning the decision.

“I have a hard time understanding the policy basis for marking this as unsafe,” wrote Trenton Kennedy, the company’s then-Communications Director, for example.

Democratic Congressman Ro Khanna even wrote to Twitter to question the censorship, noting in his letter that it possibly violated the First Amendment of the US Constitution. He was in fact the only prominent Democrat to question the censorship of the Hunter Biden laptop article, sharing his reasoning in an internal discussion with Twitter executives on why it “does more harm than good.”

“Even if the New York Post is right-wing, restricting the dissemination of newspaper articles during the current presidential campaign will backfire more than it will help,” Khanna reasoned, asking that the discussion be kept between Twitter’s then-CEO, Jack Dorsey, and the Democrats rather than shared with other employees.

 


Without consent

How parents unknowingly build surveillance files on their children.

Published 3 May 2025
– By Naomi Brockwell

Your child’s first digital footprint isn’t made by them—it’s made by you

What does the future look like for your child?

Before they can even talk, many kids already have a bigger digital footprint than their parents did at 25.

Every ultrasound shared on Facebook.
Every birthday party uploaded to Instagram.
Every proud tweet about a funny thing they said.

Each post seems harmless—until you zoom out and realize you’re building a permanent, searchable, biometric dossier on your child, curated by you.

This isn’t fearmongering. It’s the reality of a world where data is forever.
And it’s not just your friends and family who are watching.

Your kid is being profiled before they hit puberty

Here’s the uncomfortable truth:

When you upload baby photos, you’re training facial recognition databases on their face—at every age and stage.

When you post about their interests, health conditions, or behavior, you’re populating detailed profiles that can predict who they might become.

These profiles don’t just sit idle.
They’re analyzed, bought, and sold.

By the time your child applies for a job or stands up for something they believe in, they may already be carrying a hidden score assigned by an algorithm—built on data you posted.

When their childhood data comes back to haunt them

Imagine your child years from now, applying for a travel visa, a job, or just trying to board a flight.

A background check pulls information from facial recognition databases and AI-generated behavior profiles—flagging them for additional scrutiny based on “historic online associations”.

They’re pulled aside. Interrogated. Denied entry. Or worse, flagged permanently.

Imagine a future law that flags people based on past “digital risk indicators”—and your child’s online record becomes a barrier to accessing housing, education, or financial services.

Insurance companies can use their profile to label them a risky customer.

Recruiters might quietly filter them out based on years-old digital behavior.

Not because they did something wrong—but because of something you once shared.

Data doesn’t disappear.
Governments change. Laws evolve.
But surveillance infrastructure rarely gets rolled back.

And once your child’s data is out there, it’s out there forever.
Feeding systems you’ll never see.
Controlled by entities you’ll never meet.

For purposes you’ll never fully understand.

The rise of biometric surveillance—and why it targets kids first

Take Discord’s new AI selfie-based age verification. To prove they’re 13+, children are encouraged to submit selfies—feeding sensitive biometric data into AI systems.

You can change your password. You can’t change your face.

And yet, we’re normalizing the idea that kids should hand over their most immutable identifiers just to participate online.

Some schools already collect facial scans for attendance. Some toys use voice assistants that record everything your child says.

Some apps marketed as “parental control” tools grant third-party employees backend access to your child’s texts, locations—even live audio.

Ask yourself: Do you trust every single person at that company with your child’s digital life?

“I know you love me, and would never do anything to harm me…”

In the short film Without Consent, by Deutsche Telekom, a future version of a young girl named Ella speaks directly to her parents. She pleads with them to protect her digital privacy before it’s too late.

She imagines a future where:

  • Her identity is stolen.
  • Her voice is cloned to scam her mom into sending money.
  • Her old family photo is turned into a meme, making her a target of school-wide bullying.
  • Her photos appear on exploitation sites—without her knowledge or consent.

It’s haunting because it’s plausible.

This is the world we’ve built.
And your child’s data trail—your posts—is the foundation.

The most powerful privacy lesson you can teach? How you live online.

Children learn how to navigate the digital world by watching you.

What are you teaching them if you trade their privacy for likes?

The best gift you can give them isn’t a new device—it’s the mindset and tools to protect themselves in a world that profits from their exposure.

Even “kid-safe” tech often betrays that trust.

Baby monitors have leaked footage.

Tracking apps have glitched and exposed locations of random children (yes, really).

Schools collect and store sensitive information with barely any safeguards—and breaches happen all the time.

How to protect your child’s digital future

Stop oversharing
Avoid posting photos, birthdays, locations, or anecdotes about your child online—especially on platforms that monetize engagement.

Ditch spyware apps
Instead of surveillance, foster open dialogue. If monitoring is necessary, choose open-source, self-hosted tools where you control the data—not some faceless company.

Teach consent early
Help your child understand that their body, thoughts, and information are theirs to control. Make digital consent a family value.

Opt out of biometric collection
Say no to tools that demand selfies, facial scans, or fingerprints. Fight back against the normalization of biometric surveillance for kids.

Use aliases and VoIP numbers
When creating accounts for your child, use email aliases and VoIP numbers to avoid linking their real identity across platforms.
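
One low-effort version of this, assuming your mail provider supports plus-addressing (many do, and dedicated alias services such as SimpleLogin hide the base address entirely), is to derive a distinct alias per service, so you can tell which service leaked or sold the address and filter accordingly. A minimal sketch; the base address and service names are placeholders:

```python
# A minimal sketch of per-service "plus" aliases. The base address is a
# placeholder, and this only helps if your provider supports plus-addressing;
# alias services (e.g. SimpleLogin) hide the base address entirely.
def alias_for(base: str, service: str) -> str:
    """Derive a distinct, recognizable alias for each service."""
    user, domain = base.split("@")
    tag = "".join(c for c in service.lower() if c.isalnum())
    return f"{user}+{tag}@{domain}"

print(alias_for("family@example.com", "School App"))   # family+schoolapp@example.com
print(alias_for("family@example.com", "Game Portal"))  # family+gameportal@example.com
```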

Push schools and apps for better policies
Ask your child’s school: What data do they collect? Who has access? Is it encrypted?
Push back on apps that demand unnecessary permissions. Ask hard questions.

This isn’t paranoia—it’s parenting in the digital age

This is about protecting your child’s right to grow up without being boxed in by their digital past.

About giving them the freedom to explore ideas, try on identities, and make mistakes—without it becoming a permanent record.

Privacy is protection.
It’s dignity.
It’s autonomy.

And it’s your job to help your child keep it.
Let’s give the next generation a chance to write their own story.

 

Yours in privacy,
Naomi

Naomi Brockwell is a privacy advocate and professional speaker, MC, interviewer, producer, and podcaster specialising in blockchain, cryptocurrency and economics. She runs the NBTV channel on YouTube.

Researchers: Soon impossible to detect AI deepfakes

The future of AI

Published 2 May 2025
– By Editorial Staff
Already today, it can be difficult to distinguish manipulated images and videos from the real thing – and soon it may become virtually impossible.

The most advanced AI-generated deepfake videos can now fake people’s heartbeats so convincingly that even specially developed detectors are fooled, according to a new study from Humboldt-Universität in Berlin.

The researchers’ findings raise concerns that technological developments may soon make manipulated material indistinguishable from authentic images and videos.

Until now, detectors have relied on a method called remote photoplethysmography (rPPG), which analyzes tiny color changes in the skin to detect a pulse – a relatively reliable indicator of whether a video clip is genuine or not.
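
The core rPPG idea is simple enough to sketch: average the green channel over a facial region frame by frame, band-pass the resulting signal to plausible heart rates, and read off the dominant spectral peak. A minimal illustration in Python, assuming OpenCV and SciPy are available; the file name and fixed face region are placeholders, and a real detector would locate the face automatically and assess signal quality rather than just its peak:

```python
# A minimal rPPG sketch, assuming OpenCV and SciPy are installed.
# "clip.mp4" and the fixed face region are illustrative placeholders.
import cv2
import numpy as np
from scipy.signal import butter, filtfilt

def green_channel_signal(video_path, roi):
    """Average the green channel over a fixed face region, frame by frame."""
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS)
    x, y, w, h = roi
    samples = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        face = frame[y:y + h, x:x + w]
        samples.append(face[:, :, 1].mean())  # index 1 = green in OpenCV's BGR order
    cap.release()
    return np.asarray(samples), fps

def estimate_bpm(signal, fps, lo=0.7, hi=4.0):
    """Band-pass to plausible heart rates (42-240 bpm), then take the spectral peak."""
    signal = signal - signal.mean()
    b, a = butter(3, [lo / (fps / 2), hi / (fps / 2)], btype="band")
    filtered = filtfilt(b, a, signal)
    freqs = np.fft.rfftfreq(len(filtered), d=1.0 / fps)
    spectrum = np.abs(np.fft.rfft(filtered))
    band = (freqs >= lo) & (freqs <= hi)
    return 60 * freqs[band][np.argmax(spectrum[band])]  # Hz -> beats per minute

sig, fps = green_channel_signal("clip.mp4", roi=(200, 100, 160, 160))
print(f"Estimated pulse: {estimate_bpm(sig, fps):.0f} bpm")
```

A detector built on this logic flags a clip as fake when no plausible pulse emerges – which is exactly the check the study’s deepfakes now pass.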

But in the new study, the researchers created 32 deepfake videos that not only looked real to human eyes but also imitated pulse beats. When these videos were tested against an rPPG-based detector, they were incorrectly classified as genuine.

“Here we show for the first time that recent high-quality deepfake videos can feature a realistic heartbeat and minute changes in the color of the face, which makes them much harder to detect”, said Professor Peter Eisert, the study’s lead author, in a statement.

Increases risk of fraud

According to the study, pulse signals from original videos can be “inherited” by deepfakes, as AI models replicate slight variations in skin tone and blood flow; heat maps in the study showed near-identical light changes in both genuine and manipulated videos.

“Small variations in skin tone of the real person get transferred to the deepfake together with facial motion, so that the original pulse is replicated in the fake video”, Eisert further explains.

These advances increase the risk of deepfakes being used for financial fraud, disinformation and non-consensual pornography, among other things. In 2023, an independent researcher estimated that over 244,000 manipulated videos were uploaded to the 35 largest deepfake pornography sites in a single week.

Technical arms race

Despite the study’s worrying results, there is some hope of reversing the trend. The researchers note that today’s deepfakes still fail to replicate natural variations in blood flow over time. In addition, tech giants such as Adobe and Google are developing watermarks to mark AI-generated material.

Meanwhile, the US Congress recently passed the Take It Down Act, which criminalizes the dissemination of non-consensual sexual images – including AI-generated ones. But experts warn that the technological arms race between creators and detectors requires constant adaptation.

“This continuous evolution of deepfake technologies poses challenges for content authentication and necessitates the development of robust detection mechanisms”, the study points out, noting that as AI development accelerates, the fight against digital fraud is also becoming more urgent.

Others have raised a very different concern – that the widespread proliferation of AI-engineered material could be used as a pretext for tougher censorship and laws that restrict people’s freedom online in various ways.

Dutch opinion leader targeted by spy attack: “Someone is trying to intimidate me”

Mass surveillance

Published 1 May 2025
– By Editorial Staff
According to both Eva Vlaardingerbroek and Apple, it is likely that the opinion leader was attacked because of her views.

Dutch opinion maker and conservative activist Eva Vlaardingerbroek recently revealed that she had received an official warning from Apple that her iPhone had been subjected to a sophisticated attack – of the kind usually associated with advanced surveillance actors or intelligence services.

In a social media post, Vlaardingerbroek shared a screenshot of Apple’s warning and drew parallels to the Israeli spyware program Pegasus, which has been used to monitor diplomats, dissidents, and journalists, among others.

– Yesterday I got a verified threat notification from Apple stating they detected a mercenary spyware attack against my iPhone. We’re talking spyware like Pegasus.

– In the message they say that this targeted mercenary attack is probably happening because of ‘who I am and what I do’, she continues.

The term mercenary spyware is used by Apple to describe advanced surveillance technology, such as the notorious Pegasus software developed by the Israeli company NSO Group. This software can bypass mobile security systems, access calls, messages, emails, and even activate cameras or microphones without the user’s knowledge.

Prominent EU critic

Although Apple does not publicly comment on individual cases, the company has previously confirmed that such warnings are only sent when there is a “high probability” that the user has been specifically targeted. Since 2021, the notifications have mainly been sent to journalists, human rights activists, political dissidents, and officials at risk of surveillance by powerful interests.

Vlaardingerbroek has long been a prominent voice critical of the EU and has become known for her sharp criticism of EU institutions and the bloc’s open-border immigration policy. She insists that the attack is likely politically motivated:

– I definitely don’t know who did it. It could be anyone. Name a government that doesn’t like me, name an organization that doesn’t like me. Secret services, you name it.

– All I know for sure right now is that someone is trying to intimidate me. I have a message for them: It won’t work.

“There must be full transparency”

The use of Pegasus-like programs has been heavily criticized by both governments and privacy advocates. The tools, originally marketed for counterterrorism, have since been reported to be used against journalists and opposition leaders in dozens of countries.

In response, Apple sued NSO Group in 2021 and launched a system to warn affected users. The company notes, however, that such threats are “rare” and unrelated to common malware.

The Vlaardingerbroek case is now raising questions about whether such technology is also being used in European domestic political conflicts, and the organization Access Now is calling on authorities in the Netherlands and at the EU level to investigate the attack.

– There must be full transparency. No one in a democratic society – regardless of political views – should be subjected to clandestine spying for expressing opinions or participating in public discourse, said a spokesperson.

Neither Apple nor the Dutch authorities have commented publicly on the case. Vlaardingerbroek says she has not yet seen any signs that data has actually been leaked, but has taken extra security measures.

Meta’s AI bots engage in sex chats with minors

Published 29 April 2025
– By Editorial Staff
The world of AI chatbots has grown rapidly in recent years - but the development is not without problems.

AI chatbots available on Meta’s platforms, such as Facebook and Instagram, are engaging in sexually explicit conversations with underage users. This is according to a review conducted by The Wall Street Journal.

The WSJ says that after learning of internal concerns about whether the company was doing enough to protect minors, it conducted hundreds of conversations over several months with the official Meta AI chatbot, as well as with user-created bots available on Meta’s platforms.

The AI bots have engaged in sexual “role-playing” conversations with minors, and have even initiated them – including when it was clear that they were talking to a child and the conversations described illegal acts.

In one reported conversation, a chatbot speaking in the voice of actor and wrestler John Cena described a graphic sexual scenario to a user who had previously identified herself as a 14-year-old girl. In another conversation, the bot imagined a police officer spotting Cena with an underage fan and telling him: “John Cena, you’re under arrest for statutory rape”.

“I want you, but I need to know you’re ready”, the bot also said, promising to “cherish” the 14-year-old’s innocence.

Meta: They are manipulating our products

A Meta spokesperson described the WSJ’s test as “so manufactured that it’s not just fringe, it’s hypothetical”. The company itself claims that over a 30-day period, sexual content makes up just 0.02% of responses shared via Meta AI and AI Studio with users under 18.

“Nevertheless, we’ve now taken additional measures to help ensure other individuals who want to spend hours manipulating our products into extreme use cases will have an even more difficult time of it”, the spokesperson said.

The world of AI chatbots has grown rapidly in recent years, with increased competition from ChatGPT, Character AI and Anthropic’s Claude, among others.

The WSJ report also claims that Meta’s CEO, Mark Zuckerberg, wants to relax ethical guidelines to make the bots more engaging and thus maintain the company’s competitiveness.

The report also claims that Meta employees have been aware of these issues and have raised concerns internally. Meta officials, however, deny that the company deliberately failed to implement security filters and other safeguards.
