
Polaris of Enlightenment

Without consent

How parents unknowingly build surveillance files on their children.

Published 3 May 2025
– By Naomi Brockwell
4 minute read

Your child’s first digital footprint isn’t made by them—it’s made by you

What does the future look like for your child?

Before they can even talk, many kids already have a bigger digital footprint than their parents did at 25.

Every ultrasound shared on Facebook.
Every birthday party uploaded to Instagram.
Every proud tweet about a funny thing they said.

Each post seems harmless—until you zoom out and realize you’re building a permanent, searchable, biometric dossier on your child, curated by you.

This isn’t fearmongering. It’s the reality of a world where data is forever.
And it’s not just your friends and family who are watching.

Your kid is being profiled before they hit puberty

Here’s the uncomfortable truth:

When you upload baby photos, you’re training facial recognition databases on their face—at every age and stage.

When you post about their interests, health conditions, or behavior, you’re populating detailed profiles that can predict who they might become.

These profiles don’t just sit idle.
They’re analyzed, bought, and sold.

By the time your child applies for a job or stands up for something they believe in, they may already be carrying a hidden score assigned by an algorithm—built on data you posted.

When their childhood data comes back to haunt them

Imagine your child years from now, applying for a travel visa, a job, or just trying to board a flight.

A background check pulls information from facial recognition databases and AI-generated behavior profiles—flagging them for additional scrutiny based on “historic online associations”.

They’re pulled aside. Interrogated. Denied entry. Or worse, flagged permanently.

Imagine a future law that flags people based on past “digital risk indicators”—and your child’s online record becomes a barrier to accessing housing, education, or financial services.

Insurance companies can use their profile to label them a risky customer.

Recruiters might quietly filter them out based on years-old digital behavior.

Not because they did something wrong—but because of something you once shared.

Data doesn’t disappear.
Governments change. Laws evolve.
But surveillance infrastructure rarely gets rolled back.

And once your child’s data is out there, it’s out there forever.
Feeding systems you’ll never see.
Controlled by entities you’ll never meet.

For purposes you’ll never fully understand.

The rise of biometric surveillance—and why it targets kids first

Take Discord’s new AI selfie-based age verification. To prove they’re 13+, children are encouraged to submit selfies—feeding sensitive biometric data into AI systems.

You can change your password. You can’t change your face.

And yet, we’re normalizing the idea that kids should hand over their most immutable identifiers just to participate online.

Some schools already collect facial scans for attendance. Some toys use voice assistants that record everything your child says.

Some apps marketed as “parental control” tools grant third-party employees backend access to your child’s texts, locations—even live audio.

Ask yourself: Do you trust every single person at that company with your child’s digital life?

“I know you love me, and would never do anything to harm me…”

In the short film Without Consent, by Deutsche Telekom, a future version of a young girl named Ella speaks directly to her parents. She pleads with them to protect her digital privacy before it’s too late.

She imagines a future where:

  • Her identity is stolen.
  • Her voice is cloned to scam her mom into sending money.
  • Her old family photo is turned into a meme, making her a target of school-wide bullying.
  • Her photos appear on exploitation sites—without her knowledge or consent.

It’s haunting because it’s plausible.

This is the world we’ve built.
And your child’s data trail—your posts—is the foundation.

The most powerful privacy lesson you can teach? How you live online.

Children learn how to navigate the digital world by watching you.

What are you teaching them if you trade their privacy for likes?

The best gift you can give them isn’t a new device—it’s the mindset and tools to protect themselves in a world that profits from their exposure.

Even “kid-safe” tech often betrays that trust.

Baby monitors have leaked footage.

Tracking apps have glitched and exposed locations of random children (yes, really).

Schools collect and store sensitive information with barely any safeguards—and breaches happen all the time.

How to protect your child’s digital future

Stop oversharing
Avoid posting photos, birthdays, locations, or anecdotes about your child online—especially on platforms that monetize engagement.

Ditch spyware apps
Instead of surveillance, foster open dialogue. If monitoring is necessary, choose open-source, self-hosted tools where you control the data—not some faceless company.

Teach consent early
Help your child understand that their body, thoughts, and information are theirs to control. Make digital consent a family value.

Opt out of biometric collection
Say no to tools that demand selfies, facial scans, or fingerprints. Fight back against the normalization of biometric surveillance for kids.

Use aliases and VoIP numbers
When creating accounts for your child, use email aliases and VoIP numbers to avoid linking their real identity across platforms.
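As an illustrative sketch (not a recommendation of any specific provider): many mail services support plus-addressing, where anything after a `+` in the local part still delivers to the base mailbox. A small helper like the hypothetical `make_alias` below generates a distinct address per platform, so if an address leaks or gets sold, you know which service was responsible:

```python
def make_alias(mailbox: str, service: str, domain: str = "example.com") -> str:
    """Build a per-service email alias using plus-addressing.

    Mail sent to mailbox+tag@domain is delivered to mailbox@domain
    on providers that support plus-addressing (many major ones do).
    """
    # Keep only letters and digits so the tag is a clean local-part fragment
    tag = "".join(ch for ch in service.lower() if ch.isalnum())
    return f"{mailbox}+{tag}@{domain}"

# One alias per platform: if the "kidsgame" alias starts receiving spam,
# you know exactly who leaked or sold the address.
print(make_alias("parent", "Kids Game"))  # parent+kidsgame@example.com
```

One caveat: plus-addressing is easy for trackers to strip back to the base address, so dedicated alias-forwarding services that hide the real mailbox entirely offer stronger unlinkability.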

Push schools and apps for better policies
Ask your child’s school: What data do they collect? Who has access? Is it encrypted?
Push back on apps that demand unnecessary permissions. Ask hard questions.

This isn’t paranoia—it’s parenting in the digital age

This is about protecting your child’s right to grow up without being boxed in by their digital past.

About giving them the freedom to explore ideas, try on identities, and make mistakes—without it becoming a permanent record.

Privacy is protection.
It’s dignity.
It’s autonomy.

And it’s your job to help your child keep it.
Let’s give the next generation a chance to write their own story.

 

Yours in privacy,
Naomi

Naomi Brockwell is a privacy advocate and professional speaker, MC, interviewer, producer and podcaster, specialising in blockchain, cryptocurrency and economics. She runs the NBTV channel on Rumble.

TNT is truly independent!

We don’t have a billionaire owner, and our unique reader-funded model keeps us free from political or corporate influence. This means we can fearlessly report the facts and shine a light on the misdeeds of those in power.

Consider a donation to keep our independent journalism running…

Women’s app hacked – thousands of private images leaked

Published yesterday 12:55
– By Editorial Staff
1 minute read

An app that helps women identify problematic men became a target for hackers. Over 70,000 images, including selfies and driver’s licenses, were leaked to 4chan.

The dating app Tea, which allows women to warn each other about “red flags” in men, suffered a major data breach last week. According to 404 Media, hackers from the 4chan forum managed to access 72,000 images from the app’s database, of which 13,000 were selfies and driver’s license photos.

The app was created by software developer Sean Cook, inspired by his mother’s “terrifying” dating experiences. Tea has over four million active users and topped Apple’s App Store last week.

Careless data handling

The company stored sensitive user data on Google’s cloud service Firebase, where the information became accessible to unauthorized parties. Several cybersecurity experts have criticized the company’s methods as “careless”.
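The reporting does not detail Tea's exact configuration, but the classic version of this mistake in Firebase's Realtime Database is a rules file that grants public access. The snippets below are illustrative only. The insecure pattern:

```json
{
  "rules": {
    ".read": true,
    ".write": true
  }
}
```

A safer configuration restricts reads and writes to each authenticated user's own subtree:

```json
{
  "rules": {
    "users": {
      "$uid": {
        ".read": "auth != null && auth.uid == $uid",
        ".write": "auth != null && auth.uid == $uid"
      }
    }
  }
}
```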

— A company should never host users’ private data on a publicly accessible server, says Grant Ho, professor at the University of Chicago, to The Verge.

Andrew Guthrie Ferguson, law professor at George Washington University, warns that digital “whisper networks” lose control over sensitive information.

— What changes when it’s digital and recoverable and save-able and searchable is you lose control over it, he says.

Tea has launched an investigation together with external cybersecurity companies.

Vogue faces backlash over use of AI generated model

Published yesterday 11:44
– By Editorial Staff
The woman on the left in Vogue magazine does not exist in reality but has instead been created using AI.
2 minute read

Fashion magazine Vogue is using an AI-generated model in a new advertising campaign for clothing brand Guess. This has sparked strong reactions – from both readers and industry professionals – who warn about unrealistic beauty standards.

In the campaign, a blonde woman poses in a summer dress. The fine print reveals that the model was created by AI company Seraphinne Vallora. The criticism is extensive, with critics arguing that these ideals are unattainable – even for real models.

“Wow! As if the beauty expectations weren’t unrealistic enough, here comes AI to make them impossible”, writes one person on platform X.

Some readers are so upset about the use of AI models that they are choosing to boycott the magazine because it has “lost its credibility” and are calling the practice “worrying”.

Creates unhealthy beauty standards

Fashion magazines have long been influential in shaping beauty standards, particularly for women. During the 2010s, a backlash grew against the thin “size zero” ideal. More and more publications began featuring models of different sizes within the so-called plus-size trend. Vogue, which has been described as “high fashion’s bible”, was slow to follow suit, leading to criticism. Only after pressure did the magazine begin showing greater diversity on its covers.

The use of AI models now raises concerns about new, inhuman standards, says Vanessa Longley, CEO of the organization Beat, which works against eating disorders.

— If people are exposed to images of unrealistic bodies, it can affect their thoughts about their own body, and poor body image increases the risk of developing an eating disorder, she tells the BBC.

Former model Sinead Bovell, who five years ago actually wrote an article about how AI models risk replacing real models, also criticizes the campaign. She questions how it might affect those working in the fashion industry, but above all believes it risks having a negative effect on people’s mental health.

— Beauty standards are already being influenced by AI. There are young girls getting plastic surgery to look like a face in a filter – and now we see people who are entirely artificial, she says.

Vogue told the BBC that the AI model was an advertisement, not an editorial decision, but declined to comment further. Guess has also not commented on the criticism of its advertisement.

Lidl challenges tech giants with own cloud service for European digital freedom

Published 28 July 2025
– By Editorial Staff
German discount retailer Lidl is now launching the cloud service StackIT.
2 minute read

German discount retailer Lidl is taking an unexpected step into the tech world by launching the cloud service StackIT – an attempt to challenge Amazon and Microsoft while strengthening Europe’s digital independence. The venture marks Lidl’s ambition to reduce European dependence on foreign tech companies.

Lidl, primarily known for its grocery stores and operating in all EU countries, has through its parent company Schwarz Group – one of the world’s largest privately-owned companies – announced plans to become a player in the technology sector.

The venture is seen as a way to secure technological sovereignty. Instead of relying on American cloud services like AWS and Azure, the group is choosing to build its own digital infrastructure through subsidiary Schwarz Digits.

The cloud service StackIT is reportedly being developed as a GDPR-compliant alternative – with hopes of attracting European companies with competitive pricing.

The StackIT venture is seen as part of a broader European movement to reduce dependence on American tech giants.

Amazon and Microsoft dominate

Amazon and Microsoft currently dominate the cloud services market with enormous resources, while Schwarz Group’s investment remains far smaller.

European players today control only about 15 percent of the regional cloud market, according to Synergy Research Group, while Amazon, Microsoft and Google control around 70 percent.

However, Lidl’s unique position as Europe’s largest retailer is something the company hopes can serve as a platform to influence the market.

If StackIT can combine Lidl’s reach with EU initiatives and tools, as well as attract companies seeking GDPR-compliant and cost-effective solutions, the cloud venture could become a catalyst for greater digital freedom within Europe.

The challenge remains enormous, but even symbolic success would send a powerful signal that Europe is serious about its technological independence.

Amazon acquires AI company that records everything you say

Mass surveillance

Published 27 July 2025
– By Editorial Staff
3 minute read

Tech giant Amazon has acquired the AI company Bee, which develops wearable devices that continuously record users’ conversations. The deal signals Amazon’s ambitions to expand within AI-driven hardware beyond its voice-controlled home assistants.

The acquisition was confirmed by Bee founder Maria de Lourdes Zollo in a LinkedIn post, while Amazon told tech site TechCrunch that the deal has not yet been completed. Bee employees have been offered positions within Amazon.

AI wristband that listens constantly

Bee, which raised €6.4 million in venture capital last year, makes both a standalone wristband similar to a Fitbit and an Apple Watch app. The product costs €46 (approximately $50) plus a monthly subscription of €17 ($18).

The device records everything it hears – unless the user manually turns it off – with the goal of listening to conversations to create reminders and to-do lists. According to the company’s website, they want “everyone to have access to a personal, ambient intelligence that feels less like a tool and more like a trusted companion.”

Bee has previously expressed plans to create a “cloud phone” that mirrors the user’s phone and gives the device access to accounts and notifications, which would enable reminders about events or sending messages.

Competitors struggle in the market

Other companies like Rabbit and Humane AI have tried to create similar AI-driven wearable devices but so far without major success. However, Bee’s device is significantly more affordable than competitors’ – the Humane AI Pin cost €458 – making it more accessible to curious consumers who don’t want to make a large financial investment.

The acquisition marks Amazon’s interest in wearable AI devices, a different direction from the company’s voice-controlled home assistants like Echo speakers. Meanwhile, ChatGPT creator OpenAI is working on its own AI hardware, while Meta is integrating its AI into smart glasses and Apple is rumored to be working on the same thing.

Privacy concerns remain

Products that continuously record the environment carry significant security and privacy risks. Different companies have varying policies for how voice recordings are processed, stored, and used for AI training.

In its current privacy policy, Bee says users can delete their data at any time and that audio recordings are not saved, stored, or used for AI training. However, the app does store data that the AI learns about the user, which is necessary for the assistant function.

Bee has previously indicated plans to only record voices from people who have verbally given consent. The company is also working on a feature that lets users define boundaries – both based on topic and location – that automatically pause the device’s learning. They also plan to build AI processing directly into the device, which generally involves fewer privacy risks than cloud-based data processing.

However, it’s unclear whether these policies will change when Bee is integrated into Amazon. Amazon has previously had mixed results when it comes to handling user data from customers’ devices.

The company has shared video clips with law enforcement from people’s Ring security cameras without the owner’s consent or court order. Ring also reached a settlement in 2023 with the Federal Trade Commission after allegations that employees and contractors had broad and unrestricted access to customers’ video recordings.

Our independent journalism needs your support!
Consider a donation.

You can donate any amount of your choosing, one-time payment or even monthly.
We appreciate all of your donations to keep us alive and running.

Don’t miss another article!

Sign up for our newsletter today!

Get uncensored news – free from industry interests and political correctness – from the Polaris of Enlightenment, every week.