Without consent

How parents unknowingly build surveillance files on their children.

Published 3 May 2025
– By Naomi Brockwell
4 minute read

Your child’s first digital footprint isn’t made by them—it’s made by you

What does the future look like for your child?

Before they can even talk, many kids already have a bigger digital footprint than their parents did at 25.

Every ultrasound shared on Facebook.
Every birthday party uploaded to Instagram.
Every proud tweet about a funny thing they said.

Each post seems harmless—until you zoom out and realize you’re building a permanent, searchable, biometric dossier on your child, curated by you.

This isn’t fearmongering. It’s the reality of a world where data is forever.
And it’s not just your friends and family who are watching.

Your kid is being profiled before they hit puberty

Here’s the uncomfortable truth:

When you upload baby photos, you’re training facial recognition databases on their face—at every age and stage.

When you post about their interests, health conditions, or behavior, you’re populating detailed profiles that can predict who they might become.

These profiles don’t just sit idle.
They’re analyzed, bought, and sold.

By the time your child applies for a job or stands up for something they believe in, they may already be carrying a hidden score assigned by an algorithm—built on data you posted.

When their childhood data comes back to haunt them

Imagine your child years from now, applying for a travel visa, a job, or just trying to board a flight.

A background check pulls information from facial recognition databases and AI-generated behavior profiles—flagging them for additional scrutiny based on “historic online associations”.

They’re pulled aside. Interrogated. Denied entry. Or worse, flagged permanently.

Imagine a future law that flags people based on past “digital risk indicators”—and your child’s online record becomes a barrier to accessing housing, education, or financial services.

Insurance companies can use their profile to label them a risky customer.

Recruiters might quietly filter them out based on years-old digital behavior.

Not because they did something wrong—but because of something you once shared.

Data doesn’t disappear.
Governments change. Laws evolve.
But surveillance infrastructure rarely gets rolled back.

And once your child’s data is out there, it’s out there forever.
Feeding systems you’ll never see.
Controlled by entities you’ll never meet.

For purposes you’ll never fully understand.

The rise of biometric surveillance—and why it targets kids first

Take Discord’s new AI selfie-based age verification. To prove they’re 13+, children are encouraged to submit selfies—feeding sensitive biometric data into AI systems.

You can change your password. You can’t change your face.

And yet, we’re normalizing the idea that kids should hand over their most immutable identifiers just to participate online.

Some schools already collect facial scans for attendance. Some toys use voice assistants that record everything your child says.

Some apps marketed as “parental control” tools grant third-party employees backend access to your child’s texts, locations—even live audio.

Ask yourself: Do you trust every single person at that company with your child’s digital life?

“I know you love me, and would never do anything to harm me…”

In the short film Without Consent, by Deutsche Telekom, a future version of a young girl named Ella speaks directly to her parents. She pleads with them to protect her digital privacy before it’s too late.

She imagines a future where:

  • Her identity is stolen.
  • Her voice is cloned to scam her mom into sending money.
  • Her old family photo is turned into a meme, making her a target of school-wide bullying.
  • Her photos appear on exploitation sites—without her knowledge or consent.

It’s haunting because it’s plausible.

This is the world we’ve built.
And your child’s data trail—your posts—is the foundation.

The most powerful privacy lesson you can teach? How you live online.

Children learn how to navigate the digital world by watching you.

What are you teaching them if you trade their privacy for likes?

The best gift you can give them isn’t a new device—it’s the mindset and tools to protect themselves in a world that profits from their exposure.

Even “kid-safe” tech often betrays that trust.

Baby monitors have leaked footage.

Tracking apps have glitched and exposed locations of random children (yes, really).

Schools collect and store sensitive information with barely any safeguards—and breaches happen all the time.

How to protect your child’s digital future

Stop oversharing
Avoid posting photos, birthdays, locations, or anecdotes about your child online—especially on platforms that monetize engagement.

Ditch spyware apps
Instead of surveillance, foster open dialogue. If monitoring is necessary, choose open-source, self-hosted tools where you control the data—not some faceless company.

Teach consent early
Help your child understand that their body, thoughts, and information are theirs to control. Make digital consent a family value.

Opt out of biometric collection
Say no to tools that demand selfies, facial scans, or fingerprints. Fight back against the normalization of biometric surveillance for kids.

Use aliases and VoIP numbers
When creating accounts for your child, use email aliases and VoIP numbers to avoid linking their real identity across platforms.
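To make the idea concrete, here is a minimal, hypothetical Python sketch of how per-platform aliases keep an identity unlinkable. In practice you would more likely use an alias service such as SimpleLogin or addy.io; the secret key and domain below are placeholders, not a real setup.

```python
# Illustrative only: derive a distinct, stable email alias per platform so
# accounts can't be linked through one shared address. The secret and the
# domain are hypothetical; a real setup would use an alias service or a
# catch-all domain you control.
import hashlib
import hmac

SECRET = b"change-this-family-secret"   # hypothetical private key
DOMAIN = "alias.example.com"            # hypothetical catch-all domain

def alias_for(platform: str) -> str:
    # HMAC makes each alias reproducible for you but unguessable to others
    tag = hmac.new(SECRET, platform.encode(), hashlib.sha256).hexdigest()[:10]
    return f"{tag}@{DOMAIN}"

print(alias_for("gaming-site"))   # a unique address for this platform
print(alias_for("school-app"))    # a different, unlinkable address
```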

Push schools and apps for better policies
Ask your child’s school: What data do they collect? Who has access? Is it encrypted?
Push back on apps that demand unnecessary permissions. Ask hard questions.

This isn’t paranoia—it’s parenting in the digital age

This is about protecting your child’s right to grow up without being boxed in by their digital past.

About giving them the freedom to explore ideas, try on identities, and make mistakes—without it becoming a permanent record.

Privacy is protection.
It’s dignity.
It’s autonomy.

And it’s your job to help your child keep it.
Let’s give the next generation a chance to write their own story.

 

Yours in privacy,
Naomi

Naomi Brockwell is a privacy advocate and professional speaker, MC, interviewer, producer, and podcaster, specialising in blockchain, cryptocurrency, and economics. She runs the NBTV channel on YouTube.


Spotify fills playlists with fake music – while CEO invests millions in military AI

The future of AI

Published 1 July 2025
– By Editorial Staff
Spotify CEO Daniel Ek accused of diverting artist royalties to military AI development.
3 minute read

Swedish streaming giant Spotify promotes anonymous pseudo-musicians and computer-generated music to avoid paying royalties to real artists, according to a new book by music journalist Liz Pelly.

Meanwhile, criticism grows against Spotify CEO Daniel Ek, who recently invested over €600 million in a company developing AI technology for future warfare.

In the book Mood Machine: The Rise of Spotify and the Costs of the Perfect Playlist, Liz Pelly reveals that Spotify has long been running a secret internal program called Perfect Fit Content (PFC). The program creates cheap, generic background music – often called “muzak” – through a network of production companies with ties to Spotify. This music is then placed in Spotify’s popular playlists, often without crediting any real artists.

The program was tested as early as 2010 and is described by Pelly as Spotify’s most profitable strategy since 2017.

“But it also raises worrying questions for all of us who listen to music. It puts forth an image of a future in which – as streaming services push music further into the background, and normalize anonymous, low-cost playlist filler – the relationship between listener and artist might be severed completely”, Pelly writes.

By 2023, the PFC program controlled hundreds of playlists. More than 150 of them – with names like Deep Focus, Cocktail Jazz, and Morning Stretch – consisted entirely of music produced within PFC.

“Only soulless AI music will remain”

A jazz musician told Pelly that Spotify asked him to create an ambient track for a few hundred dollars as a one-time payment. However, he couldn’t retain the rights to the music. When the track later received millions of plays, he realized he had likely been deceived.

Social media criticism has been harsh. One user writes: “In a few years, only soulless AI music will remain. It’s an easy way to avoid paying royalties to anyone.”

“I deleted Spotify and cancelled my subscription”, comments another.

Spotify has previously faced criticism for similar practices. The Guardian reported in February that the company’s Discovery Mode system allows artists to gain more visibility – but only if they agree to receive 30 percent less payment.

Spotify’s CEO invests in AI for warfare

Meanwhile, CEO Daniel Ek has faced severe criticism for investing over €600 million through his investment firm Prima Materia in the German AI company Helsing. The company develops software for drones, fighter aircraft, submarines, and other military systems.

– The world is being tested in more ways than ever before. That has sped up the timeline. There’s an enormous realisation that it’s really now AI, mass and autonomy that is driving the new battlefield, Ek commented in an interview with the Financial Times.

With this investment, Ek has also become chairman of Helsing. The company is working on a project called Centaur, where artificial intelligence will be used to control fighter aircraft.

The criticism was swift. Australian producer Bluescreen explained in an interview with music site Resident Advisor why he chose to leave Spotify – a decision several other music creators have also made.

– War is hell. There’s nothing ethical about it, no matter how you spin it. I also left because it became apparent very quickly that Spotify’s CEO, as all billionaires, only got rich off the exploitation of others.

Competitor chooses different path

Spotify has previously been questioned for its proximity to political power. The company donated $150,000 to Donald Trump’s inauguration fund in 2017 and hosted an exclusive brunch the day before the ceremony.

While Spotify is heavily investing in AI-generated music and voice-controlled DJs, competitor SoundCloud has chosen a different path.

– We do not develop AI tools or allow third parties to scrape or use SoundCloud content from our platform for AI training purposes, explains communications director Marni Greenberg.

– In fact, we implemented technical safeguards, including a ‘no AI’ tag on our site to explicitly prohibit unauthorised use.

FUTO – the obvious choice for privacy-friendly voice and text input on mobile devices

Advertising partnership with Teuton Systems

Ditch Google's input apps and keep what you type and say on your phone.

Published 1 July 2025
3 minute read

In our series on open, surveillance-free apps, we take a closer look at FUTO Voice Input and FUTO Keyboard – two apps that together challenge the established alternatives for voice input and keyboards on mobile devices. Most smartphone users are accustomed to dictating text through Google’s services or typing on standard keyboards like Gboard or SwiftKey.

However, few consider that these popular tools often collect what you privately say and type and send it to tech giants. The FUTO team emphasizes that their solution eliminates this problem entirely: everything runs locally on the device, offline, with no data ever leaving the phone.

Here’s what the FUTO apps offer:

  • Privacy focus: FUTO apps run completely offline – no data is sent to the cloud.
  • Full functionality: Swipe typing, text suggestions, autocorrection, and voice-to-text with punctuation – everything works without an internet connection.
  • High precision: Offline dictation using an advanced AI model (OpenAI Whisper) provides fast, accurate local transcription.
  • Multilingual support: Support for many languages and continuous improvements via the open-source community.

FUTO Keyboard

On the keyboard front, FUTO Keyboard impresses by delivering modern convenience without compromising privacy. Unlike conventional keyboards that constantly transmit user data, FUTO requires neither network access nor cloud services – yet it offers features on par with the best.

You can swipe words with your finger across the screen, get relevant text suggestions and automatic spell correction, and customize the theme to your liking – all while the app consistently refuses to send a single keystroke to any external server (all data stays with you). FUTO Keyboard also integrates FUTO Voice Input through a built-in microphone button, allowing ‘speech to text’ to be activated from the same interface.

FUTO Voice Input

For voice input, there is FUTO Voice Input, which lets you dictate text directly in apps like messages or notes – completely without an internet connection. All processing happens locally using a compact speech-recognition model, meaning no audio ever has to leave the device to become text. According to users who have compared it with Google’s cloud-based solution, FUTO keeps pace and can even surpass it in both speed and grammatical accuracy.
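As a rough illustration of what fully local transcription means, here is a minimal sketch using the open-source openai-whisper Python package, the same model family the FUTO team builds on. This is not FUTO’s code, and memo.wav is a placeholder for any local recording; the point is simply that nothing has to leave the machine.

```python
# Minimal sketch: on-device speech-to-text with the open-source Whisper
# model (pip install openai-whisper). The model weights are downloaded
# once; after that, transcription runs entirely locally and no audio is
# sent to any server.
import whisper

model = whisper.load_model("tiny")      # smallest, fastest model variant
result = model.transcribe("memo.wav")   # "memo.wav": hypothetical local file
print(result["text"])                   # transcript produced fully offline
```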

An enthusiastic tester reported that FUTO provided a completely new experience – none of the delays or strange autocorrections he had previously suffered with Gboard. You can speak freely and see the text appear almost immediately, without worrying about unauthorized “listening” on the other end.

Ongoing development and alternatives

Although FUTO Keyboard is young, it’s already surprisingly capable. The interface feels polished and user-friendly, and its feature set makes it nearly comparable to established alternatives. Currently, text input works excellently in English, while support for smaller languages like Swedish is still being refined. The pace of development is high, however, and the team behind FUTO has announced improvements to autocorrection and expanded language support in upcoming updates. Global collaboration is also encouraged: since the source code is open, engaged developers and users can contribute improvements and new language data to the project.

Among free alternatives there’s Sayboard, an open-source keyboard that uses Vosk for speech recognition. For pure keyboards there are AnySoftKeyboard and FlorisBoard, which are excellent from a privacy perspective but lack some of the advanced features FUTO offers in one package (especially built-in voice input).

An essential part of the Matrix Phone ecosystem

FUTO Voice Input and Keyboard demonstrate that you can combine the best of both worlds: the convenience of smart text and voice functions with the security of keeping your data private. For users of Teuton Systems’ Matrix Phone (a GrapheneOS phone), these apps come pre-installed as part of its privacy-focused ecosystem. But they’re available to everyone – via Google Play or F-Droid – and are a highly recommended switch for anyone who values their privacy in everyday life.

As a tech writer recently put it: you no longer need to choose between functionality and security – with FUTO you get both without compromises.

Swedish regional healthcare app run by chatbot makes serious errors

Published 30 June 2025
– By Editorial Staff
In one documented case, the app classified an elderly man's symptoms as mild – he died the following day.
2 minute read

An AI-based healthcare app used by the Gävleborg Regional Healthcare Authority in Sweden is now under scrutiny following serious assessment errors. In one notable case, an elderly man’s condition was classified as mild – he died the following day.

Healthcare staff are raising alarms about deficiencies deemed to threaten patient safety, and the app is internally described as a “disaster”.

Min vård Gävleborg (My Healthcare Gävleborg) is used when residents seek digital healthcare or call 1177 (Sweden’s national healthcare advice line). A chatbot asks questions to make an initial medical assessment and then refers the patient to an appropriate level of care. However, according to several doctors in the region, the system is not functioning safely enough.

In one documented case, the app classified an elderly man’s symptoms as mild. He died the following day. An incident report shows that the prioritization was incorrect, although it couldn’t be established that this directly caused the death.

In another case, an inmate at the Gävle Correctional Facility sought care for breathing difficulties – but was referred to a chat with a doctor in Ljusdal, instead of being sent to the emergency room.

– She should obviously have been sent to the emergency room, says Elisabeth Månsson Rydén, a doctor in Ljusdal and board member of the Swedish Association of General Medicine in Gävleborg, speaking to the tax-funded SVT.

“Completely insane”

Criticism from healthcare staff is extensive. Several doctors warn that the app underestimates serious symptoms, which could have life-threatening consequences. Meanwhile, there are examples of the opposite – where patients are given too high priority – which risks unnecessarily burdening healthcare services and causing delays for severely ill patients.

– Doctors have expressed in our meetings that Min vård Gävleborg is a disaster. This is completely insane, says Månsson Rydén.

Despite the death incident, Region Gävleborg has chosen not to report the event to either the Health and Social Care Inspectorate (IVO) or the Swedish Medical Products Agency.

– We looked at the case and decided it didn’t need to be reported, says Chief Medical Officer Agneta Larsson.

Other regions have reacted

The app was developed by Platform24, a Swedish company whose digital systems are used in several regions. In Västra Götaland Region, the app was paused after a report showed that three out of ten patients were assessed incorrectly. In Region Östergötland, similar deficiencies have led to a report to the Swedish Medical Products Agency. An investigation is ongoing.

Despite this, Agneta Larsson defends the version used in Gävleborg:

– We have reviewed our own system, and we cannot see these errors.

Platform24 has declined to be interviewed, but in a written response to Swedish Television, the company’s Medical Director Stina Perdahl defends the app’s basic principles.

“For patient safety reasons, the assessment is deliberately designed to be a bit more cautious initially”, she writes.

Your TV is spying on you

Your TV is taking snapshots of everything you watch.

Published 28 June 2025
– By Naomi Brockwell
6 minute read

You sit down to relax, put on your favorite show, and settle in for a night of binge-watching. But while you’re watching your TV… your TV is watching you.

Smart TVs take constant snapshots of everything you watch. Sometimes as many as 100 snapshots a second.

Welcome to the future of “entertainment”.

What’s actually happening behind the screens?

Smart TVs are just modern TVs. It’s almost impossible to buy a non-smart TV anymore. And they’re basically just oversized internet-connected computers. They come preloaded with apps like Amazon Prime Video, YouTube, and Hulu.

They also come preloaded with surveillance.

A recent study from UC Davis researchers tested TVs from Samsung and LG, two of the biggest players in the market, and documented something known as ACR: Automatic Content Recognition.

What is ACR and why should you care?

ACR is a surveillance technology built into the operating systems of smart TVs. This system takes continuous snapshots of whatever is playing to identify exactly what is on the screen.

LG’s privacy policy states they take a snapshot every 10 milliseconds. That’s 100 per second.
Samsung does it every 500 milliseconds.

From these snapshots, the TV generates a content fingerprint and sends it to the manufacturer. That fingerprint is then matched against a massive database to figure out exactly what you’re watching.
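To make “fingerprinting” concrete, here is a minimal, hypothetical Python sketch of the general technique. It is not LG’s or Samsung’s proprietary algorithm: it uses a simple perceptual “average hash” as a stand-in for a real ACR fingerprint, and the reference database entry is invented.

```python
# Toy version of fingerprint-based content matching. Real ACR systems use
# proprietary audio/video fingerprints and databases of millions of titles.
from PIL import Image  # pip install Pillow

def average_hash(path: str, size: int = 8) -> int:
    """Reduce a frame to a 64-bit fingerprint: shrink it, grayscale it,
    then record whether each pixel is above the mean brightness."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming(a: int, b: int) -> int:
    """Count differing bits; a small distance means likely the same content."""
    return bin(a ^ b).count("1")

# Hypothetical reference entry; a real database holds millions of these.
reference_db = {"known_show_s01e01": 0x8F3C00FF13A077E1}

fingerprint = average_hash("frame.png")  # "frame.png": a captured screen frame
for title, ref in reference_db.items():
    if hamming(fingerprint, ref) < 10:   # threshold chosen for illustration
        print("Screen content matches:", title)
```

Run against every frame, a loop like this is cheap enough to execute many times a second, which is why a database lookup can name the exact show on your screen.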

Let that sink in. Your television is taking snapshots of everything you’re watching.

And it doesn’t just apply to shows you’re watching on the TV. Even if you plug in your laptop and use the TV as a dumb monitor, it’s still taking snapshots.

  • Zoom calls
  • Emails
  • Banking apps
  • Personal photos

Audio snapshots, video snapshots, or sometimes both are collected of all of it.

As ACR currently works, the snapshots themselves are not necessarily sent off-device, but your TV is still collecting them. And AI is only getting better: it’s now possible to identify nearly everything in a video or photo – faces, emotions, background details.

As the technology continues to improve, we should presume that TVs will move from fingerprint-based ACR to automatic AI-driven content recognition.

As Toby Lewis from Darktrace told The Guardian:

“Facial recognition, speech-to-text, content analysis—these can all be used together to build an in-depth picture of an individual user”.

This is where we’re headed.

This data doesn’t exist in a vacuum

TV manufacturers don’t just sit on this data. They monetize it.

Viewing habits are combined with data from your other devices: phones, tablets, smart fridges, wearables. Then it’s sold to third parties. Advertisers. Data brokers. Political campaigns.

One study found that almost every TV tested contacted Netflix servers, even when no Netflix account was configured.

So who’s getting your data?

We don’t know. That’s the point.

How your data gets weaponized

Let’s say your TV learns that:

  • You watch sports every Sunday
  • You binge true crime on weekdays
  • You play YouTube fashion hauls in the afternoons

These habits are then tied to a profile of your IP address, email, and household.

Now imagine that profile combined with:

  • Your Amazon purchase history
  • Your travel patterns
  • Your social media behavior
  • Your voting record

That’s the real goal: total psychological profiling. Knowing not just what you do, but what you’re likely to do. What you’ll buy, how you’ll vote, who you’ll trust.

In other words, your smart TV isn’t just spying.

It’s helping others manipulate you.

Why didn’t I hear about this when I set up my TV?

Because they don’t want you to know.

When TV manufacturers first started doing this, they never informed users. The practice slipped quietly by.

A 2017 FTC lawsuit revealed that Vizio was collecting viewing data from 11 million TVs and selling it without ever getting user consent.

These days, companies technically include “disclosures” in their Terms of Service. But they’re buried under vague names like:

  • “Viewing Information Services”
  • “Live Plus”
  • “Personalized Experiences”

Have you ever actually read those menus? Didn’t think so.

These aren’t written to inform you. They’re written to shield corporations from lawsuits.

If users actually understood what was happening, many would opt out entirely. But the system is designed to confuse, hiding the truth that surveillance devices have entered our living rooms and bedrooms without our realizing it.

Researchers are being silenced

Not only are these systems intentionally opaque and confusing; companies also design them to discourage scrutiny.

And when researchers try to investigate these systems, they hit two major roadblocks:

  1. Technical – Jailbreaking modern Smart TVs is nearly impossible. Their systems are locked down, and the code is proprietary.
  2. Legal – Researchers who attempt to reverse-engineer modern-day tech risk being sued under the Computer Fraud and Abuse Act (CFAA), a vague and outdated law that doesn’t distinguish between malicious actors and researchers trying to inform the public.

As a result, most of what we know about these TVs comes from inference: guessing what’s happening by watching network traffic, since direct access is often blocked.
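As a small, hypothetical example of that kind of inference, the sketch below logs the DNS lookups a smart TV makes, using the scapy packet library. The TV’s address is a placeholder, and the script assumes root privileges on a machine positioned to see the TV’s traffic (for example, the router or a mirrored switch port).

```python
# Illustrative sketch: passively observe which domains a smart TV looks up.
# Researchers rely on this kind of traffic analysis because the TV itself
# is locked down. Requires scapy (pip install scapy) and root privileges.
from scapy.all import DNSQR, IP, sniff

TV_IP = "192.168.1.50"  # hypothetical LAN address of the TV

def log_query(pkt):
    # Print every DNS question the TV asks (ad, analytics, telemetry hosts)
    if pkt.haslayer(DNSQR) and pkt[IP].src == TV_IP:
        print(pkt[DNSQR].qname.decode())

sniff(filter=f"udp port 53 and src host {TV_IP}", prn=log_query, store=False)
```

Even without decrypting anything, the pattern of domains contacted (ad servers, analytics endpoints, Netflix) is often enough to infer what the TV is reporting home, which is exactly how findings like the Netflix-contact study above were made.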

That means most of this surveillance happens in the dark. Unchallenged, unverified, and largely unnoticed.

We need stronger protections for privacy researchers, clearer disclosures for users, and real pressure on companies to stop hiding behind complexity.

Because if we can’t see what the tech is doing, we can’t choose to opt out.

What you can do

Here are the most effective steps you can take to protect your privacy:

1. Don’t connect your TV to the internet.
If you keep the Wi-Fi off, the TV can’t send data to manufacturers or advertisers. Use a laptop or trusted device for streaming instead. If the TV stays offline forever, the data it collects never leaves the device.

2. Turn off ACR settings.
Dig through the menus and disable everything related to viewing info, advertising, and personalization. Look for settings like “Live Plus” or “Viewing Information Services.” Be thorough. These options are often buried.

3. Use dumb displays.
It’s almost impossible to buy a non-smart TV today. The market is flooded with “smart” everything. But a few dumb projectors still exist, and some monitors are safer too, though they don’t yet come in TV sizes.

4. Be vocal.
Ask hard questions when buying devices. Demand that manufacturers disclose how they use your data. Let them know that privacy matters to you.

5. Push for CFAA reform.
The CFAA is being weaponized to silence researchers who try to expose surveillance. If we want to understand how our tech works, researchers must be protected, not punished. We need to fight back against these chilling effects and support organizations doing this work.

The Ludlow Institute is now funding researchers who reverse-engineer surveillance tech. If you’re a researcher, or want to support one, get in touch.

This is just one piece of the puzzle

Smart TVs are among the most aggressive tracking devices in your home. But they’re not alone. Nearly every “smart” device has the same capabilities to build a profile on you: phones, thermostats, lightbulbs, doorbells, fridges.

This surveillance has been normalized. But it’s not normal.

We shouldn’t have let faceless corporations and governments into our bedrooms and living rooms. But now that they’re here, we have to push back.

That starts with awareness. Then it’s up to us to make better choices and help others do the same.

Let’s take back our homes.
Let’s stop normalizing surveillance.

Because privacy isn’t extreme.
It’s common sense.

 

Yours in Privacy,
Naomi

Naomi Brockwell is a privacy advocate and professional speaker, MC, interviewer, producer, and podcaster, specialising in blockchain, cryptocurrency, and economics. She runs the NBTV channel on YouTube.

Our independent journalism needs your support!

Consider a donation. You can donate any amount of your choosing, one-time or monthly. We appreciate all of your donations to keep us alive and running.

Don’t miss another article!

Sign up for our newsletter today!

Get uncensored news – free from industry interests and political correctness – from the Polaris of Enlightenment, every week.