
Opt-in childhood

What we signed them up for before they could object.

Published 7 June 2025
– By Naomi Brockwell
6 minute read

A few weeks ago, we published an article about oversharing on social media, and how posting photos, milestones, and personal details can quietly build a digital footprint for your child that follows them for life.

But social media isn’t the only culprit.

Today, I want to talk about the devices we give our kids: the toys that talk, the tablets that teach, the monitors that watch while they sleep.

These aren’t just tools of convenience or connection. Often, they’re Trojan horses, collecting and transmitting data in ways most parents never realize.

We think we’re protecting our kids.
But in many cases, we’re signing them up for surveillance systems they can’t understand, and wouldn’t consent to if they could.

How much do you know about the toys your child is playing with?

What data are they collecting?
With whom are they sharing it?
How safely are they storing it to protect against hackers?

Take VTech, for example — a hugely popular toy company, marketed as safe, educational, and kid-friendly.

In 2015, VTech was hacked. The breach wasn’t small:

  • 6.3 million children’s profiles were exposed, along with nearly 5 million parent accounts
  • The stolen data included birthdays, home addresses, chat logs, voice recordings… even photos children had taken on their tablets

Terms no child can understand—but every parent accepts

It’s not just hackers we should be mindful of — often, these companies are allowed to do almost anything they want with the data they collect, including selling it to third parties.

When you hand your child a toy that connects to Wi-Fi or Bluetooth, you might be agreeing to terms that say:

  • Their speech can be used for targeted advertising
  • Their conversations may be retained indefinitely
  • The company can change the terms at any time, without notice

And most parents will never know.

“Safe” Devices With Open Doors

What about things like baby monitors and nanny cams?

Years ago, we did a deep dive into home cameras, and almost all popular models were built without end-to-end encryption. That means the companies that make them can access your video feed.
How much do you know about that company?
How well do you trust every employee who might be able to access that feed?

But it’s not just insiders you should worry about.
Many of these kiddy cams are notoriously easy to hack. The internet is full of real-world examples of strangers breaking into monitors, watching, and even speaking to infants.

There are even publicly available tools that scan the internet and map thousands of unsecured camera feeds, sortable by country, type, and brand.
If your monitor isn’t properly secured, it’s not just vulnerable — it’s visible.

Mozilla, through its Privacy Not Included campaign, audited dozens of smart home devices and baby monitors. They assessed whether products had basic security features like encryption, secure logins, and clear data-use policies. The verdict? Even many top-selling monitors had zero safeguards in place.

These are the products we’re told are protecting our kids.

Apps that glitch, and let you track other people’s kids

A T-Mobile child-tracking app recently glitched.
A mother refreshed the screen—expecting to see her kids’ location.
Instead, she saw a stranger’s child. Then another. Then another.

Each refresh revealed a new kid in real time.

The app was broken, but the consequences weren’t abstract.
That’s dozens of children’s locations broadcast to the wrong person.
The feature that was supposed to provide control did the opposite.

Schools are part of the problem, too

Your child’s school likely collects and stores sensitive data—without strong protections or meaningful consent.

  • In Virginia, thousands of student records were accidentally made public
  • In Seattle, a mental health survey led to deeply personal data being stored in unsecured systems

And it’s not just accidents.

A 2015 study investigated “K–12 data broker” marketplaces that trade in everything from ethnicity and affluence to personality traits and reproductive health status.
Some companies offer data on children as young as two.
Others admit they’ve sold lists of 14- and 15-year-old girls for “family planning services.”

Surveillance disguised as protection

Let’s be clear: the internet is a minefield, filled with ways children can be tracked, profiled, or preyed upon. Protecting them is more important than ever.

One category of tools that’s exploded in popularity is the parental control app—software that lets you see everything happening on your child’s device:
The messages they send. The photos they take. The websites they visit.

The intention might be good. But the execution is often disastrous.

Most of these apps are not end-to-end encrypted, meaning:

  • Faceless companies gain full access to your child’s messages, photos, and GPS
  • They operate in stealth mode, functionally indistinguishable from spyware
  • And they rarely protect that data with strong security

Again, how much do you know about these companies?
And even if you trust them, how well are they protecting this data from everyone else?

The “KidSecurity” app left 300 million records exposed, including real-time child locations and fragments of parent credit cards.
The “mSpy” app leaked private messages and movement histories in multiple breaches.

When you install one of these apps, it’s not just you who gains access to your child’s world.
So does the company that built it… and everyone they fail to protect it from.

What these breaches really teach us

Here’s the takeaway from all these hacks and security failures:

Tech fails.

We don’t expect it to be perfect.
But when the stakes are this high — when we’re talking about the private lives of our children — we should be mindful of a few things:

1) Maybe companies shouldn’t be collecting so much information if they can’t properly protect it.
2) Maybe we shouldn’t be so quick to hand that information over in the first place.

When the data involves our kids, the margin for error disappears.

Your old phone might still be spying

Finally, let’s talk about hand-me-downs.

When kids get their first phone, it’s often a hand-me-down filled with years of accumulated tracking, sharing, and background data collection. What you’re really passing on may be a lifetime of surveillance baked into the settings.

  • App permissions often remain intact
  • Advertising IDs stay tied to previous behavior
  • Pre-installed tracking software may still be active

The moment it connects to Wi-Fi, that “starter phone” might begin broadcasting location data and device identifiers — linked to both your past and your child’s present.
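
If the hand-me-down happens to be an Android phone, one practical way to see what it is still doing is to plug it into a computer and query it with adb, Android’s debug bridge. The sketch below is just that, a sketch: it assumes adb is installed, USB debugging is enabled, and a single device is connected, and the dumpsys output it parses varies between Android versions, so treat it as a starting point rather than a finished tool.

```python
# handmedown_audit.py -- a rough sketch for auditing a hand-me-down Android
# phone over adb before giving it to a child. Assumes adb is installed,
# USB debugging is enabled, and exactly one device is connected.
# The text format of `dumpsys package` varies between Android versions,
# so treat the parsing below as a starting point, not a finished tool.
import subprocess

SENSITIVE = (
    "android.permission.CAMERA",
    "android.permission.RECORD_AUDIO",
    "android.permission.ACCESS_FINE_LOCATION",
    "android.permission.ACCESS_COARSE_LOCATION",
    "android.permission.READ_SMS",
)

def adb(*args: str) -> str:
    """Run an adb command and return its stdout as text."""
    result = subprocess.run(["adb", *args], capture_output=True, text=True, check=True)
    return result.stdout

def third_party_packages() -> list[str]:
    """List apps installed on top of the stock image (pm's -3 flag = third party)."""
    out = adb("shell", "pm", "list", "packages", "-3")
    return [line.removeprefix("package:").strip() for line in out.splitlines() if line.strip()]

def granted_sensitive(pkg: str) -> list[str]:
    """Return the sensitive permissions that dumpsys reports as granted for pkg."""
    dump = adb("shell", "dumpsys", "package", pkg)
    found = set()
    for line in dump.splitlines():
        for perm in SENSITIVE:
            if perm in line and "granted=true" in line:
                found.add(perm)
    return sorted(found)

if __name__ == "__main__":
    for pkg in third_party_packages():
        perms = granted_sensitive(pkg)
        if perms:
            print(pkg)
            for perm in perms:
                print(f"  {perm}")
    # To revoke a runtime permission by hand afterwards:
    #   adb shell pm revoke <package> <permission>
```

A factory reset before handing the phone over is still the simplest baseline; a script like this is just a way to double-check what survives.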

Don’t opt them in by default: 8 ways to push back

So how do we protect children in the digital age?

You don’t need to abandon technology. But you do need to understand what it’s doing, and make conscious choices about how much of your child’s life you expose.

Here are 8 tips:

1: Stop oversharing
Data brokers don’t wait for your kid to grow up. They’re already building the file.
Reconsider publicly posting their photos, location, and milestones. You’re building a permanent, searchable, biometric record of your child—without their consent.
If you want to share with friends or family, do it privately through tools like Signal stories or Ente photo sharing.

2: Avoid spyware
Sometimes the best way to protect your child is to foster a relationship of trust, and educate them about the dangers.
If monitoring is essential, use self-hosted tools. Don’t give third parties backend access to your child’s life.

3: Teach consent
Make digital consent a part of your parenting. Help your child understand their identity—and that it belongs to them.

4: Use aliases and VoIP numbers
Don’t link their real identity across platforms. Compartmentalization is protection.

5: Audit tech
Reset hand-me-down devices. Remove unnecessary apps. Disable default permissions.

6: Limit permissions
If an app asks for mic or camera access and doesn’t need it—deny it. Always audit.

7: Set boundaries with family
Ask relatives not to post about your child. You’re not overreacting—you’re defending someone who can’t yet opt in or out.

8: Ask hard questions
Ask your school how data is collected, stored, and shared. Push back on invasive platforms. Speak up when things don’t feel right.

Let Them Write Their Own Story

We’re not saying throw out your devices.
We’re saying understand what they really do.

This isn’t about fear. It’s about safety. It’s about giving your child the freedom to grow up and explore ideas without every version of themselves being permanently archived, and without being boxed in by a digital record they never chose to create.

Our job is to protect that freedom.
To give them the chance to write their own story.

Privacy is protection.
It’s autonomy.
It’s dignity.

And in a world where data compounds, links, and lives forever, every choice you make today shapes the freedom your child has tomorrow.

 

Yours in privacy,
Naomi

Naomi Brockwell is a privacy advocate and professional speaker, MC, interviewer, producer, and podcaster specialising in blockchain, cryptocurrency and economics. She runs the NBTV channel on YouTube.


AI surveillance in Swedish workplaces sparks outrage

Mass surveillance

Published 4 June 2025
– By Editorial Staff
In practice, it is possible to analyze not only employees' productivity - but also their facial expressions, voices and emotions.
2 minute read

The rapid development of artificial intelligence has not only brought advantages – it has also created new opportunities for mass surveillance, both in society at large and in the workplace.

Even today, unscrupulous employers use AI to monitor and map every second of their employees’ working day in real time – a development that former Social Democratic politician Kari Parman warns against, calling for decisive action to stop it.

In an opinion piece in the Stampen-owned newspaper GP, he argues that AI-based surveillance of employees poses a threat to staff privacy and calls on the trade union movement to take action against this development.

Parman paints a bleak picture of how AI is used to monitor employees in Swedish workplaces, where technology analyzes everything from voices and facial expressions to productivity and movement patterns – often without the employees’ knowledge or consent.

“It’s a totalitarian control system – in capitalist packaging”, he writes, continuing:

“There is something deeply disturbing about the idea that algorithms will analyze our voices, our facial expressions, our productivity – second by second – while we work”.

“It’s about power and control”

According to Parman, there is a significant risk that people in digital capitalism will be reduced to mere data points, giving employers disproportionate power over their employees.

He sees AI surveillance as more than just a technical issue and warns that this development undermines the Swedish model, which is based on balance and respect between employers and employees.

“It’s about power. About control. About squeezing every last ounce of ‘efficiency’ out of people as if we were batteries”.

If trade unions fail to act, Parman believes, they risk becoming irrelevant in a working life where algorithms are taking over more and more of the decision-making.

To stop this trend, he lists several concrete demands. He wants to see a ban on AI-based individual surveillance in the workplace and urges unions to write provisions into collective agreements giving them the right to review and approve new technology.

Kari Parman previously represented the Social Democrats in Gnosjö. Photo: Kari Parman/FB

“Reduced to an algorithm’s margin of error”

He also calls for training for safety representatives and members, as well as political regulations from the state.

“No algorithm should have the right to analyze our performance, movements, or feelings”, he declares.

Parman emphasizes that AI surveillance not only threatens privacy but also creates a “psychological iron cage” where employees constantly feel watched, blurring the line between work and private life.

At the end of the article, the Social Democrat calls on the trade union movement to take responsibility and lead the resistance against the misuse of AI in the workplace.

He sees it as a crucial issue for the future of working life and human dignity at work.

“If we don’t stand up now, we will be alone when it is our turn to be reduced to an algorithm’s margin of error”, he concludes.

AI agents succumb to peer pressure

Published 2 June 2025
– By Editorial Staff
Even marginal variations in training data can cause significant differences in how language models behave in group interactions.
3 minute read

A new study shows that social AI agents, despite being programmed to act independently, quickly begin to mimic each other and succumb to peer pressure.

Instead of making their own decisions, they begin to uncritically adapt their responses to the herd, even without any central control or plan.

– Even if they are programmed for something completely different, they can start coordinating their behavior just by reacting to each other, says Andrea Baronchelli, professor of complex systems at St George’s University of London.

An AI agent is a system that can perform tasks autonomously, often using a language model such as ChatGPT. In the study, the researchers investigated how such agents behave in groups.

And the results are surprising: even without an overall plan or insight, the agents began to influence each other – and in the end, almost the entire group gave the same answer.

– It’s easy to test a language model and think: this works. But when you release it together with others, new behaviors emerge, Baronchelli explains.

“A small minority could tip the whole system”

The researchers also studied what happens when a minority of agents stick to a deviant answer. Slowly but surely, the other agents began to change their minds. When enough had changed their minds – a point known as critical mass – the new answer spread like a wave through the entire group. The phenomenon is similar to how social movements or revolutions can arise in human societies.

– It was unexpected that such a small minority could tip the whole system. This is not a planned collaboration but a pattern that emerges spontaneously, the researcher told Swedish public television SVT.
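
To see why a committed minority can flip an entire population, it helps to play with a toy model. The sketch below is not the study’s LLM experiment; it is a minimal naming-game-style simulation with made-up parameters, in which a fixed share of “committed” agents always gives the new answer while everyone else simply repeats whatever has dominated its recent interactions.

```python
# tipping_toy.py -- a toy model of the "critical mass" effect described above.
# This is NOT the study's LLM experiment: just a minimal naming-game-style
# simulation in which a committed minority always answers "B" while everyone
# else echoes whatever convention has dominated their recent interactions.
import random

def run(n_agents=100, committed_frac=0.10, steps=20_000, memory=5, seed=0):
    rng = random.Random(seed)
    committed = set(range(int(n_agents * committed_frac)))
    # Everyone starts out remembering only the old convention "A".
    memories = [["A"] * memory for _ in range(n_agents)]

    def utter(i):
        if i in committed:
            return "B"                       # the committed never budge
        mem = memories[i]
        return max(set(mem), key=mem.count)  # say whatever dominates recent memory

    for _ in range(steps):
        speaker, listener = rng.sample(range(n_agents), 2)
        word = utter(speaker)
        if listener not in committed:
            memories[listener] = (memories[listener] + [word])[-memory:]

    others = [i for i in range(n_agents) if i not in committed]
    return sum(utter(i) == "B" for i in others) / len(others)

if __name__ == "__main__":
    for frac in (0.02, 0.05, 0.10, 0.20):
        share = run(committed_frac=frac)
        print(f"committed minority {frac:4.0%} -> {share:6.1%} of the rest now say B")
```

The exact tipping point depends entirely on the toy parameters (group size, memory length, number of interactions); the point is that when the flip comes, it is abrupt rather than gradual, which is what the researchers mean by critical mass.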

AI agents are already used today on social media, for example in comment sections, automated replies, or texts that mimic human language. But when one agent is influenced by another, which in turn has been influenced by a third, a chain reaction occurs. This can lead to false information spreading quickly and on a large scale.

– We often trust repetition. But in these systems, we don’t know who said what first. It becomes like an echo between models, says Anders Sandberg, a computer scientist at the Institute for Future Studies.

Lack of transparency

Small differences in how a language model is trained can lead to large variations in behavior when the models interact in a group. Predicting and preventing unwanted effects requires an overview of all possible scenarios – something that is virtually impossible in practice. At the same time, it is difficult to hold anyone accountable: AI agents spread extremely quickly, their origins are often difficult to trace, and there is limited insight into how they are developed.

– It is the companies themselves that decide what they want to show. When the technology is closed and commercial, it becomes impossible to understand the effects – and even more difficult to defend against them, Sandberg notes.

The study also emphasizes the importance of understanding how AI agents behave as a collective – something that is often overlooked in technical and ethical discussions about AI.

– The collective aspect is often missing in today’s AI thinking. It’s time to take it seriously, urges Andrea Baronchelli.

Apple sued over iPhone eavesdropping – users may get payouts

Published 1 June 2025
– By Editorial Staff
Apple has denied any wrongdoing - but finally agreed to pay $95 million in a settlement.
3 minute read

Apple’s voice assistant Siri was activated without commands and recorded sensitive conversations – recordings that were also allegedly shared with other companies.

Now users in the US can get compensation – even if it’s relatively small amounts.

Technology giant Apple came under fire after it was discovered that its voice assistant, Siri, recorded private conversations without users’ knowledge. The company has agreed to pay $95 million in a settlement reached in December last year, following a class action lawsuit alleging privacy violations.

The lawsuit was filed in 2021 by California resident Fumiko Lopez along with other Apple users. They stated that Siri-enabled devices recorded conversations without users first intentionally activating the voice assistant by saying “Hey Siri” or pressing the side button.

According to the allegations, the recordings were not only used to improve Siri, but were also shared with third-party contractors and other actors – without users’ consent. It is also alleged that the information was used for targeted advertising, in violation of both US privacy laws and Apple’s own privacy policy.

However, Apple has consistently denied the allegations, claiming that its actions were neither “wrong nor illegal”. Still, paying such a large sum to avoid further litigation has raised questions about what may have been hidden from the public.

Users can claim compensation

Individuals who owned a Siri-enabled Apple product – such as an iPhone, iPad, Apple Watch, MacBook, iMac, HomePod, iPod touch or Apple TV – between September 17, 2014 and December 31, 2024, and who live in the United States or a U.S. territory, may now be entitled to compensation.

However, to qualify, one must certify that Siri was inadvertently activated during a conversation that was intended to be private or confidential.

The reimbursement applies to up to five devices, with a cap of $20 per device – totaling up to $100 per person. The exact amount per user will be determined once all claims have been processed.

Applications must be submitted by July 2, 2025, and those eligible may have already received an email or physical letter with an identification code and confirmation code. Those who haven’t received anything but still think they qualify can instead apply for reimbursement via the settlement’s website, provided they can supply the model and serial numbers of their devices.

How to protect yourself from future interception

Users who want to strengthen their privacy can limit Siri’s access themselves in the settings:

  • Turn off Improve Siri: Go to Settings > Privacy & Security > Analytics & Improvements and disable Improve Siri & Dictation.
  • Delete Siri history: Go to Settings > Siri > Siri & Dictation History and select Delete Siri & Dictation History.
  • Turn off Siri completely: Go to Settings > Siri > Listen for “Hey Siri”, turn it off, then go to Settings > General > Keyboard and disable Enable Dictation.

Apple describes more privacy settings on its website, such as how to restrict Siri’s access to location data or third-party apps. But in the wake of the scandal, critics say that you shouldn’t blindly trust companies’ promises of data protection – and that the only way to truly protect your privacy is to take matters into your own hands.

KYC is the crime

The Coinbase hack shows how state-mandated surveillance is putting lives at risk.

Published 31 May 2025
– By Naomi Brockwell
4 minute read

Last week, Coinbase got hacked.

Hackers demanded a $20 million ransom after breaching a third-party system. They didn’t get passwords or crypto keys. But what they did get will put lives at risk:

  • Names
  • Home addresses
  • Phone numbers
  • Partial Social Security numbers
  • Identity documents
  • Bank info

That’s everything someone needs to impersonate you, blackmail you, or show up at your front door.

This isn’t hypothetical. There’s a growing wave of kidnappings and extortion targeting people with crypto exposure. Criminals are using leaked identity data to find victims and hold them hostage.

Let’s be clear: KYC doesn’t just put your data at risk. It puts people at risk.

Naturally, people are furious at any company that leaks their information.

But here’s the bigger issue:
No system is unhackable.
Every major institution, from the IRS to the State Department, has suffered breaches.
Protecting sensitive data at scale is nearly impossible.

And Coinbase didn’t want to collect this data.
Many companies don’t. It’s a massive liability.
They’re forced to, by law.

A new, dangerous normal

KYC, Know Your Customer, has become just another box to check.

Open a bank account? Upload your ID.
Use a crypto exchange? Add your selfie and utility bill.
Sign up for a payment app? Same thing.

But it wasn’t always this way.

Until the 1970s, you could walk into a bank with cash and open an account. Your financial life was private by default.

That changed with the Bank Secrecy Act of 1970, which required banks to start collecting and reporting customer activity to the government. Still, KYC wasn’t yet formalized. Each bank decided how well they needed to know someone. If you’d been a customer since childhood, or had a family member vouch for you, that was often enough.

Then came the Patriot Act, which turned KYC into law. It required every financial institution to collect, verify, and store identity documents from every customer, not just for large or suspicious transactions, but for basic access to the financial system.

From that point on, privacy wasn’t the default. It was erased.

The real-world cost

Today, everyone is surveilled all the time.
We’ve built an identity dragnet, and people are being hurt because of it.

Criminals use leaked KYC data to find and target people, and it’s not just millionaires. It’s regular people, and sometimes their parents, partners, or even children.

It’s happened in London, Buenos Aires, Dubai, Lagos, Los Angeles, all over the world.
Some are robbed. Some are held for ransom.
Some don’t survive.

These aren’t edge cases. They’re the direct result of forcing companies to collect and store sensitive personal data.

When we force companies to hoard identity data, we guarantee it will eventually fall into the wrong hands.

“There are two types of companies: those that have been hacked, and those that don’t yet know they’ve been hacked” – John Chambers, former Cisco CEO

What KYC actually does

KYC turns every financial institution into a surveillance node.
It turns your personal information into a liability.

It doesn’t just increase risk — it creates it.

KYC is part of a global surveillance infrastructure. It feeds into databases governments share and query without your knowledge. It creates chokepoints where access to basic services depends on surrendering your privacy. And it deputizes companies to collect and hold sensitive data they never wanted.

If you’re trying to rob a vault, you go where the gold is.
If you’re trying to target people, you go where the data lives.

KYC creates those vaults: legally mandated, poorly secured, and irresistible to attackers.

Does it even work?

We’re told KYC is necessary to stop terrorism and money laundering.

But the top reasons banks file “suspicious activity reports” are banal, like someone withdrawing “too much” of their own money.

We’re told to accept this surveillance because it might stop a bad actor someday.

In practice, it does more to expose innocent people than to catch criminals.

KYC doesn’t prevent crime.
It creates the conditions for it.

A Better Path Exists

We don’t have to live like this.

Better tools already exist, tools that allow verification without surveillance:

  • Zero-Knowledge Proofs (ZKPs): Prove something (like your age or citizenship) without revealing documents (see the sketch after this list)
  • Decentralized Identity (DID): You control what gets shared, and with whom
  • Homomorphic Encryption: Allows platforms to verify encrypted data without ever seeing it
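
To make the first item less abstract, here is a toy version of one classic zero-knowledge building block, the interactive Schnorr protocol: the prover convinces a verifier that she knows the secret x behind a public value y = g^x mod p without ever revealing x. The group parameters below are deliberately tiny toy numbers, and a real system would use a standardized large group and usually a non-interactive variant, so read this as an illustration of the idea rather than usable cryptography.

```python
# schnorr_toy.py -- a toy interactive Schnorr proof of knowledge: the prover
# convinces the verifier that she knows the secret x behind a public value
# y = g^x mod p, without ever revealing x.
# The group below is deliberately tiny so the numbers stay readable.
# NOT SECURE -- real systems use standardized large groups and usually a
# non-interactive variant (Fiat-Shamir).
import secrets

p, q, g = 467, 233, 4   # toy safe prime p, subgroup order q = (p-1)//2, generator g

def keygen():
    """Prover's long-term secret x and public value y = g^x mod p."""
    x = secrets.randbelow(q - 1) + 1
    return x, pow(g, x, p)

def commit():
    """Round 1: prover picks a fresh nonce r and sends the commitment t = g^r mod p."""
    r = secrets.randbelow(q - 1) + 1
    return r, pow(g, r, p)

def respond(x, r, c):
    """Round 3: prover answers the verifier's random challenge c."""
    return (r + c * x) % q

def verify(y, t, c, s):
    """Verifier accepts iff g^s == t * y^c (mod p)."""
    return pow(g, s, p) == (t * pow(y, c, p)) % p

if __name__ == "__main__":
    x, y = keygen()              # only y is ever published
    r, t = commit()              # prover -> verifier: t
    c = secrets.randbelow(q)     # verifier -> prover: random challenge
    s = respond(x, r, c)         # prover -> verifier: s
    print("proof accepted:", verify(y, t, c, s))   # True, yet x never left the prover
```

Real “prove your age without handing over your passport” schemes layer more machinery on top, but the core trick is the same: the verifier learns that the claim is true, and nothing else.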

But maybe it’s time to question something deeper.
Why is centralized, government-mandated identity collection the foundation of participation in financial life?

This surveillance regime didn’t always exist. It was built.

And just because it’s now common doesn’t mean we should accept it.

We didn’t need it before. We don’t need it now.

It’s time to stop normalizing mass surveillance as a condition for basic financial access.

The system isn’t protecting us.
It’s putting us in danger.

It’s time to say what no one else will

KYC isn’t a necessary evil.
It’s the original sin of financial surveillance.

It’s not a flaw in the system.
It is the system.

And the system needs to go.

Takeaways

  • Check https://HaveIBeenPwned.com to see how much of your identity is already exposed
  • Say no to services that hoard sensitive data
  • Support better alternatives that treat privacy as a baseline, not an afterthought

Because safety doesn’t come from handing over more information.

It comes from building systems that never need it in the first place.
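
One small, widely deployed illustration of that principle is the HaveIBeenPwned password-range API mentioned in the takeaways above: it lets you ask whether a password appears in known breach dumps without ever sending the password, or even its full hash, to the service. A rough sketch, assuming the public api.pwnedpasswords.com range endpoint:

```python
# pwned_check.py -- a sketch of a k-anonymity lookup against the public
# HaveIBeenPwned "Pwned Passwords" range API: only the first five characters
# of the password's SHA-1 hash leave your machine, yet you learn whether the
# full password shows up in known breach corpora.
import hashlib
import urllib.request

def times_pwned(password: str) -> int:
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = digest[:5], digest[5:]
    url = f"https://api.pwnedpasswords.com/range/{prefix}"
    with urllib.request.urlopen(url) as resp:
        body = resp.read().decode("utf-8")
    # The API returns one "HASHSUFFIX:COUNT" line for every breached hash
    # that shares the five-character prefix; the comparison happens locally.
    for line in body.splitlines():
        candidate, _, count = line.partition(":")
        if candidate.strip() == suffix:
            return int(count)
    return 0

if __name__ == "__main__":
    print(times_pwned("password123"))   # prints a depressingly large number
```

Only the first five characters of the SHA-1 hash leave your machine; the service returns every matching suffix and the check is finished on your side. It is a modest example of a system designed so the provider never has to hold the sensitive thing in the first place.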

 

Yours in privacy,
Naomi

Naomi Brockwell is a privacy advocate and professional speaker, MC, interviewer, producer, and podcaster specialising in blockchain, cryptocurrency and economics. She runs the NBTV channel on YouTube.

