Opt-in childhood

What we signed them up for before they could object.

Published 7 June 2025
– By Naomi Brockwell
6 minute read

A few weeks ago, we published an article about oversharing on social media, and how posting photos, milestones, and personal details can quietly build a digital footprint for your child that follows them for life.

But social media isn’t the only culprit.

Today, I want to talk about the devices we give our kids: the toys that talk, the tablets that teach, the monitors that watch while they sleep.

These aren’t just tools of convenience or connection. Often, they’re Trojan horses, collecting and transmitting data in ways most parents never realize.

We think we’re protecting our kids.
But in many cases, we’re signing them up for surveillance systems they can’t understand, and wouldn’t consent to if they could.

How much do you know about the toys your child is playing with?

What data are they collecting?
With whom are they sharing it?
How safely are they storing it to protect against hackers?

Take VTech, for example — a hugely popular toy company, marketed as safe, educational, and kid-friendly.

In 2015, VTech was hacked. The breach wasn’t small:

  • 6.3 million children’s profiles were exposed, along with nearly 5 million parent accounts
  • The stolen data included birthdays, home addresses, chat logs, voice recordings… even photos children had taken on their tablets

Terms no child can understand—but every parent accepts

It’s not just hackers we should be mindful of — often, these companies are allowed to do almost anything they want with the data they collect, including selling it to third parties.

When you hand your child a toy that connects to Wi-Fi or Bluetooth, you might be agreeing to terms that say:

  • Their speech can be used for targeted advertising
  • Their conversations may be retained indefinitely
  • The company can change the terms at any time, without notice

And most parents will never know.

“Safe” devices with open doors

What about things like baby monitors and nanny cams?

Years ago, we did a deep dive into home cameras, and almost all popular models were built without end-to-end encryption. That means the companies that make them can access your video feed.
How much do you know about that company?
How well do you trust every employee who might be able to access that feed?
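To make "end-to-end" concrete, here is a minimal Python sketch of the idea (an illustration under stated assumptions, not any vendor's actual design: it uses the `cryptography` package, and the frame bytes are a stand-in for real video). The key lives only on the parent's device and the camera, so whoever relays or stores the feed sees nothing but ciphertext:

```python
# Minimal sketch of an end-to-end encrypted camera feed.
# Assumptions: the `cryptography` package is installed (pip install cryptography),
# and `frame` is a placeholder for a real video frame.
from cryptography.fernet import Fernet

# Generated on the parent's device and shared with the camera once.
# The relay server never sees this key.
key = Fernet.generate_key()
camera_side = Fernet(key)
parent_side = Fernet(key)

frame = b"placeholder bytes standing in for one video frame"

# The camera encrypts before anything leaves the device.
ciphertext = camera_side.encrypt(frame)

# A cloud relay only stores and forwards `ciphertext`; it cannot read it.
# Only the parent's device, holding the key, can recover the frame.
assert parent_side.decrypt(ciphertext) == frame
```

Without that design, whoever operates the servers in the middle, and whoever breaks into them, can watch the feed.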

But it’s not just insiders you should worry about.
Many of these kiddy cams are notoriously easy to hack. The internet is full of real-world examples of strangers breaking into monitors, watching, and even speaking to infants.

There are even publicly available tools that scan the internet and map thousands of unsecured camera feeds, sortable by country, type, and brand.
If your monitor isn’t properly secured, it’s not just vulnerable — it’s visible.

Mozilla, through its Privacy Not Included campaign, audited dozens of smart home devices and baby monitors. They assessed whether products had basic security features like encryption, secure logins, and clear data-use policies. The verdict? Even many top-selling monitors had zero safeguards in place.

These are the products we’re told are protecting our kids.

Apps that glitch, and let you track other people’s kids

A T-Mobile child-tracking app recently glitched.
A mother refreshed the screen—expecting to see her kids’ location.
Instead, she saw a stranger’s child. Then another. Then another.

Each refresh revealed a new kid in real time.

The app was broken, but the consequences weren’t abstract.
That’s dozens of children’s locations broadcast to the wrong person.
The feature that was supposed to provide control did the opposite.

Schools are part of the problem, too

Your child’s school likely collects and stores sensitive data—without strong protections or meaningful consent.

  • In Virginia, thousands of student records were accidentally made public
  • In Seattle, a mental health survey led to deeply personal data being stored in unsecured systems

And it’s not just accidents.

A 2015 study investigated “K–12 data broker” marketplaces that trade in everything from ethnicity and affluence to personality traits and reproductive health status.
Some companies offer data on children as young as two.
Others admit they’ve sold lists of 14- and 15-year-old girls for “family planning services.”

Surveillance disguised as protection

Let’s be clear: the internet is a minefield, filled with ways children can be tracked, profiled, or preyed upon. Protecting them is more important than ever.

One category of tools that’s exploded in popularity is the parental control app—software that lets you see everything happening on your child’s device:
The messages they send. The photos they take. The websites they visit.

The intention might be good. But the execution is often disastrous.

Most of these apps are not end-to-end encrypted, meaning:

  • Faceless companies gain full access to your child’s messages, photos, and GPS
  • They operate in stealth mode, functionally indistinguishable from spyware
  • And they rarely protect that data with strong security

Again, how much do you know about these companies?
And even if you trust them, how well are they protecting this data from everyone else?

The “KidSecurity” app left 300 million records exposed, including real-time child locations and fragments of parents’ credit card data.
The “mSpy” app leaked private messages and movement histories in multiple breaches.

When you install one of these apps, you’re not just gaining access to your child’s world.
So is the company that built it… and everyone they fail to protect it from.

What these breaches really teach us

Here’s the takeaway from all these hacks and security failures:

Tech fails.

We don’t expect it to be perfect.
But when the stakes are this high — when we’re talking about the private lives of our children — we should be mindful of a few things:

1) Maybe companies shouldn’t be collecting so much information if they can’t properly protect it.
2) Maybe we shouldn’t be so quick to hand that information over in the first place.

When the data involves our kids, the margin for error disappears.

Your old phone might still be spying

Finally, let’s talk about hand-me-downs.

When kids get their first phone, it’s often a hand-me-down carrying years of accumulated tracking, sharing, and background data collection. What you’re really passing on may be a lifetime of surveillance baked into the settings.

  • App permissions often remain intact
  • Advertising IDs stay tied to previous behavior
  • Pre-installed tracking software may still be active

The moment it connects to Wi-Fi, that “starter phone” might begin broadcasting location data and device identifiers — linked to both your past and your child’s present.
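If you’re comfortable with a terminal, you can get a rough picture of what’s lurking before you wipe the device. Below is a minimal Python sketch (assumptions: an Android hand-me-down with USB debugging enabled, `adb` installed and on your PATH, and Python 3.9+; it’s illustrative, not a complete audit) that lists the leftover third-party apps and counts the permissions each one still holds:

```python
# Sketch: audit a hand-me-down Android phone over adb before passing it on.
# Assumes `adb` is on PATH, USB debugging is enabled, and the phone is connected.
import subprocess

def adb(*args: str) -> str:
    """Run an adb shell command and return its output as text."""
    result = subprocess.run(["adb", "shell", *args],
                            capture_output=True, text=True, check=True)
    return result.stdout

# 1) Every third-party app still installed from the previous owner.
leftover = [line.removeprefix("package:")
            for line in adb("pm", "list", "packages", "-3").splitlines() if line]
print(f"{len(leftover)} third-party apps still installed")

# 2) For each app, count the permissions it has been granted.
for package in leftover:
    dump = adb("dumpsys", "package", package)
    granted = [line for line in dump.splitlines() if "granted=true" in line]
    if granted:
        print(f"{package}: {len(granted)} granted permissions")

# A full factory reset is still the cleanest starting point;
# this only shows how much survives a casual hand-over.
```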

Don’t opt them in by default: 8 ways to push back

So how do we protect children in the digital age?

You don’t need to abandon technology. But you do need to understand what it’s doing, and make conscious choices about how much of your child’s life you expose.

Here are 8 tips:

1: Stop oversharing
Data brokers don’t wait for your kid to grow up. They’re already building the file.
Reconsider publicly posting their photos, location, and milestones. You’re building a permanent, searchable, biometric record of your child—without their consent.
If you want to share with friends or family, do it privately through tools like Signal stories or Ente photo sharing.

2: Avoid spyware
Sometimes the best way to protect your child is to foster a relationship of trust, and educate them about the dangers.
If monitoring is essential, use self-hosted tools. Don’t give third parties backend access to your child’s life.

3: Teach consent
Make digital consent a part of your parenting. Help your child understand their identity—and that it belongs to them.

4: Use aliases and VoIP numbers
Don’t link their real identity across platforms. Compartmentalization is protection.

5: Audit tech
Reset hand-me-down devices. Remove unnecessary apps. Disable default permissions.

6: Limit permissions
If an app asks for mic or camera access and doesn’t need it—deny it. Always audit.

7: Set boundaries with family
Ask relatives not to post about your child. You’re not overreacting—you’re defending someone who can’t yet opt in or out.

8: Ask hard questions
Ask your school how data is collected, stored, and shared. Push back on invasive platforms. Speak up when things don’t feel right.

Let Them Write Their Own Story

We’re not saying throw out your devices.
We’re saying understand what they really do.

This isn’t about fear. It’s about safety. It’s about giving your child the freedom to grow up and explore ideas without every version of themselves being permanently archived, and without being boxed in by a digital record they never chose to create.

Our job is to protect that freedom.
To give them the chance to write their own story.

Privacy is protection.
It’s autonomy.
It’s dignity.

And in a world where data compounds, links, and lives forever, every choice you make today shapes the freedom your child has tomorrow.

 

Yours in privacy,
Naomi

Naomi Brockwell is a privacy advocate and professional speaker, MC, interviewer, producer, and podcaster specialising in blockchain, cryptocurrency, and economics. She runs the NBTV channel on YouTube.

TNT is truly independent!

We don’t have a billionaire owner, and our unique reader-funded model keeps us free from political or corporate influence. This means we can fearlessly report the facts and shine a light on the misdeeds of those in power.

Consider a donation to keep our independent journalism running…

Swedish regional healthcare app run by chatbot makes serious errors

Published 30 June 2025, 10:19
– By Editorial Staff
2 minute read

An AI-based healthcare app used by the Gävleborg Regional Healthcare Authority in Sweden is now under scrutiny following serious assessment errors. In one notable case, an elderly man’s condition was classified as mild – he died the following day.

Healthcare staff are raising alarms about deficiencies deemed to threaten patient safety, and the app is internally described as a “disaster”.

Min vård Gävleborg (My Healthcare Gävleborg) is used when residents seek digital healthcare or call 1177 (Sweden’s national healthcare advice line). A chatbot asks questions to make an initial medical assessment and then refers the patient to an appropriate level of care. However, according to several doctors in the region, the system is not functioning safely enough.

In one documented case, the app classified an elderly man’s symptoms as mild. He died the following day. An incident report shows that the prioritization was incorrect, although it couldn’t be established that this directly caused the death.

In another case, an inmate at the Gävle Correctional Facility sought care for breathing difficulties – but was referred to a chat with a doctor in Ljusdal, instead of being sent to the emergency room.

– She should obviously have been sent to the emergency room, says Elisabeth Månsson Rydén, a doctor in Ljusdal and board member of the Swedish Association of General Medicine in Gävleborg, speaking to the tax-funded SVT.

“Completely insane”

Criticism from healthcare staff is extensive. Several doctors warn that the app underestimates serious symptoms, which could have life-threatening consequences. Meanwhile, there are examples of the opposite – where patients are given too high priority – which risks unnecessarily burdening healthcare services and causing delays for severely ill patients.

– Doctors have expressed in our meetings that Min vård Gävleborg is a disaster. This is completely insane, says Månsson Rydén.

Despite the death incident, Region Gävleborg has chosen not to report the event to either the Health and Social Care Inspectorate (IVO) or the Swedish Medical Products Agency.

– We looked at the case and decided it didn’t need to be reported, says Chief Medical Officer Agneta Larsson.

Other regions have reacted

The app was developed by Platform24, a Swedish company whose digital systems are used in several regions. In Västra Götaland Region, the app was paused after a report showed that three out of ten patients were assessed incorrectly. In Region Östergötland, similar deficiencies have led to a report to the Swedish Medical Products Agency. An investigation is ongoing.

Despite this, Agneta Larsson defends the version used in Gävleborg:

– We have reviewed our own system, and we cannot see these errors.

Platform24 has declined to be interviewed, but in a written response to Swedish Television, the company’s Medical Director Stina Perdahl defends the app’s basic principles.

“For patient safety reasons, the assessment is deliberately designed to be a bit more cautious initially”, the company claims.

Your TV is spying on you

Your TV is taking snapshots of everything you watch.

Published 28 June 2025
– By Naomi Brockwell
6 minute read

You sit down to relax, put on your favorite show, and settle in for a night of binge-watching. But while you’re watching your TV… your TV is watching you.

Smart TVs take constant snapshots of everything you watch. Sometimes hundreds of snapshots a second.

Welcome to the future of “entertainment”.

What’s actually happening behind the screens?

Smart TVs are just modern TVs. It’s almost impossible to buy a non-smart TV anymore. And they’re basically just oversized internet-connected computers. They come preloaded with apps like Amazon Prime Video, YouTube, and Hulu.

They also come preloaded with surveillance.

A recent study from UC Davis researchers tested TVs from Samsung and LG, two of the biggest players in the market, and came across something known as ACR: Automatic Content Recognition.

What is ACR and why should you care?

ACR is a surveillance technology built into the operating systems of smart TVs. This system takes continuous snapshots of whatever is playing to identify exactly what is on the screen.

LG’s privacy policy states they take a snapshot every 10 milliseconds. That’s 100 per second.
Samsung does it every 500 milliseconds.

From these snapshots, the TV generates a content fingerprint and sends it to the manufacturer. That fingerprint is then matched against a massive database to figure out exactly what you’re watching.
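What does a “content fingerprint” look like? One common family of techniques is perceptual hashing, where each frame is reduced to a tiny signature that can be compared against a database. The Python sketch below is only an illustration of that idea (it assumes the Pillow package; the “database” values are invented, and real ACR systems are proprietary and far more sophisticated):

```python
# Illustrative "snapshot -> fingerprint -> database match" pipeline.
# Assumes Pillow is installed (pip install Pillow); not any vendor's actual ACR.
from PIL import Image

def frame_fingerprint(img: Image.Image) -> int:
    """A simple 64-bit 'average hash' of one video frame."""
    small = img.convert("L").resize((8, 8))        # tiny greyscale thumbnail
    pixels = list(small.getdata())
    avg = sum(pixels) / len(pixels)
    bits = 0
    for pixel in pixels:                           # one bit per pixel: above/below average
        bits = (bits << 1) | (pixel > avg)
    return bits

def hamming(a: int, b: int) -> int:
    """How many bits differ between two fingerprints."""
    return bin(a ^ b).count("1")

# Stand-in for one screen grab; a real TV captures this from its display pipeline.
snapshot = Image.new("RGB", (1920, 1080), "navy")

# Invented database of known content fingerprints.
known_content = {"Show A, S1E3": 0x9F3A11C055AA0F0F, "Movie B": 0x123456789ABCDEF0}

fp = frame_fingerprint(snapshot)
closest = min(known_content, key=lambda title: hamming(fp, known_content[title]))
print(f"Fingerprint {fp:016x} is closest to: {closest}")
```

The pattern is the point: a few dozen bytes per snapshot are enough to identify what’s on screen, without uploading the image itself.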

Let that sink in. Your television is taking snapshots of everything you’re watching.

And it doesn’t just apply to shows you’re watching on the TV. Even if you plug in your laptop and use the TV as a dumb monitor, it’s still taking snapshots.

  • Zoom calls
  • Emails
  • Banking apps
  • Personal photos

All of it is being captured in snapshots: audio, video, or sometimes both.

Currently, the way ACR works, the snapshots themselves are not necessarily sent off-device, but your TV is still collecting them. And we all know that AI is getting better and better. It’s now possible for AI to identify everything in a video or photo: faces, emotions, background details.

As the technology continues to improve, we should presume that TVs will move from fingerprint-based ACR to automatic AI-driven content recognition.

As Toby Lewis from Darktrace told The Guardian:

“Facial recognition, speech-to-text, content analysis—these can all be used together to build an in-depth picture of an individual user”.

This is where we’re headed.

This data doesn’t exist in a vacuum

TV manufacturers don’t just sit on this data. They monetize it.

Viewing habits are combined with data from your other devices: phones, tablets, smart fridges, wearables. Then it’s sold to third parties. Advertisers. Data brokers. Political campaigns.

One study found that almost every TV they tested contacted Netflix servers, even when no Netflix account was configured.

So who’s getting your data?

We don’t know. That’s the point.

How your data gets weaponized

Let’s say your TV learns that:

  • You watch sports every Sunday
  • You binge true crime on weekdays
  • You play YouTube fashion hauls in the afternoons

These habits are then tied to a profile of your IP address, email, and household.

Now imagine that profile combined with:

  • Your Amazon purchase history
  • Your travel patterns
  • Your social media behavior
  • Your voting record

That’s the real goal: total psychological profiling. Knowing not just what you do, but what you’re likely to do. What you’ll buy, how you’ll vote, who you’ll trust.

In other words, your smart TV isn’t just spying.

It’s helping others manipulate you.

Why didn’t I hear about this when I set up my TV?

Because they don’t want you to know.

When TV manufacturers first started doing this, they never informed users. The practice slipped quietly by.

A 2017 FTC lawsuit revealed that Vizio was collecting viewing data from 11 million TVs and selling it without ever getting user consent.

These days, companies technically include “disclosures” in their Terms of Service. But they’re buried under vague names like:

  • “Viewing Information Services”
  • “Live Plus”
  • “Personalized Experiences”

Have you ever actually read those menus? Didn’t think so.

These aren’t written to inform you. They’re written to shield corporations from lawsuits.

If users actually understood what was happening, many would opt out entirely. But the system is designed to confuse, hiding the fact that surveillance devices have entered our living rooms and bedrooms without us ever realizing it.

Researchers are being silenced

These systems aren’t just intentionally opaque and confusing; companies design them to discourage scrutiny.

And when researchers try to investigate these systems, they hit two major roadblocks:

  1. Technical – Jailbreaking modern Smart TVs is nearly impossible. Their systems are locked down, and the code is proprietary.
  2. Legal – Researchers who attempt to reverse-engineer modern-day tech risk being sued under the Computer Fraud and Abuse Act (CFAA), a vague and outdated law that doesn’t distinguish between malicious actors and researchers trying to inform the public.

As a result, most of what we know about these TVs comes from inference: guessing what’s happening by watching network traffic, since direct access is often blocked.
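Here is a minimal Python sketch of that kind of passive observation (assumptions: the `scapy` package, root privileges, a vantage point on your own network, such as the router, from which the TV’s traffic is visible, and a placeholder IP address). It simply prints each domain the TV looks up:

```python
# Sketch of passive observation: which domains does the smart TV resolve?
# Assumes scapy is installed (pip install scapy), the script runs as root on a
# machine that can see the TV's traffic, and TV_IP is the TV's LAN address.
from scapy.all import sniff, DNSQR, IP

TV_IP = "192.168.1.50"   # placeholder: your TV's address on your own network
seen = set()

def log_dns(pkt):
    # Only DNS queries originating from the TV.
    if pkt.haslayer(DNSQR) and pkt.haslayer(IP) and pkt[IP].src == TV_IP:
        domain = pkt[DNSQR].qname.decode(errors="replace").rstrip(".")
        if domain not in seen:
            seen.add(domain)
            print("TV looked up:", domain)

# Capture DNS traffic until interrupted (Ctrl+C), printing each new domain once.
sniff(filter="udp port 53", prn=log_dns, store=False)
```

It can’t see inside encrypted payloads, but the list of endpoints alone is often revealing, and it’s roughly all outsiders get.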

That means most of this surveillance happens in the dark. Unchallenged, unverified, and largely unnoticed.

We need stronger protections for privacy researchers, clearer disclosures for users, and real pressure on companies to stop hiding behind complexity.

Because if we can’t see what the tech is doing, we can’t choose to opt out.

What you can do

Here are the most effective steps you can take to protect your privacy:

1. Don’t connect your TV to the internet.
If you keep the Wi-Fi off, the TV can’t send data to manufacturers or advertisers. Use a laptop or trusted device for streaming instead. If the TV stays offline forever, the data it collects never leaves the device.

2. Turn off ACR settings.
Dig through the menus and disable everything related to viewing info, advertising, and personalization. Look for settings like “Live Plus” or “Viewing Information Services.” Be thorough. These options are often buried.

3. Use dumb displays.
It’s almost impossible to buy a non-smart TV today. The market is flooded with “smart” everything. But a few dumb projectors still exist. And some monitors are safer too, though they don’t go to TV sizes yet.

4. Be vocal.
Ask hard questions when buying devices. Demand that manufacturers disclose how they use your data. Let them know that privacy matters to you.

5. Push for CFAA reform.
The CFAA is being weaponized to silence researchers who try to expose surveillance. If we want to understand how our tech works, researchers must be protected, not punished. We need to fight back against these chilling effects and support organizations doing this work.

The Ludlow Institute is now funding researchers who reverse-engineer surveillance tech. If you’re a researcher, or want to support one, get in touch.

This is just one piece of the puzzle

Smart TVs are among the most aggressive tracking devices in your home. But they’re not alone. Nearly every “smart” device has the same capabilities to build a profile on you: phones, thermostats, lightbulbs, doorbells, fridges.

This surveillance has been normalized. But it’s not normal.

We shouldn’t have let faceless corporations and governments into our bedrooms and living rooms. But now that they’re here, we have to push back.

That starts with awareness. Then it’s up to us to make better choices and help others do the same.

Let’s take back our homes.
Let’s stop normalizing surveillance.

Because privacy isn’t extreme.
It’s common sense.

 

Yours in Privacy,
Naomi

Naomi Brockwell is a privacy advocate and professional speaker, MC, interviewer, producer, and podcaster specialising in blockchain, cryptocurrency, and economics. She runs the NBTV channel on YouTube.

Tech giants’ executives become US military officers – gain power over future warfare

Published 26 June 2025
– By Editorial Staff
Data from platforms such as Facebook, Instagram, and WhatsApp could be linked to US military surveillance systems, warns the technology platform Take Back Our Tech (TBOT).
3 minute read

Four senior executives from tech giants Meta, Palantir, and OpenAI have recently been sworn into the US Army Reserve with the rank of lieutenant colonel – an officer rank that normally requires over 20 years of active military service.

The group is part of a new initiative called Detachment 201, aimed at transforming the American military by integrating advanced technologies such as drones, robotics, augmented reality (AR), and AI support.

The new recruits are:

  • Shyam Sankar, Chief Technology Officer (CTO) of Palantir
  • Andrew Bosworth, Chief Technology Officer of Meta
  • Kevin Weil, Chief Product Officer (CPO) of OpenAI
  • Bob McGrew, former Research Director at OpenAI

According to the technology platform Take Back Our Tech (TBOT), which monitors these developments, these are not symbolic appointments.

“These aren’t random picks. They’re intentional and bring representation and collaboration from the highest level of these companies”, writes founder Hakeem Anwar.

Meta and Palantir on the battlefield

Although the newly appointed officers must formally undergo physical training and weapons instruction, they are expected to participate primarily in digital defense. Their mission is to help the army adapt to a new form of warfare where technology takes center stage.

“The battlefield is truly transforming and so is the government”, notes Anwar.

According to Anwar, the recruitment of Palantir’s CTO could mean the military will start using the company’s Gotham platform as standard. Gotham is a digital interface that collects intelligence and monitors targets through satellite imagery and video feeds.

Meta’s CTO is expected to contribute to integrating data from platforms like Facebook, Instagram, and WhatsApp, which according to TBOT could be connected to military surveillance systems. These platforms are used by billions of people worldwide and contain vast amounts of movement, communication, and behavioral data.

“The activities, movements, and communications from these apps could be integrated into this surveillance network”, writes Anwar, adding:

“It’s no wonder why countries opposed to the US like China have been banning Meta products”.

Leaked project reveals AI initiative for entire government apparatus

Regarding OpenAI’s role, Anwar suggests that Kevin Weil and Bob McGrew might design an AI interface for the army, where soldiers would have access to AI chatbots to support strategy and field tactics.

As Detachment 201 becomes public, a separate AI initiative within the US government has also come to light. The website ai.gov, still under development, reveals a plan to equip the entire federal administration with AI tools – from code assistants to AI chatbots for internal use.

TBOT notes that the initiative relies on AI models from OpenAI, Google, and Anthropic. The project is led by the General Services Administration, under former Tesla engineer Thomas Shedd, who has also been involved in DOGE, the Department of Government Efficiency.

“The irony? The website itself was leaked during development, demonstrating that AI isn’t foolproof and can’t replace human expertise”, comments Anwar.

According to the tech site’s founder, several federal employees are critical of the initiative, concerned about insufficient safeguards.

“Without proper safeguards, diving head first into AI could create new security vulnerabilities, disrupt operations, and further erode privacy”, he writes.

Deepfakes are getting scary good

Why your next “urgent” call or ad might be an AI scam.

Published 22 June 2025
– By Naomi Brockwell
4 minute read

This week I watched a compilation of video clips that looked absolutely real. Police officers, bank managers, disaster relief workers, product endorsements… but every single one was generated by AI. None of the people, voices, or backdrops ever existed.

It’s fascinating… and chilling. Because the potential for misuse is growing fast, and most people aren’t ready.

This same technology already works in real time. Someone can join a Zoom call, flip a switch, and suddenly look and sound like your boss, your spouse, or your favorite celebrity. That opens the door to a new generation of scams, and people everywhere are falling for them.

The old scripts, supercharged by AI

“Ma’am, I’m seeing multiple threats on your computer. I need remote access right now to secure your files”.
Tech-support scams used to rely on a shaky phone line and a thick accent. Now an AI voice clone mimics a calm AppleCare rep, shares a fake malware alert, and convinces you to install remote-control software. One click later, they’re digging through your files and draining your bank account.

“We’ve detected suspicious activity on your account. Please verify your login”.
Phishing emails are old news. But now people are getting FaceTime calls that look like their bank manager. The cloned face reads off the last four digits of your card, then asks you to confirm the rest. That’s all they need.

“Miracle hearing aids are only $14.99 today. Click to order”.
Fake doctors in lab coats (generated by AI) are popping up in ads, selling junk gadgets. The product either never arrives, or the site skims your card info.

“We just need your Medicare number to update your benefits for this year.”
Seniors are being targeted with robocalls that splice in their grandchild’s real voice. Once the scammer gets your Medicare ID, they start billing for fake procedures that mess up your records.

“Congratulations, you’ve won $1,000,000! Just pay the small claiming fee today”.
Add a fake newscaster to an old lottery scam, and suddenly it feels real. Victims rush to “claim their prize” and wire the fee… straight to a fraudster.

“We’re raising funds for a sick parishioner—can you grab some Apple gift cards?”
Community members are seeing AI-generated videos of their own pastor asking for help. Once the card numbers are sent, they’re gone.

“Can you believe these concert tickets are so cheap?”
AI-generated influencers post about crazy ticket deals. Victims buy, receive a QR code, and show up at the venue, only to find the code has already been used.

“Help our disaster-relief effort.”
Hours after a real hurricane or earthquake, fake charity appeals start circulating. The links look urgent and heartfelt, and route donations to crypto wallets that vanish.

Why we fall for it and what to watch out for

High pressure
Every scammer plays the same four notes: fear, urgency, greed, and empathy. They hit you with a problem that feels like an emergency, offer a reward too good to miss, or ask for help in a moment of vulnerability. These scams only work if you rush. That’s their weak spot. If something feels high-pressure, pause. Don’t make decisions in panic. You can always ask someone you trust for advice.

Borrowed credibility
Deepfakes hijack your instincts. When someone looks and sounds like your boss, your parent, or a celebrity, your brain wants to trust them. But just because you recognize the face doesn’t mean it’s real. Don’t assume a familiar voice or face is proof of identity. Synthetic media can be convincing enough to fool even close friends.

Trusted platforms become delivery trucks
We tend to relax when something comes through a trusted source — like a Zoom call, a blue-check account, or an ad on a mainstream site. But scammers exploit that trust. Just because something shows up on a legitimate platform doesn’t mean it’s safe. The platform’s credibility rubs off on the fake.

Deepfakes aren’t just a technology problem, they’re a human one. For most of history, our eyes and ears were reliable lie detectors. Now, that shortcut is broken. And until our instincts catch up, skepticism is your best defense.

How to stay one step ahead

  1. Slow the game down.
    Scammers rely on speed. Hang up, close the tab, take a breath. If it’s real, it’ll still be there in five minutes.
  2. Verify on a second channel.
    If your “bank” or “boss” calls, reach out using a number or app you already trust. Don’t rely on the contact info they provide.
  3. Lock down big moves.
    Use two-factor authentication, passphrases, or code words for any important accounts or transactions.
  4. Educate your circle.
    Most deepfake losses happen because someone else panicked. Talk to your family, especially seniors. Share this newsletter. Report fake ads. Keep each other sharp.

Many of these scams fall apart the moment you stop and think. The goal is always the same: get you to act fast. But you don’t have to play along.

Stay calm. Stay sharp. Stay skeptical.

 

Yours in privacy,
Naomi

Naomi Brockwell is a privacy advocate and professional speaker, MC, interviewer, producer, and podcaster specialising in blockchain, cryptocurrency, and economics. She runs the NBTV channel on YouTube.

Our independent journalism needs your support!

Consider a donation. You can donate any amount of your choosing, one-time payment or even monthly. We appreciate all of your donations to keep us alive and running.