
Opt-in childhood

What we signed them up for before they could object.

Published 7 June 2025
– By Naomi Brockwell

A few weeks ago, we published an article about oversharing on social media, and how posting photos, milestones, and personal details can quietly build a digital footprint for your child that follows them for life.

But social media isn’t the only culprit.

Today, I want to talk about the devices we give our kids: the toys that talk, the tablets that teach, the monitors that watch while they sleep.

These aren’t just tools of convenience or connection. Often, they’re Trojan horses, collecting and transmitting data in ways most parents never realize.

We think we’re protecting our kids.
But in many cases, we’re signing them up for surveillance systems they can’t understand, and wouldn’t consent to if they could.

How much do you know about the toys your child is playing with?

What data are they collecting?
With whom are they sharing it?
How safely are they storing it to protect against hackers?

Take VTech, for example — a hugely popular toy company, marketed as safe, educational, and kid-friendly.

In 2015, VTech was hacked. The breach wasn’t small:

  • 6.3 million children’s profiles were exposed, along with nearly 5 million parent accounts
  • The stolen data included birthdays, home addresses, chat logs, voice recordings… even photos children had taken on their tablets

Terms no child can understand—but every parent accepts

It’s not just hackers we should be mindful of — often, these companies are allowed to do almost anything they want with the data they collect, including selling it to third parties.

When you hand your child a toy that connects to Wi-Fi or Bluetooth, you might be agreeing to terms that say:

  • Their speech can be used for targeted advertising
  • Their conversations may be retained indefinitely
  • The company can change the terms at any time, without notice

And most parents will never know.

“Safe” devices with open doors

What about things like baby monitors and nanny cams?

Years ago, we did a deep dive into home cameras, and almost all popular models were built without end-to-end encryption. That means the companies that make them can access your video feed.
How much do you know about that company?
How well do you trust every employee who might be able to access that feed?

But it’s not just insiders you should worry about.
Many of these kiddy cams are notoriously easy to hack. The internet is full of real-world examples of strangers breaking into monitors, watching, and even speaking to infants.

There are even publicly available tools that scan the internet and map thousands of unsecured camera feeds, sortable by country, type, and brand.
If your monitor isn’t properly secured, it’s not just vulnerable — it’s visible.

Mozilla, through its Privacy Not Included campaign, audited dozens of smart home devices and baby monitors. They assessed whether products had basic security features like encryption, secure logins, and clear data-use policies. The verdict? Even many top-selling monitors had zero safeguards in place.

These are the products we’re told are protecting our kids.

Apps that glitch, and let you track other people’s kids

A T-Mobile child-tracking app recently glitched.
A mother refreshed the screen—expecting to see her kids’ location.
Instead, she saw a stranger’s child. Then another. Then another.

Each refresh revealed a new kid in real time.

The app was broken, but the consequences weren’t abstract.
That’s dozens of children’s locations broadcast to the wrong person.
The feature that was supposed to provide control did the opposite.

Schools are part of the problem, too

Your child’s school likely collects and stores sensitive data—without strong protections or meaningful consent.

  • In Virginia, thousands of student records were accidentally made public
  • In Seattle, a mental health survey led to deeply personal data being stored in unsecured systems

And it’s not just accidents.

A 2015 study investigated “K–12 data broker” marketplaces that trade in everything from ethnicity and affluence to personality traits and reproductive health status.
Some companies offer data on children as young as two.
Others admit they’ve sold lists of 14- and 15-year-old girls for “family planning services.”

Surveillance disguised as protection

Let’s be clear: the internet is a minefield, filled with ways children can be tracked, profiled, or preyed upon. Protecting them is more important than ever.

One category of tools that’s exploded in popularity is the parental control app—software that lets you see everything happening on your child’s device:
The messages they send. The photos they take. The websites they visit.

The intention might be good. But the execution is often disastrous.

Most of these apps are not end-to-end encrypted (a simplified sketch of what that means follows this list), meaning:

  • Faceless companies gain full access to your child’s messages, photos, and GPS
  • They operate in stealth mode, functionally indistinguishable from spyware
  • And they rarely protect that data with strong security
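
If you’re unfamiliar with the term, here is a deliberately simplified illustration of what end-to-end encryption changes. Real E2E systems negotiate keys between devices (for example, via the Signal protocol); this toy sketch uses a single shared symmetric key purely to show the core idea: an E2E vendor’s server only ever relays ciphertext it cannot read. It uses the third-party cryptography package (pip install cryptography), and everything in it is our illustration, not any particular app’s code.

    # Simplified sketch of the end-to-end idea: encrypt on one device,
    # decrypt on another, and note that the relay server sees only ciphertext.
    # Real E2E apps derive keys per device pair; this shared key is a stand-in.
    from cryptography.fernet import Fernet

    shared_key = Fernet.generate_key()   # in real E2E, never uploaded anywhere
    child_device = Fernet(shared_key)
    parent_device = Fernet(shared_key)

    ciphertext = child_device.encrypt(b"At soccer practice until 5pm")
    print(ciphertext)  # all an E2E vendor's server would ever handle

    print(parent_device.decrypt(ciphertext).decode())  # readable only at the endpoints

Without that property, the plaintext itself sits on the company’s servers, which is exactly the exposure described above.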

Again, how much do you know about these companies?
And even if you trust them, how well are they protecting this data from everyone else?

The “KidSecurity” app left 300 million records exposed, including real-time child locations and fragments of parents’ credit card numbers.
The “mSpy” app leaked private messages and movement histories in multiple breaches.

When you install one of these apps, you aren’t the only one gaining access to your child’s world.
The company that built it gets access too… along with everyone it fails to keep out.

What these breaches really teach us

Here’s the takeaway from all these hacks and security failures:

Tech fails.

We don’t expect it to be perfect.
But when the stakes are this high — when we’re talking about the private lives of our children — we should be mindful of a few things:

1) Maybe companies shouldn’t be collecting so much information if they can’t properly protect it.
2) Maybe we shouldn’t be so quick to hand that information over in the first place.

When the data involves our kids, the margin for error disappears.

Your old phone might still be spying

Finally, let’s talk about hand-me-downs.

When kids get their first phone, it’s often filled with tracking, sharing, and background data collection from years of use. What you’re really passing on may be a lifetime of surveillance baked into the settings.

  • App permissions often remain intact
  • Advertising IDs stay tied to previous behavior
  • Pre-installed tracking software may still be active

The moment it connects to Wi-Fi, that “starter phone” might begin broadcasting location data and device identifiers — linked to both your past and your child’s present.
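
If you’re comfortable with a command line, you can check this yourself before handing the phone over. The sketch below is a rough illustration, not a vetted tool: it assumes Python 3.9+, that adb (Android’s standard debugging bridge) is installed, and that USB debugging is enabled on the phone. The permission names are real Android identifiers; which ones matter to you is your call.

    # Rough sketch: flag user-installed Android apps that still hold
    # sensitive permissions, by shelling out to the standard adb tool.
    import subprocess

    SENSITIVE = [
        "android.permission.ACCESS_FINE_LOCATION",
        "android.permission.RECORD_AUDIO",
        "android.permission.CAMERA",
    ]

    def adb(*args: str) -> str:
        out = subprocess.run(["adb", *args], capture_output=True, text=True, check=True)
        return out.stdout

    def apps_with_sensitive_grants() -> dict[str, list[str]]:
        findings: dict[str, list[str]] = {}
        # "pm list packages -3" lists only third-party (user-installed) packages.
        for line in adb("shell", "pm", "list", "packages", "-3").splitlines():
            pkg = line.removeprefix("package:").strip()
            # "dumpsys package <pkg>" prints each permission with granted=true/false.
            dump = adb("shell", "dumpsys", "package", pkg)
            hits = [p for p in SENSITIVE if f"{p}: granted=true" in dump]
            if hits:
                findings[pkg] = hits
        return findings

    if __name__ == "__main__":
        for pkg, perms in apps_with_sensitive_grants().items():
            print(pkg, "->", ", ".join(p.rsplit(".", 1)[-1] for p in perms))

A factory reset is still the cleaner option; a script like this just shows how much residue a “wiped by hand” phone can carry.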

Don’t opt them in by default: 8 ways to push back

So how do we protect children in the digital age?

You don’t need to abandon technology. But you do need to understand what it’s doing, and make conscious choices about how much of your child’s life you expose.

Here are 8 tips:

1: Stop oversharing
Data brokers don’t wait for your kid to grow up. They’re already building the file.
Reconsider publicly posting their photos, location, and milestones. You’re building a permanent, searchable, biometric record of your child—without their consent.
If you want to share with friends or family, do it privately through tools like Signal stories or Ente photo sharing.

2: Avoid spyware
Sometimes the best way to protect your child is to foster a relationship of trust, and educate them about the dangers.
If monitoring is essential, use self-hosted tools. Don’t give third parties backend access to your child’s life.

3: Teach consent
Make digital consent a part of your parenting. Help your child understand their identity—and that it belongs to them.

4: Use aliases and VoIP numbers
Don’t link their real identity across platforms. Compartmentalization is protection.

5: Audit tech
Reset hand-me-down devices. Remove unnecessary apps. Disable default permissions.

6: Limit permissions
If an app asks for mic or camera access and doesn’t need it—deny it. Always audit.

7: Set boundaries with family
Ask relatives not to post about your child. You’re not overreacting—you’re defending someone who can’t yet opt in or out.

8: Ask hard questions
Ask your school how data is collected, stored, and shared. Push back on invasive platforms. Speak up when things don’t feel right.

Let them write their own story

We’re not saying throw out your devices.
We’re saying understand what they really do.

This isn’t about fear. It’s about safety. It’s about giving your child the freedom to grow up and explore ideas without every version of themselves being permanently archived, and without being boxed in by a digital record they never chose to create.

Our job is to protect that freedom.
To give them the chance to write their own story.

Privacy is protection.
It’s autonomy.
It’s dignity.

And in a world where data compounds, links, and lives forever, every choice you make today shapes the freedom your child has tomorrow.

 

Yours in privacy,
Naomi

Naomi Brockwell is a privacy advocate and professional speaker, MC, interviewer, producer and podcaster, specialising in blockchain, cryptocurrency and economics. She runs the NBTV channel on YouTube.


Tech company bankrupt – “advanced AI” was 700 Indians

Published 14 June 2025
– By Editorial Staff

An AI company that marketed itself as a technological pioneer – and attracted investments from Microsoft, among others – has gone bankrupt. In the aftermath, it has been revealed that the technology was largely based on human labor, despite promises of advanced artificial intelligence.

Builder.ai, a British startup formerly known as Engineer.ai, claimed that their AI assistant Natasha could build apps as easily as ordering pizza. But as early as 2019, the Wall Street Journal revealed that much of the coding was actually done manually by a total of about 700 programmers in India.

Despite the allegations, Builder.ai secured over $450 million in funding from investors such as Microsoft, Qatar Investment Authority, IFC, and SoftBank’s DeepCore. At its peak, the company was valued at $1.5 billion.

In May 2025, founder and CEO Sachin Dev Duggal stepped down from his position, and when the new management took over, it emerged that the revelations made in 2019 were only the tip of the iceberg. For example, the company had reported revenues of $220 million in 2024, while the actual figures were $55 million. Furthermore, the company is suspected of inflating the figures through circular transactions and fake sales via “third-party resellers”, reports the Financial Times.

Following the new revelations, lenders froze the company’s account, forcing Builder.ai into bankruptcy. The company is now accused of so-called AI washing, which means that a company exaggerates or falsely claims that its products or services are powered by advanced artificial intelligence in order to attract investors and customers.

The company’s heavy promotion of “Natasha” as a revolutionary AI solution turned out to be a facade – behind the deceptive marketing ploy lay traditional, human-driven work and financial irregularities.

OpenAI now keeps your ChatGPT logs… Even if you delete them

Why trusting companies isn’t enough—and what you can do instead.

Published 14 June 2025
– By Naomi Brockwell

This week, we learned something disturbing: OpenAI is now being forced to retain all ChatGPT logs, even the ones users deliberately delete.

That includes:

  • Manually deleted conversations
  • “Temporary Chat” sessions that were never supposed to persist
  • Confidential business data passed through OpenAI’s API

The reason? A court order.

The New York Times and other media companies are suing OpenAI over alleged copyright infringement. As part of the lawsuit, they speculated that people might be using ChatGPT to bypass paywalls, and deleting their chats to cover their tracks. Based on that speculation alone, a judge issued a sweeping preservation order forcing OpenAI to retain every output log going forward.

Even OpenAI doesn’t know how long they’ll be required to keep this data.

This is bigger than just one court case

Let’s be clear: OpenAI is not a privacy tool. They collect a vast amount of user data, and everything you type is tied to your real-world identity. (They don’t even allow VoIP numbers at signup, only real mobile numbers.) OpenAI is a fantastic tool for productivity, coding, research, and brainstorming. But it is not a place to store your secrets.

That said, credit where it’s due: OpenAI is pushing back. They’ve challenged the court order, arguing it undermines user privacy, violates global norms, and forces them to retain sensitive data users explicitly asked to delete.

And they’re right to fight it.

If a company promises, “We won’t keep this”, and users act on that promise, they should be able to trust it. When that promise is quietly overridden by a legal mandate—and users only find out months later—it destroys the trust we rely on to function in a digital society.

Why this should scare you

This isn’t about sneaky opt-ins or buried fine print. It’s about people making deliberate choices to delete sensitive data—and those deletions being ignored.

That’s the real problem: the nullification of your right to delete.

Private thoughts. Business strategy. Health questions. Intimate disclosures. These are now being held under legal lock, despite clear user intent for them to be erased.

When a platform offers a “Delete” button or advertises “Temporary Chat”, the public expectation is clear: that information will not persist.

But in a system built for compliance, not consent, those expectations don’t matter.

I wish this weren’t the case

I want to live in a world where:

  • You can go to the doctor and trust that your medical records won’t be subpoenaed
  • You can talk to a lawyer without fearing your conversations could become public
  • Companies that want to protect your privacy aren’t forced to become surveillance warehouses

But we don’t live in that world.

We live in a world where:

  • Prosecutors can compel companies to hand over privileged legal communications (just ask Roger Ver’s lawyers)
  • Government entities can override privacy policies, without user consent or notification
  • “Delete” no longer means delete

This isn’t privacy. It’s panopticon compliance.

So what can you do?

You can’t change the court order.
But you can stop feeding the machine.

Here’s how to protect yourself:

1. Be careful what you share

When you’re logged in to centralized tools like ChatGPT, Claude, or Perplexity, your activity is stored and linked to a single identity across sessions. That makes your full history a treasure trove of data.

You can still use these tools for light, non-sensitive tasks, but be careful not to share:

  • Sensitive information
  • Legal or business strategies
  • Financial details
  • Anything that could harm you if leaked

These tools are great for brainstorming and productivity, but not for contracts, confessions, or client files.

2. Use privacy-respecting platforms (with caution)

If you want to use AI tools with stronger privacy protections, here are two promising options:
(there are many more, let us know in the comments about your favorites)

Brave’s Leo

  • Uses reverse proxies to strip IP addresses
  • Promises zero logging of queries
  • Supports local model integration so your data never leaves your device
  • Still requires trust in Brave’s infrastructure

Venice.ai

  • No account required
  • Strips IP addresses and doesn’t link sessions together
  • Uses a decentralized GPU marketplace to process your queries
  • Important caveat: Venice is just a frontend—the compute providers running your prompts can see what you input. Venice can’t enforce logging policies on backend providers.
  • Because it’s decentralized, at least no single provider can build a profile of you across sessions

In short: I trust Brave with more data, because privacy is central to their mission. And I trust Venice’s promise not to log data, but am hesitant about trusting faceless GPU providers to adhere to the same no-logging policies. But as a confidence booster, Venice’s decentralized model means even those processing your queries can’t see the full picture, which is a powerful safeguard in itself. So both options above are good for different purposes.

3. Run AI locally for maximum privacy

This is the gold standard.

When you run an AI model locally, your data never leaves your machine. No cloud. No logs.

Tools like Ollama, paired with OpenWebUI, let you easily run powerful open-source models on your own device.
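
As a rough sketch of how simple this can be: once Ollama is installed and a model has been pulled (for example with ollama pull llama3; the model name here is an arbitrary choice), it serves a local HTTP API on port 11434 by default. The short script below, which uses the third-party requests package, sends a prompt to that local endpoint; nothing in the exchange leaves your machine.

    # Minimal sketch: query a locally running Ollama model.
    # Assumes Ollama is serving on its default local port and that the
    # chosen model has already been pulled with "ollama pull".
    import requests

    def ask_local_model(prompt: str, model: str = "llama3") -> str:
        # Ollama listens on localhost by default, so this request stays on-device.
        resp = requests.post(
            "http://localhost:11434/api/generate",
            json={"model": model, "prompt": prompt, "stream": False},
            timeout=300,
        )
        resp.raise_for_status()
        return resp.json()["response"]

    if __name__ == "__main__":
        print(ask_local_model("Why does local inference protect privacy?"))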

We published a complete guide for getting started—even if you’re not technical.

The real battle: Your right to privacy

This isn’t just about one lawsuit or one company.

It’s about whether privacy means anything in the digital age.

AI tools are rapidly becoming our therapists, doctors, legal advisors, and confidants. They know what we eat, what we’re worried about, what we dream of, and what we fear. That kind of relationship demands confidentiality.

And yet, here we are, watching that expectation collapse under the weight of compliance.

If courts can force companies to preserve deleted chats indefinitely, then deletion becomes a lie. Consent becomes meaningless. And companies become surveillance hubs for whoever yells loudest in court.

The Fourth Amendment was supposed to stop this. It says a warrant is required before private data can be seized. But courts are now sidestepping that by ordering companies to keep everything in advance—just in case.

We should be fighting to reclaim that right. Not normalizing its erosion.

Final thoughts

We are in a moment of profound transition.

AI is rapidly becoming integrated into our daily lives—not just as a search tool, but as a confidant, advisor, and assistant. That makes the stakes for privacy higher than ever.

If we want a future where privacy survives, we can’t just rely on the courts to protect us. We have to be deliberate about how we engage with technology—and push for tools that respect us by design.

As Erik Voorhees put it: “The only way to respect user privacy is to not keep their data in the first place”.

The good news? That kind of privacy is still possible.
You have options. You can use AI on your terms.

Just remember:

Privacy isn’t about hiding. It’s about control.
About choosing what you share—and with whom.

And right now, the smartest choice might be to share a whole lot less.

 

Yours in privacy,
Naomi

Naomi Brockwell is a privacy advocate and professional speaker, MC, interviewer, producer and podcaster, specialising in blockchain, cryptocurrency and economics. She runs the NBTV channel on YouTube.

Swedish police urge parents to delete chat apps from children’s phones


Published 13 June 2025
– By Editorial Staff

Ahead of the summer holidays, the Swedish police are warning that criminal gangs are using social media to recruit young people into crime. On Facebook, the authorities have published a list of apps that parents should keep a close eye on – or delete immediately.

Critics argue, however, that the list is arbitrary and that it is strange for the police to urge parents to delete apps that are used by Swedish authorities.

During the summer holidays, adults are often less present in young people’s everyday lives, while screen time increases. According to the police, this creates increased vulnerability. Criminal networks then try to recruit young people to handle weapons, sell drugs, or participate in serious violent crimes such as shootings and explosions.

To prevent this, a national information campaign has been launched in collaboration with the County Administrative Board. Together, the authorities have compiled a list of mobile apps that they believe pose a significant risk:

  • Delete immediately: Signal, Telegram, Wickr Me
  • Keep control over: Snapchat, WhatsApp, Discord, Messenger
  • Monitor closely: TikTok, Instagram

Digital parental presence

Maja Karlsson, municipal police officer in Jönköping, also emphasizes the importance of digital parental presence:

– We need to increase digital control and knowledge about which apps my child is using, who they are in contact with, and why they have downloaded different types of communication apps.

The police recommend that parents talk openly with their children about what they do online and use technical aids such as parental controls.

– There are tools available for parents who find it difficult. It’s not impossible, help is available, Karlsson continues.

Parents are also encouraged to establish fixed routines for their children and ensure they have access to meaningful summer activities.

“Complete madness”

However, the list has been met with harsh criticism from several quarters. Users point out that the Signal app is also used by the Swedish Armed Forces and question why the police list it as dangerous.

“If general apps like Signal are considered dangerous, the phone app and text messaging should be first on the list”, writes one user.

Critics argue that it is not the apps themselves but how they are used that is crucial, and find it remarkable that the police are arbitrarily and without deeper justification telling parents which messaging apps are okay to use and which are not.

“Complete madness to recommend uninstalling chat apps so broadly. You should know better”, comments another upset reader.

Organic Maps – the map app that doesn’t map you

Advertising partnership with Teuton Systems

Tired of Google Maps tracking you? Here's the free alternative that lets you navigate completely offline!

Published 12 June 2025

In our series on open, surveillance-free apps, we take a closer look at Organic Maps – a map app that stands out as a privacy-friendly alternative to Google Maps. For many smartphone users, Google Maps has become the standard for navigation, but that convenience comes at a price: extensive collection of location data and dependence on a constant internet connection. Organic Maps is a free, open-source app (FOSS) that takes a completely different approach. Here, you can navigate without being tracked and without being tied to an internet connection.

Unlike Google Maps, which is neither open source nor particularly privacy-friendly, Organic Maps is built on open source and created by a community. The source code is openly available, which means that independent developers can review and improve the app. Most importantly, Organic Maps does not contain any tracker features – it does not collect your personal information or location data at all.

The app also has no ads or hidden data collection services running in the background. You don’t need to log in or give away any information – privacy is a core principle. Thanks to the open code, users can trust that there are no ulterior motives; it’s all about providing maps and navigation, nothing else.

Works completely offline – everywhere

One of the biggest advantages of Organic Maps is that the app works completely offline. All map data is based on the community project OpenStreetMap, which covers the entire world. You choose which maps (countries or regions) you want to download to your phone, and then you can navigate freely without the internet. Unlike Google and Apple Maps – whose offline features are very limited and lack full search or navigation functionality outside of the network – Organic Maps offers 100% of its features without a connection.

Searching for addresses and places, viewing points of interest, and turn-by-turn voice guidance work just as well offline as online. This means you can use the app in airplane mode, abroad without roaming, or far out in the wilderness.

Sample screenshots from Organic Maps: An offline map of some nature reserves, navigation in night mode, menu for downloading maps, and menu for map layers.

Since Organic Maps is based on OpenStreetMap, you also get very detailed maps. The community updates the maps continuously with everything from new bike paths to small forest trails. For example, a technology writer noted that he has yet to encounter a hiking trail missing from Organic Maps – the app often includes information that large map services miss. This makes the app particularly popular among outdoor enthusiasts, but everyone benefits: even regular roads, addresses, and points of interest are extensively covered thanks to OpenStreetMap. In short, offline maps give you the peace of mind that your map is always available, no matter where you are.

Battery-efficient navigation

Offline navigation not only gives you freedom from the mobile network – it also saves battery power. Organic Maps is remarkably energy efficient and uses minimal power compared to many other navigation services. Without constant data traffic, background tracking, or heavy advertising, the app can focus on what it’s supposed to do and nothing more. One reviewer says he used the app during several days of hiking without having to charge his phone.

The developers themselves claim that you can go on a week-long trip on a single charge with Organic Maps as your guide. For those who travel frequently or are simply tired of GPS draining their battery, this is a game-changer. Its energy efficiency also makes Organic Maps well suited for older or simpler smartphones that may have weaker batteries – the app is lightweight and resource-efficient.

Available for Android and iPhone

Despite its different philosophy, Organic Maps is as easy to get and use as any popular app. The app is available to download for free for both Android and iOS – you can find it in the Google Play Store and Apple’s App Store. For those who use completely Google-free phones (such as GrapheneOS on Matrix mobile), it is also available through alternative open app stores such as F-Droid.

The interface is intuitive and similar to other map apps, so the barrier to switching is low. You can search for addresses or businesses, bookmark your favorite places, and get turn-by-turn voice directions. All these features are available offline after you download the maps for the area you need. In short, you get a full-featured map service on your phone – but without the surveillance.

Pre-installed on the Matrix phone

Organic Maps has become a staple in privacy-focused circles. Teuton Systems pre-installs the app on its Matrix phone – a security-focused Android smartphone based on GrapheneOS – as part of a Google-free ecosystem. This gives users a map service that respects their privacy right from the start. But even if you don’t own a Matrix mobile phone, you can still easily enjoy the benefits. Replacing Google Maps with Organic Maps on your current phone is a step towards a more privacy-secure everyday life, without losing any functionality. The app is completely free and open for everyone to try.

Organic Maps exemplifies how free and open software can give us, the average user, more control. You don’t have to worry about being tracked when you look up an address or navigate to a destination, and you can trust that the app only does what it says it does. The combination of open source code, offline capability, and top-notch privacy has earned the app excellent recommendations in tech media.

For those who value their privacy – or just want a reliable map app that works everywhere – Organic Maps is an inspiring alternative that shows it’s possible to navigate freely without giving up your privacy!

 

Features of Organic Maps

The ultimate app for travelers, tourists, hikers and cyclists:

  • Detailed offline maps with locations not found on other maps, thanks to OpenStreetMap
  • Bike paths, hiking trails and walking routes
  • Contour lines, elevation profiles, peaks and slopes
  • Turn-by-turn navigation for walking, cycling and driving, with voice guidance and Android Auto support
  • Quick offline map search
  • Export and import bookmarks in KML/KMZ format, import GPX (see the sketch below this list)
  • Dark mode to protect your eyes
  • Countries and regions do not take up much space
  • Free and open source
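
As a small illustration of how open those bookmark formats are, the sketch below reads waypoints out of a GPX file using nothing but Python’s standard library. The file name is a made-up example; the namespace URL is the published GPX 1.1 one.

    # Minimal sketch: list the named waypoints in a GPX file, such as one
    # you might import into Organic Maps. GPX is plain XML.
    import xml.etree.ElementTree as ET

    NS = {"gpx": "http://www.topografix.com/GPX/1/1"}

    def read_waypoints(path: str) -> list[tuple[str, float, float]]:
        root = ET.parse(path).getroot()
        points = []
        for wpt in root.findall("gpx:wpt", NS):
            name = wpt.findtext("gpx:name", default="(unnamed)", namespaces=NS)
            points.append((name, float(wpt.get("lat")), float(wpt.get("lon"))))
        return points

    if __name__ == "__main__":
        for name, lat, lon in read_waypoints("hiking-trip.gpx"):
            print(f"{name}: {lat:.5f}, {lon:.5f}")

Because the format is an open standard, your bookmarks are never locked into any one app.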

Our independent journalism needs your support!
Consider a donation.

You can donate any amount of your choosing, one-time payment or even monthly.
We appreciate all of your donations to keep us alive and running.
