What I wish I knew about privacy sooner

The hard truths no one warned me about.

Published 22 March 2025
– By Naomi Brockwell
5 minute read

I’ve been deep in the privacy world for years, but I wasn’t always this way. If I could go back, I’d grab my younger self by the shoulders and say: “Wake up. The internet is a battlefield of people fighting for your attention, and many of them definitely don’t have your best interests at heart”.

I used to think I was making my own decisions—choosing what platforms to try, what videos to watch, what to believe. I didn't realize I was part of a system designed to shape my behavior. Some just wanted to sell me things I didn't need—or even things that actively harmed me. But more importantly, some were paying to influence my thoughts, my votes, and even who I saw as the enemy.

There is a lot at stake when we lose the ability to make choices free from manipulation. When our digital exhaust—every click, every pause, every hesitation—is mined and fed into psychological experiments designed to drive behavior, our ability to think independently is undermined.

No one warned me about this. But it’s not too late—not for you. Here are the lessons I wish I had learned sooner—and the steps you can take now, before you wish you had.

1. Privacy mistakes compound over time—like a credit score, but worse

Your digital history doesn’t reset—once data is out there, it’s nearly impossible to erase.

The hard truth:

  • Companies connect everything—your new email, phone number, or payment method can be linked back to your old identity through data brokers, loyalty programs, and behavioral analysis.
  • Switching to a new device or platform doesn’t give you a blank slate—it just gives companies another data point to connect.

What to do:

  • Break the chain before it forms. Use burner emails, aliases, and virtual phone numbers; a minimal sketch of per-service aliasing follows this list.
  • Change multiple things at once. A new email won’t help if you keep the same phone number and credit card.
  • Be proactive, not reactive. Once a profile is built, you can’t undo it—so prevent unnecessary links before they happen.
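
To make the idea concrete, here is a minimal sketch of per-service aliasing, assuming you control a catch-all domain (the domain and secret below are placeholders, and dedicated alias services will do this for you): every service gets its own address, so a leaked or sold list from one of them can't be trivially joined to the rest of your identity.

```python
import hashlib
import hmac

def alias_for(service: str, secret: bytes, domain: str = "aliases.example.com") -> str:
    """Derive a reproducible, per-service email alias.

    The tag is an HMAC of the service name, so you can regenerate the
    alias later from your secret, but two aliases can't be linked to
    each other without it.
    """
    tag = hmac.new(secret, service.lower().encode(), hashlib.sha256).hexdigest()[:10]
    return f"{tag}@{domain}"

if __name__ == "__main__":
    # Placeholder secret; in practice keep one long random value private.
    secret = b"replace-with-a-long-random-secret"
    for service in ("shop-a", "newsletter-b", "forum-c"):
        print(f"{service:<14} -> {alias_for(service, secret)}")
```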

2. You’re being tracked—even when you’re not using the internet

Most people assume tracking only happens when they’re browsing, posting, or shopping—but some of the most invasive tracking happens when you’re idle. Even when you think you’re being careful, your devices continue leaking data, and websites have ways to track you that go beyond cookies.

The hard truth:

  • Your phone constantly pings cell towers, creating a movement map of your location—even if you’re not using any apps.
  • Smart devices send data home at all hours, quietly updating manufacturers without your consent.
  • Websites fingerprint you the moment you visit, using unique device characteristics to track you, even if you clear cookies or use a VPN (a rough sketch of how those characteristics combine into one identifier follows this list).
  • Your laptop and phone make hidden network requests, syncing background data you never approved.
  • Even privacy tools like incognito mode or VPNs don’t fully protect you. Websites use behavioral tracking to identify you based on how you type, scroll, or even the tilt of your phone.
  • Battery percentage, Bluetooth connections, and light sensor data can be used to re-identify you after switching networks.
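
To see why clearing cookies doesn't help, here is a rough sketch of what a fingerprinting script does, with hypothetical attribute values standing in for what the site's JavaScript would actually collect: each attribute looks innocent on its own, but combined they form an identifier that survives cookie deletion and VPNs.

```python
import hashlib
import json

def fingerprint(attributes: dict) -> str:
    """Collapse a set of browser/device attributes into one stable identifier."""
    canonical = json.dumps(attributes, sort_keys=True)
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

# Hypothetical values; real scripts gather these in the browser
# (navigator properties, screen size, fonts, canvas/WebGL probes).
device = {
    "user_agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) ...",
    "screen": "2560x1440x24",
    "timezone": "Europe/Stockholm",
    "languages": ["en-US", "sv"],
    "fonts": ["Arial", "Helvetica", "Menlo"],
    "canvas_probe": "a1b2c3d4",
}

print(fingerprint(device))
# Clearing cookies or switching IP addresses changes nothing above,
# so the identifier stays the same across visits.
```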

What to do:

  • Use a privacy-focused browser like Mullvad Browser or Brave Browser.
  • Check how unique your device fingerprint is at coveryourtracks.eff.org.
  • Monitor hidden data leaks with a reverse firewall like Little Snitch (for Mac)—you'll be shocked at how much data leaves your devices when you're not using them. The snippet after this list shows a one-off way to see which apps are talking to the internet right now.
  • Use a VPN like Mullvad to prevent network-level tracking, but don’t rely on it alone.
  • Break behavioral tracking patterns by changing your scrolling, typing, and browsing habits.
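
If you want a quick sense of this before installing anything, the snippet below takes a one-off, read-only snapshot of which processes on your machine currently hold outbound connections. It uses the psutil library and is only a rough stand-in for what a dedicated outbound firewall shows you continuously.

```python
# Snapshot of which local processes currently hold outbound connections.
# Read-only and only a moment in time; a tool like Little Snitch watches
# continuously and can block traffic.
# Requires: pip install psutil  (may need admin rights to see every process)
import psutil

for conn in psutil.net_connections(kind="inet"):
    if conn.status != psutil.CONN_ESTABLISHED or not conn.raddr:
        continue
    try:
        name = psutil.Process(conn.pid).name() if conn.pid else "unknown"
    except psutil.NoSuchProcess:
        name = "gone"
    print(f"{name:<25} -> {conn.raddr.ip}:{conn.raddr.port}")
```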

3. Your deleted data isn’t deleted—it’s just hidden from you

Deleting a file, message, or account doesn’t mean it’s gone.

The hard truth:

  • Most services just remove your access to data, not the data itself.
  • Even if you delete an email from Gmail, Google has already analyzed its contents and added what it learned to your profile.
  • Companies don’t just store data—they train AI models on it. Even if deletion were possible, what they’ve learned can’t be undone.

What to do:

  • Use services that don't collect your data in the first place. Try ProtonMail instead of Gmail, or Brave Search instead of Google Search.
  • Assume that if a company has your data, it may never be deleted—so don’t hand it over in the first place.

4. The biggest privacy mistake: Thinking privacy isn’t important because “I have nothing to hide”

Privacy isn’t about hiding—it’s about control over your own data, your own life, and your own future.

The hard truth:

  • Data collectors don’t care who you are—they collect everything. If laws change, or you become notable, your past is already logged and available to be used against you.
  • “I have nothing to hide” becomes “I wish I had hidden that.” Your past purchases, social media comments, or medical data could one day be used against you.
  • Just because you don’t feel the urgency of privacy now doesn’t mean you shouldn’t be choosing privacy-focused products. Every choice you make funds a future—you’re either supporting companies that protect people or ones that normalize surveillance. Which future are you contributing to?
  • Anonymity only works if there’s a crowd. The more people use privacy tools, the safer we all become. Even if your own safety doesn’t feel like a concern right now, your choices help protect the most vulnerable members of society by strengthening the privacy ecosystem.

What to do:

  • Support privacy-friendly companies.
  • Normalize privacy tools in your circles. The more people use them, the less suspicious they seem.
  • Act now, not when it’s too late. Privacy matters before you need it.

5. You’re never just a customer—you’re a product

Free services don’t serve you—they serve the people who pay for your data.

The hard truth:

  • When I first signed up for Gmail, I thought I was getting a free email account. In reality, I was handing over my private conversations for them to scan, profile, and sell.
  • Even paid services can sell your data. Many “premium” apps still track and monetize your activity.
  • AI assistants and smart devices extract data from you. Be intentional about the data you give them, knowing they are mining your information.

What to do:

  • Ask: “Who profits from my data?”
  • Use privacy-respecting alternatives.
  • Think twice before using free AI assistants that explicitly collect your data, or speaking near smart devices.

Final thoughts: The future isn’t written yet

Knowing what I know now, I’d tell my younger self this: you are not powerless. The tools you use, the services you fund, and the choices you make shape the world we all live in.

Take your first step toward reclaiming your privacy today. Because every action counts, and the future isn’t written yet.


Yours in privacy,
Naomi

Naomi Brockwell is a privacy advocate and professional speaker, MC, interviewer, producer, and podcaster, specialising in blockchain, cryptocurrency, and economics. She runs the NBTV channel on Rumble.


Spilling the Tea: KYC is a liability, not a safety feature

Published 2 August 2025, 8:12
– By Naomi Brockwell
5 minute read

This week, a devastating breach exposed tens of thousands of users of Tea, a dating safety app that asked women to verify their identity with selfies, government IDs, and location data.

Over 72,000 images were found in a publicly accessible Firebase database. No authentication required. 4chan users discovered the open bucket and immediately began downloading and sharing the contents: face scans, driver’s licenses, and private messages. Some users have already used the leaked IP addresses to build and circulate maps that attempt to track and trace the women in those files.
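
To be concrete about what "publicly accessible" means: Firebase Storage exposes a simple REST listing endpoint, and whether it answers without credentials depends entirely on a project's security rules. Below is a minimal, defensive sketch for checking a bucket you own yourself (the bucket name is a placeholder); if it returns a file listing with no credentials attached, anyone on the internet can fetch the same thing.

```python
# Defensive check: does a Firebase Storage bucket answer an unauthenticated
# listing request? Only point this at a project you own or administer.
import json
import urllib.error
import urllib.request

BUCKET = "your-project.appspot.com"  # placeholder: your own bucket name
url = f"https://firebasestorage.googleapis.com/v0/b/{BUCKET}/o"

try:
    with urllib.request.urlopen(url, timeout=10) as resp:
        listing = json.load(resp)
    items = listing.get("items", [])
    print(f"Bucket answered without credentials: {len(items)} objects listed")
except urllib.error.HTTPError as err:
    print(f"Unauthenticated listing rejected (HTTP {err.code}) - rules held")
```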

Tea confirmed the breach, claiming the data came from a legacy system. But that doesn’t change the core issue:
This data never should have been collected in the first place.

What’s marketed as safety often doubles as surveillance

Tea is just one example of a broader trend: platforms claiming to protect you while quietly collecting as much data as possible. “Verification” is marketed as a security feature, something you do for your own good. The app was pitched as a tool to help women vet potential dates, avoid abuse, and stay safe. But in practice, access required handing over deeply personal data. Face scans, government-issued IDs, and real-time location information became the price of entry.

This is how surveillance becomes palatable. The language of “just for verification” hides the reality. Users are given no transparency about where their data is stored, how long it is kept, or who can access it. These aren’t neutral design choices. They are calculated decisions that prioritize corporate protection, not user safety.

We need to talk about KYC

What happened with Tea reflects a much bigger issue. Identification is quietly becoming the default requirement for access to the internet. No ID? No entry. No selfie? No account. This is how KYC culture has expanded, moving far beyond finance into social platforms, community forums, and dating apps.

We’ve been taught to believe that identity verification equals safety. But time and again, that promise falls apart. Centralized databases get breached, IP addresses are logged and weaponized, and photos meant for internal review end up archived on the dark web.

If we want a safer internet, we need to stop equating surveillance with security. The real path to safety is minimizing what gets collected in the first place. That means embracing pseudonyms, decentralizing data, and building systems that do not rely on a single gatekeeper to decide who gets to participate.

“Your data will be deleted”. Yeah right.

Tea’s privacy policy stated in black and white:

Selfies and government ID images “will be deleted immediately following the completion of the verification process”.

Yet here we are. Over 72,000 images are now circulating online, scraped from an open Firebase bucket. That’s a direct contradiction of what users were told. And it’s not an isolated incident.

This kind of betrayal is becoming disturbingly common. Companies collect high-risk personal data and reassure users with vague promises:

“We only keep it temporarily”.
“We delete it right after verification”.
“It’s stored securely”.

These phrases are repeated often to make us feel better about handing over our most private information. But there's rarely any oversight, and almost never any enforcement.

At TSA checkpoints in the U.S., travelers are now being asked to scan their faces. The official line? The images are immediately deleted. But again, how do we know? Who verifies that? The public isn’t given access to the systems handling those scans. There’s no independent audit, no transparency, and we’re asked to trust blindly.

The truth is, we usually don’t know where our data goes. “Just for verification” has become an excuse for massive data collection. And even if a company intends to delete your data, it still exists long enough to be copied, leaked, or stolen.

Temporary storage is still storage.

This breach shows how fragile those assurances really are. Tea said the right things on paper, but in practice, their database was completely unprotected. That’s the reality behind most “privacy policies”: vague assurances, no independent oversight, and no consequences when those promises are broken.

KYC pipelines are a perfect storm of risk. They collect extremely sensitive data. They normalize giving it away. And they operate behind a curtain of unverifiable claims.

It’s time to stop accepting “don’t worry, it’s deleted” as a substitute for actual security. If your platform requires storing sensitive personal data, that data becomes a liability the moment it is collected.

The safest database is the one that never existed.
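
What would the alternative look like? Below is a minimal sketch of a verify-then-discard flow, under the assumption that some document check exists at all (the check and field names are placeholders, not any real vendor's API): run the check, persist a pass/fail flag and at most a salted digest, and never write the raw image anywhere.

```python
import hashlib
import secrets

def run_verification_check(id_image: bytes) -> bool:
    """Placeholder for whatever document/liveness check a platform uses."""
    return len(id_image) > 0

def verify_and_discard(id_image: bytes, salt: bytes) -> dict:
    """Verify an ID image, then persist only the minimum.

    The raw image is never written to disk or a database: what survives
    is a pass/fail flag plus a salted digest, enough to recognize a
    resubmitted image but not to reconstruct or leak the original.
    """
    passed = run_verification_check(id_image)
    digest = hashlib.sha256(salt + id_image).hexdigest()
    return {"verified": passed, "id_digest": digest}

if __name__ == "__main__":
    salt = secrets.token_bytes(16)  # per-deployment secret kept server-side
    record = verify_and_discard(b"\x89PNG...stand-in image bytes", salt)
    print(record)  # only this small record would ever be stored
```

Even this is a compromise. The strongest position is not to demand the document in the first place.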

A delicate cultural moment

This story has touched a nerve. Tea was already controversial, with critics arguing it enabled anonymous accusations and blurred the line between caution and public shaming. Some see the breach as ironic, even deserved.

But that is not the lesson we should take from this.

The breach revealed how easily identity exposure has become normalized, how vulnerable we all are when ID verification is treated as the default, and how quickly sensitive data becomes ammunition once it slips out of the hands of those who collected it.

It’s a reminder that we are all vulnerable in a world that demands ID verification just to participate in daily life.

This isn’t just about one app’s failure. It’s a reflection of the dangerous norms we’ve accepted.

Takeaways

  • KYC is a liability, not a security measure. The more personal data a platform holds, the more dangerous a breach becomes.
  • Normalizing ID collection puts people at risk. The existence of a database is always a risk, no matter how noble the intent.
  • We can support victims of surveillance without endorsing every platform they use. Privacy isn’t conditional on whether we like someone or not.
  • It’s time to build tools that don’t require identity. True safety comes from architectures that protect by design.

Let this be a wake-up call. Not just for the companies building these tools, but for all of us using them. Think twice before handing over your ID or revealing your IP address to a platform you use.


Yours in privacy,
Naomi

Naomi Brockwell is a privacy advocate and professional speaker, MC, interviewer, producer, and podcaster, specialising in blockchain, cryptocurrency, and economics. She runs the NBTV channel on Rumble.

Zuckerberg: Skipping AI glasses puts you at a “cognitive disadvantage”

The future of AI

Published 1 August 2025, 13:41
– By Editorial Staff
"The ideal form factor for AI, because you can let an AI see what you see throughout the day, hear what you hear, and talk to you", believes the Meta CEO.
2 minute read

Meta CEO Mark Zuckerberg warns that people without AI glasses will find themselves at a significant mental “disadvantage” in the future. During the company’s quarterly report, he shared his vision of glasses as the primary way to interact with artificial intelligence.

On Thursday, Meta released its quarterly report. On the call with investors, CEO Mark Zuckerberg spoke about the company's investment in smart glasses and warned about the consequences of staying out of this development, TechCrunch reports.

"I continue to think that glasses are basically going to be the ideal form factor for AI, because you can let an AI see what you see throughout the day, hear what you hear, and talk to you", Zuckerberg said during the investor call.

By adding screens, even more value can be unlocked, he argued, whether through holographic fields of view or smaller displays in everyday AI glasses.

"I think in the future, if you don't have glasses that have AI – or some way to interact with AI – I think you're … probably going to be at a pretty significant cognitive disadvantage compared to other people", he added.

Unexpected success

Meta has focused on “smart” glasses like the Ray-Ban Meta and Oakley Meta models. The glasses allow users to listen to music, take photos and ask questions to Meta AI. The products have become a surprising success – revenue from Ray-Ban Meta glasses more than tripled compared to the previous year.

However, the Reality Labs division has been costly. Meta reported $4.53 billion in operating losses for the second quarter, and since 2020, the unit has lost nearly $70 billion.

Competition is growing. OpenAI acquired Jony Ive’s startup company this spring for $6.5 billion to develop AI devices, while other companies are exploring AI brooches and pendants.

Zuckerberg, however, remains convinced of the future of glasses and ties them to the company's metaverse vision.

"The other thing that's awesome about glasses is they are going to be the ideal way to blend the physical and digital worlds together", he concluded.

Meta has previously been criticized for contributing to the expanding surveillance society and for ignoring health concerns about radiation from wireless technology.

Samsung and Tesla sign billion-dollar deal for AI chip manufacturing

The future of AI

Published 31 July 2025
– By Editorial Staff
Construction of Samsung's large chip factory in Taylor, Texas, USA.
2 minute read

South Korean tech giant Samsung has entered into a comprehensive agreement with Tesla to manufacture next-generation AI chips. The contract, which extends until 2033, is worth $16.5 billion and means Samsung will dedicate its new Texas-based factory to producing Tesla’s AI6 chips.

Samsung receives a significant boost for its semiconductor manufacturing through the new partnership with Tesla. The electric vehicle manufacturer has chosen to place production of its advanced AI6 chips at Samsung’s facility in Texas, in a move that could change competitive dynamics within the semiconductor industry, writes TechCrunch.

"The strategic importance of this is hard to overstate", wrote Tesla CEO Elon Musk on X when the deal was announced.

The agreement represents an important milestone for Samsung, which has previously struggled to attract and retain major customers for its chip manufacturing. According to Musk, Tesla may end up spending significantly more than the original $16.5 billion on Samsung chips.

"Actual output is likely to be several times higher", he explained in a later post.

Tesla’s chip strategy takes shape

The AI6 chips form the core of Tesla's ambition to evolve from a car manufacturer into an AI and robotics company. The new-generation chip is designed as an all-around solution, usable for the company's Full Self-Driving system, for the Optimus humanoid robots Tesla is developing, and for high-performance AI training in data centers.

Tesla is working in parallel with Taiwanese chip manufacturer TSMC for production of AI5 chips, whose design was recently completed. These will initially be manufactured at TSMC’s facility in Taiwan and later also in Arizona. Samsung already produces Tesla’s AI4 chips.

Since 2019, Tesla has developed its own custom chips after leaving Nvidia’s Drive platform. The first self-developed chipset, known as FSD Computer or Hardware 3, was launched the same year and installed in all of the company’s electric vehicles.

Musk promises personal involvement

In an unusual turn, Samsung has agreed to let Tesla assist in maximizing manufacturing efficiency at the Texas factory, and Musk has promised to be personally involved to accelerate progress.

"This is a critical point, as I will walk the line personally to accelerate the pace of progress. And the fab is conveniently located not far from my house", he wrote.

The strategic partnership could give Samsung the stable customer volume the company needs to compete with industry leader TSMC, while Tesla secures access to advanced chip manufacturing for its growing AI ambitions.

Women’s app hacked – thousands of private images leaked

Published 29 July 2025
– By Editorial Staff
1 minute read

An app that helps women identify problematic men became a target for hackers. Over 70,000 images, including selfies and driver’s licenses, were leaked to 4chan.

The dating app Tea, which allows women to warn each other about “red flags” in men, suffered a major data breach last week. According to 404 Media, hackers from the 4chan forum managed to access 72,000 images from the app’s database, of which 13,000 were selfies and driver’s license photos.

The app was created by software developer Sean Cook, inspired by his mother’s “terrifying” dating experiences. Tea has over four million active users and topped Apple’s App Store last week.

Careless data handling

The company stored sensitive user data on Google’s cloud service Firebase, where the information became accessible to unauthorized parties. Several cybersecurity experts have criticized the company’s methods as “careless”.

"A company should never host users' private data on a publicly accessible server", Grant Ho, a professor at the University of Chicago, told The Verge.

Andrew Guthrie Ferguson, law professor at George Washington University, warns that digital “whisper networks” lose control over sensitive information.

"What changes when it's digital and recoverable and save-able and searchable is you lose control over it", he says.

Tea has launched an investigation together with external cybersecurity companies.
