
Polaris of Enlightenment

Why I’m a techno-optimist

Reclaiming privacy in a world that wants us to give up.

Published 15 January 2025
– By Naomi Brockwell
6 minute read

It feels like every device in our lives is spying on us. Vacuum cleaners send photos and audio from our bedrooms to China. Televisions take screenshots of what we’re watching every few seconds and share that data with third parties. Social media algorithms analyze our every click and scroll. And governments leverage these tools to watch us more closely than ever before.

It’s easy to feel pessimistic—even hopeless—about the future of privacy in a world so intertwined with technology. If you only watch the first half of our videos, you might think we hate tech.

“Tech is spying on us”. “Tech is tracking our location”. “Tech is allowing governments and corporations to overreach into our lives”.

But actually, I’m a techno-optimist.

If you watch the second half of our videos, you’ll hear us say things like, “This is the tech that will protect us”. “Here’s the tech that empowers us”. “Here’s how to use technology to reclaim our digital freedoms”.

I recently put out a video exploring techno-optimism, and I was shocked by the responses. So many people were quick to throw in the towel. Comments like: “I don’t share your optimism—privacy is dead”. “Don’t even try, it’s pointless”. Another privacy advocate who makes video content, The Hated One, noticed this trend on his videos too. There’s been an uptick in people telling others to give up on privacy altogether.

Honestly, it feels like a psyop. Who benefits from us giving up? The answer is obvious: only the people surveilling us. Maybe the psyop has been so effective it’s taken on a life of its own. Many people are now willingly complicit, fueling the narrative and spreading defeatism. This attitude is toxic, and it has to stop. If you’ve already given up, we don’t stand a chance. The privacy battle is ultimately about human rights and freedom. Giving up isn’t an option.

But more importantly, the idea that privacy is hopeless couldn’t be further from the truth. We have every reason to feel energized and excited. For the first time, we have both the technology and the cultural momentum to reclaim our privacy. The solution to surveillance isn’t throwing out our devices—it’s embracing the incredible privacy tech already available. The tools we need are here. We need to use them, build more, and spread the word. We need to lean into this fight.

I’m a techno-optimist because I believe we have the power to create a better future. In this newsletter, I’ll show you privacy tools you can already start using today, and highlight groundbreaking advancements in our near future.

Tech is neutral—it’s how we use it that matters

Many people have been tricked into thinking that tech itself is the problem. I see it in the comments on our videos. Whenever we share privacy solutions, someone always says, “If you want privacy, you have to throw out your digital devices”.

But that’s not true. You don’t have to throw out your devices to reclaim your privacy. The idea that technology and privacy can’t coexist benefits the very corporations and governments surveilling us. It keeps us from even trying to protect ourselves.

The truth is, technology is neutral. It can be used for surveillance, but it can also be used for privacy. For decades, it’s been hijacked primarily for surveillance. But now we have cutting-edge tools to fight back. We have encryption technology that empowers us to reclaim our digital freedoms.

How privacy tech is empowering people worldwide

Privacy tech is already changing lives all over the world. Here are a few powerful examples:

  • Iran: During widespread protests against oppressive laws, the government implemented internet shutdowns and banned platforms like Signal and VPNs. Signal stepped up, publishing instructions for setting up proxy servers, which allowed protestors to coordinate and share uncensored information despite the repression. These tools let individuals reclaim their freedom without waiting for anyone’s permission. Knowing that the ability to stay connected to the outside world remains in our own hands is incredibly empowering.
  • Mexico: Journalists face extreme danger from both the government and cartels. There’s an entire Wikipedia page dedicated to journalists who have been killed in Mexico for exposing corruption and violence. Privacy tools like encrypted messaging and private data storage help protect those doing important work—like investigative journalism—and their sources from harm.
  • China: The “Great Firewall” blocks platforms like Google, Instagram, and Twitter. Citizens rely on tools like VPNs, Tor, and encrypted apps to bypass censorship and stay informed. Privacy tech has become a vital form of resistance and hope for millions.

All over the world, people are using privacy tech to reclaim freedom and resist oppression.

Privacy tools you can start using today

Here are some tools you can incorporate into your life:

  • Messaging: Use end-to-end encrypted apps to ensure only you and the recipient can read your messages.
  • Browsers: Privacy-focused browsers block tracking pixels, scripts, and bounce tracking to protect you online.
  • Search Engines: Switch to alternatives that don’t log or track your searches.
  • Email: Try encrypted email services to keep your communications private.
  • Calendars: Use privacy-respecting calendars that offer end-to-end encryption.
  • Media: Explore apps that let you consume content without being tracked, or decentralized platforms that avoid gatekeeping.
  • VPNs and Tor: Hide your IP address and anonymize your activities with these essential tools.

We give examples of each in our latest video and have dedicated guides exploring each topic so you can decide which option is best for you.

The future of privacy tech

The future of privacy tech is even more exciting. Here’s what’s on the horizon:

  • Homomorphic Encryption: This allows data to be processed without ever being exposed. It could transform fields like healthcare and finance by enabling services to generate insights without accessing private data.
  • Decentralized Identity: These systems let individuals store and manage their credentials without relying on centralized databases, reducing risks of hacking and misuse. They also give users more granular control over what information they share.
  • Zero-Knowledge Proofs: These cryptographic methods let you prove something is true—like your age or identity—without sharing the underlying data.
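These ideas can feel abstract, so here is a toy sketch of the first one. The snippet below implements a minimal Paillier cryptosystem, a classic additively homomorphic scheme, using deliberately tiny, insecure parameters chosen purely for illustration: two encrypted numbers can be combined and decrypted to their sum without the plaintexts ever being exposed.

```python
import math
import random

# Toy Paillier cryptosystem. The primes are tiny and INSECURE;
# this only illustrates the additive-homomorphic property.
p, q = 101, 113
n = p * q          # public modulus
n2 = n * n
g = n + 1          # standard choice of generator
lam = math.lcm(p - 1, q - 1)  # private key

def L(x):
    return (x - 1) // n

# Precomputed decryption constant
mu = pow(L(pow(g, lam, n2)), -1, n)

def encrypt(m):
    """Encrypt integer m (0 <= m < n) with fresh randomness r."""
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return (L(pow(c, lam, n2)) * mu) % n

a, b = 42, 58
ca, cb = encrypt(a), encrypt(b)
# Multiplying ciphertexts adds the hidden plaintexts:
assert decrypt((ca * cb) % n2) == (a + b) % n  # 100
```

Real homomorphic systems (and the zero-knowledge proofs mentioned above) use vastly larger parameters and far more sophisticated math, but the core promise is the same: useful computation on data that is never revealed.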

The rise of privacy culture

It’s not just technology that’s advancing—our culture around privacy is shifting. For years, surveillance was seen as inevitable. But high-profile breaches, government overreach, and whistleblowers have opened the public’s eyes. People are voting with their wallets, choosing privacy-respecting services, and demanding accountability.

We’ve seen this firsthand. For example, our video series about car privacy has been seen by millions of people who are now waking up to the invasive reality of modern vehicles. Imagine if these millions started asking car dealerships tough questions about privacy policies before making a purchase. That’s how we move the needle.

The future is bright, and in our hands

So yes, I’m a techno-optimist.

We’re far from powerless. For the first time, we have both the technology and the cultural momentum to take back our privacy. But we’ll only succeed if we stop demonizing technology and start harnessing the privacy tech at our disposal to break free from surveillance.

At the end of the day, technology is just a tool. It’s up to us to decide how to use it. Let’s choose a future where privacy thrives because of innovation—not in spite of it.

Thanks to the most incredible year we’ve seen at NBTV, more people than ever are joining the fight for privacy, and we’re all shifting culture. Next year is going to be even better.

Here’s to an incredible 2025. Let’s make it count!

 

Yours in privacy,
Naomi

Naomi Brockwell is a privacy advocate and professional speaker, MC, interviewer, producer, and podcaster specialising in blockchain, cryptocurrency, and economics. She runs the NBTV channel on Rumble.

TNT is truly independent!

We don’t have a billionaire owner, and our unique reader-funded model keeps us free from political or corporate influence. This means we can fearlessly report the facts and shine a light on the misdeeds of those in power.

Consider a donation to keep our independent journalism running…

Spilling the Tea: KYC is a liability, not a safety feature

Published today 8:12
– By Naomi Brockwell
5 minute read

This week, a devastating breach exposed tens of thousands of users of Tea, a dating safety app that asked women to verify their identity with selfies, government IDs, and location data.

Over 72,000 images were found in a publicly accessible Firebase database. No authentication required. 4chan users discovered the open bucket and immediately began downloading and sharing the contents: face scans, driver’s licenses, and private messages. Some users have already used the leaked IP addresses to build and circulate maps that attempt to track and trace the women in those files.

Tea confirmed the breach, claiming the data came from a legacy system. But that doesn’t change the core issue:
This data never should have been collected in the first place.

What’s marketed as safety often doubles as surveillance

Tea is just one example of a broader trend: platforms claiming to protect you while quietly collecting as much data as possible. “Verification” is marketed as a security feature, something you do for your own good. The app was pitched as a tool to help women vet potential dates, avoid abuse, and stay safe. But in practice, access required handing over deeply personal data. Face scans, government-issued IDs, and real-time location information became the price of entry.

This is how surveillance becomes palatable. The language of “just for verification” hides the reality. Users are given no transparency about where their data is stored, how long it is kept, or who can access it. These aren’t neutral design choices. They are calculated decisions that prioritize corporate protection, not user safety.

We need to talk about KYC

What happened with Tea reflects a much bigger issue. Identification is quietly becoming the default requirement for access to the internet. No ID? No entry. No selfie? No account. This is how KYC culture has expanded, moving far beyond finance into social platforms, community forums, and dating apps.

We’ve been taught to believe that identity verification equals safety. But time and again, that promise falls apart. Centralized databases get breached, IP addresses are logged and weaponized, and photos meant for internal review end up archived on the dark web.

If we want a safer internet, we need to stop equating surveillance with security. The real path to safety is minimizing what gets collected in the first place. That means embracing pseudonyms, decentralizing data, and building systems that do not rely on a single gatekeeper to decide who gets to participate.

“Your data will be deleted”. Yeah right.

Tea’s privacy policy stated in black and white:

Selfies and government ID images “will be deleted immediately following the completion of the verification process”.

Yet here we are. Over 72,000 images are now circulating online, scraped from an open Firebase bucket. That’s a direct contradiction of what users were told. And it’s not an isolated incident.

This kind of betrayal is becoming disturbingly common. Companies collect high-risk personal data and reassure users with vague promises:

“We only keep it temporarily”.
“We delete it right after verification”.
“It’s stored securely”.

These phrases are repeated often, to make us feel better about handing over our most private information. But there’s rarely any oversight, and almost never any enforcement.

At TSA checkpoints in the U.S., travelers are now being asked to scan their faces. The official line? The images are immediately deleted. But again, how do we know? Who verifies that? The public isn’t given access to the systems handling those scans. There’s no independent audit, no transparency, and we’re asked to trust blindly.

The truth is, we usually don’t know where our data goes. “Just for verification” has become an excuse for massive data collection. And even if a company intends to delete your data, it still exists long enough to be copied, leaked, or stolen.

Temporary storage is still storage.

This breach shows how fragile those assurances really are. Tea said the right things on paper, but in practice, their database was completely unprotected. That’s the reality behind most “privacy policies”: vague assurances, no independent oversight, and no consequences when those promises are broken.
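For readers curious what “completely unprotected” means in practice: Firebase databases are governed by a security-rules file, and a permissive configuration leaves the data readable by anyone on the internet. A minimal locked-down rules file (illustrative only, not Tea’s actual configuration) looks something like this:

```json
{
  "rules": {
    ".read": false,
    ".write": false,
    "users": {
      "$uid": {
        ".read": "auth != null && auth.uid === $uid",
        ".write": "auth != null && auth.uid === $uid"
      }
    }
  }
}
```

With top-level reads and writes denied and each record accessible only to its authenticated owner, an anonymous visitor bulk-downloading the database, as happened here, would simply be refused.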

KYC pipelines are a perfect storm of risk. They collect extremely sensitive data. They normalize giving it away. And they operate behind a curtain of unverifiable claims.

It’s time to stop accepting “don’t worry, it’s deleted” as a substitute for actual security. If your platform requires storing sensitive personal data, that data becomes a liability the moment it is collected.

The safest database is the one that never existed.

A delicate cultural moment

This story has touched a nerve. Tea was already controversial, with critics arguing it enabled anonymous accusations and blurred the line between caution and public shaming. Some see the breach as ironic, even deserved.

But that is not the lesson we should take from this.

The breach revealed how easily identity exposure has become normalized, how vulnerable we all are when ID verification is treated as the default, and how quickly sensitive data becomes ammunition once it slips out of the hands of those who collected it.

It’s a reminder that we are all vulnerable in a world that demands ID verification just to participate in daily life.

This isn’t just about one app’s failure. It’s a reflection of the dangerous norms we’ve accepted.

Takeaways

  • KYC is a liability, not a security measure. The more personal data a platform holds, the more dangerous a breach becomes.
  • Normalizing ID collection puts people at risk. The existence of a database is always a risk, no matter how noble the intent.
  • We can support victims of surveillance without endorsing every platform they use. Privacy isn’t conditional on whether we like someone or not.
  • It’s time to build tools that don’t require identity. True safety comes from architectures that protect by design.

Let this be a wake-up call. Not just for the companies building these tools, but for all of us using them. Think twice before handing over your ID or revealing your IP address to a platform you use.

 

Yours in privacy,
Naomi


Zuckerberg: Skipping AI glasses puts you at a “cognitive disadvantage”

The future of AI

Published yesterday 13:41
– By Editorial Staff
"The ideal form factor for AI, because you can let an AI see what you see throughout the day, hear what you hear, and talk to you", believes the Meta CEO.
2 minute read

Meta CEO Mark Zuckerberg warns that people without AI glasses will find themselves at a significant mental “disadvantage” in the future. During the company’s quarterly report, he shared his vision of glasses as the primary way to interact with artificial intelligence.

On Thursday, Meta released its quarterly report. In a call directed at investors, CEO Mark Zuckerberg spoke about the company’s investment in smart glasses and warned about the consequences of staying outside this development, TechCrunch reports.

“I continue to think that glasses are basically going to be the ideal form factor for AI, because you can let an AI see what you see throughout the day, hear what you hear, and talk to you,” Zuckerberg said during the investor call.

By adding screens, even more value can be unlocked, he argued, whether it involves holographic fields of vision or smaller displays in everyday AI glasses.

“I think in the future, if you don’t have glasses that have AI – or some way to interact with AI – I think you’re … probably going to be at a pretty significant cognitive disadvantage compared to other people,” he added.

Unexpected success

Meta has focused on “smart” glasses like the Ray-Ban Meta and Oakley Meta models. The glasses allow users to listen to music, take photos and ask questions to Meta AI. The products have become a surprising success – revenue from Ray-Ban Meta glasses more than tripled compared to the previous year.

However, the Reality Labs division has been costly. Meta reported $4.53 billion in operating losses for the second quarter, and since 2020, the unit has lost nearly $70 billion.

Competition is growing. OpenAI acquired Jony Ive’s startup company this spring for $6.5 billion to develop AI devices, while other companies are exploring AI brooches and pendants.

However, Zuckerberg is convinced about the future of glasses and connects them to the Metaverse vision.

“The other thing that’s awesome about glasses is they are going to be the ideal way to blend the physical and digital worlds together,” he concluded.

Meta has previously been criticized for contributing to the growing surveillance society, and for dismissing health concerns about radiation from wireless technology.

Samsung and Tesla sign billion-dollar deal for AI chip manufacturing

The future of AI

Published 31 July 2025
– By Editorial Staff
Samsung’s large chip factory under construction in Taylor, Texas, USA.
2 minute read

South Korean tech giant Samsung has entered into a comprehensive agreement with Tesla to manufacture next-generation AI chips. The contract, which extends until 2033, is worth $16.5 billion and means Samsung will dedicate its new Texas-based factory to producing Tesla’s AI6 chips.

Samsung receives a significant boost for its semiconductor manufacturing through the new partnership with Tesla. The electric vehicle manufacturer has chosen to place production of its advanced AI6 chips at Samsung’s facility in Texas, in a move that could change competitive dynamics within the semiconductor industry, TechCrunch reports.

“The strategic importance of this is hard to overstate,” wrote Tesla founder Elon Musk on X when the deal was announced.

The agreement represents an important milestone for Samsung, which has previously struggled to attract and retain major customers for its chip manufacturing. According to Musk, Tesla may end up spending significantly more than the original $16.5 billion on Samsung chips.

“Actual output is likely to be several times higher,” he explained in a later post.

Tesla’s chip strategy takes shape

The AI6 chips form the core of Tesla’s ambition to evolve from car manufacturer to an AI and robotics company. The new generation chip is designed as an all-around solution that can be used both for the company’s Full Self-Driving system and for the humanoid robots of the Optimus model that Tesla is developing, as well as for high-performance AI training in data centers.

Tesla is working in parallel with Taiwanese chip manufacturer TSMC for production of AI5 chips, whose design was recently completed. These will initially be manufactured at TSMC’s facility in Taiwan and later also in Arizona. Samsung already produces Tesla’s AI4 chips.

Since 2019, Tesla has developed its own custom chips after leaving Nvidia’s Drive platform. The first self-developed chipset, known as FSD Computer or Hardware 3, was launched the same year and installed in all of the company’s electric vehicles.

Musk promises personal involvement

In an unusual turn, Samsung has agreed to let Tesla assist in maximizing manufacturing efficiency at the Texas factory. Musk has promised to be personally present to accelerate progress.

“This is a critical point, as I will walk the line personally to accelerate the pace of progress. And the fab is conveniently located not far from my house,” he wrote.

The strategic partnership could give Samsung the stable customer volume the company needs to compete with industry leader TSMC, while Tesla secures access to advanced chip manufacturing for its growing AI ambitions.

Women’s app hacked – thousands of private images leaked

Published 29 July 2025
– By Editorial Staff
1 minute read

An app that helps women identify problematic men became a target for hackers. Over 70,000 images, including selfies and driver’s licenses, were leaked to 4chan.

The dating app Tea, which allows women to warn each other about “red flags” in men, suffered a major data breach last week. According to 404 Media, hackers from the 4chan forum managed to access 72,000 images from the app’s database, of which 13,000 were selfies and driver’s license photos.

The app was created by software developer Sean Cook, inspired by his mother’s “terrifying” dating experiences. Tea has over four million active users and topped Apple’s App Store last week.

Careless data handling

The company stored sensitive user data on Google’s cloud service Firebase, where the information became accessible to unauthorized parties. Several cybersecurity experts have criticized the company’s methods as “careless”.

“A company should never host users’ private data on a publicly accessible server,” Grant Ho, a professor at the University of Chicago, told The Verge.

Andrew Guthrie Ferguson, law professor at George Washington University, warns that digital “whisper networks” lose control over sensitive information.

“What changes when it’s digital and recoverable and save-able and searchable is you lose control over it,” he says.

Tea has launched an investigation together with external cybersecurity companies.

Our independent journalism needs your support!
Consider a donation.

You can donate any amount of your choosing, one-time or monthly.
We appreciate all of your donations to keep us alive and running.

Don’t miss another article!

Sign up for our newsletter today!

Get uncensored news – free from industry interests and political correctness – from the Polaris of Enlightenment, every week.