
KYC is the crime

The Coinbase hack shows how state-mandated surveillance is putting lives at risk.

Published 31 May 2025
– By Naomi Brockwell
4 minute read

Last week, Coinbase got hacked.

Hackers demanded a $20 million ransom after breaching a third-party system. They didn’t get passwords or crypto keys. But what they did get will put lives at risk:

  • Names
  • Home addresses
  • Phone numbers
  • Partial Social Security numbers
  • Identity documents
  • Bank info

That’s everything someone needs to impersonate you, blackmail you, or show up at your front door.

This isn’t hypothetical. There’s a growing wave of kidnappings and extortion targeting people with crypto exposure. Criminals are using leaked identity data to find victims and hold them hostage.

Let’s be clear: KYC doesn’t just put your data at risk. It puts people at risk.

Naturally, people are furious at any company that leaks their information.

But here’s the bigger issue:
No system is unhackable.
Every major institution, from the IRS to the State Department, has suffered breaches.
Protecting sensitive data at scale is nearly impossible.

And Coinbase didn’t want to collect this data.
Many companies don’t. It’s a massive liability.
They’re forced to, by law.

A new, dangerous normal

KYC (Know Your Customer) has become just another box to check.

Open a bank account? Upload your ID.
Use a crypto exchange? Add your selfie and utility bill.
Sign up for a payment app? Same thing.

But it wasn’t always this way.

Until the 1970s, you could walk into a bank with cash and open an account. Your financial life was private by default.

That changed with the Bank Secrecy Act of 1970, which required banks to start collecting and reporting customer activity to the government. Still, KYC wasn’t yet formalized. Each bank decided how well they needed to know someone. If you’d been a customer since childhood, or had a family member vouch for you, that was often enough.

Then came the Patriot Act, which turned KYC into law. It required every financial institution to collect, verify, and store identity documents from every customer, not just for large or suspicious transactions, but for basic access to the financial system.

From that point on, privacy wasn’t the default. It was erased.

The real-world cost

Today, everyone is surveilled all the time.
We’ve built an identity dragnet, and people are being hurt because of it.

Criminals use leaked KYC data to find and target people, and it’s not just millionaires. It’s regular people, and sometimes their parents, partners, or even children.

It’s happened in London, Buenos Aires, Dubai, Lagos, Los Angeles – all over the world.
Some are robbed. Some are held for ransom.
Some don’t survive.

These aren’t edge cases. They’re the direct result of forcing companies to collect and store sensitive personal data.

When we force companies to hoard identity data, we guarantee it will eventually fall into the wrong hands.

“There are two types of companies: those that have been hacked, and those that don’t yet know they’ve been hacked” – former Cisco CEO John Chambers

What KYC actually does

KYC turns every financial institution into a surveillance node.
It turns your personal information into a liability.

It doesn’t just increase risk. It creates it.

KYC is part of a global surveillance infrastructure. It feeds into databases governments share and query without your knowledge. It creates chokepoints where access to basic services depends on surrendering your privacy. And it deputizes companies to collect and hold sensitive data they never wanted.

If you’re trying to rob a vault, you go where the gold is.
If you’re trying to target people, you go where the data lives.

KYC creates those vaults: legally mandated, poorly secured, and irresistible to attackers.

Does it even work?

We’re told KYC is necessary to stop terrorism and money laundering.

But the top reasons banks file “suspicious activity reports” are banal, like someone withdrawing “too much” of their own money.

We’re told to accept this surveillance because it might stop a bad actor someday.

In practice, it does more to expose innocent people than to catch criminals.

KYC doesn’t prevent crime.
It creates the conditions for it.

A better path exists

We don’t have to live like this.

Better tools already exist, tools that allow verification without surveillance:

  • Zero-Knowledge Proofs (ZKPs): Prove something (like your age or citizenship) without revealing documents (see the sketch after this list)
  • Decentralized Identity (DID): You control what gets shared, and with whom
  • Homomorphic Encryption: Allows platforms to verify encrypted data without ever seeing it
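
To make the first item concrete, here is a toy Schnorr-style proof, one of the simplest zero-knowledge constructions: the prover convinces a verifier that it knows a secret without ever transmitting that secret. The parameters and function names below are illustrative assumptions, not anything an exchange actually runs; real deployments use vetted libraries, standardized groups or curves, and far richer statements (like “over 18”).

```python
# Toy Schnorr-style zero-knowledge proof (illustration only, NOT
# production crypto). The prover shows it knows a secret x with
# y = g^x mod p, without ever revealing x.
import hashlib
import secrets

p = 2**127 - 1   # a Mersenne prime, chosen only to keep the demo readable
g = 3            # toy generator; real systems use standardized groups/curves

def prove(x: int) -> tuple[int, int, int]:
    """Prover: returns (public value y, commitment t, response s)."""
    y = pow(g, x, p)
    r = secrets.randbelow(p - 1)          # one-time random nonce
    t = pow(g, r, p)                      # commitment to the nonce
    # Fiat-Shamir: derive the challenge by hashing the public transcript
    c = int.from_bytes(hashlib.sha256(f"{g}:{y}:{t}".encode()).digest(), "big")
    s = (r + c * x) % (p - 1)             # response; x itself stays hidden
    return y, t, s

def verify(y: int, t: int, s: int) -> bool:
    """Verifier: checks g^s == t * y^c (mod p) without ever learning x."""
    c = int.from_bytes(hashlib.sha256(f"{g}:{y}:{t}".encode()).digest(), "big")
    return pow(g, s, p) == (t * pow(y, c, p)) % p

secret = secrets.randbelow(p - 1)         # e.g. a credential only you hold
y, t, s = prove(secret)
assert verify(y, t, s)                    # proof checks; the secret never left
```

The arithmetic works because g^s = g^(r + c·x) = t · y^c (mod p), so the verifier learns that the prover knows x without ever receiving x. Scale the same principle up with modern proof systems and a platform can check an attribute like age without holding a copy of your ID.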

But maybe it’s time to question something deeper.
Why is centralized, government-mandated identity collection the foundation of participation in financial life?

This surveillance regime didn’t always exist. It was built.

And just because it’s now common doesn’t mean we should accept it.

We didn’t need it before. We don’t need it now.

It’s time to stop normalizing mass surveillance as a condition for basic financial access.

The system isn’t protecting us.
It’s putting us in danger.

It’s time to say what no one else will

KYC isn’t a necessary evil.
It’s the original sin of financial surveillance.

It’s not a flaw in the system.
It is the system.

And the system needs to go.

Takeaways

  • Check https://HaveIBeenPwned.com to see how much of your identity is already exposed (see the example after this list)
  • Say no to services that hoard sensitive data
  • Support better alternatives that treat privacy as a baseline, not an afterthought
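
On that first takeaway: HaveIBeenPwned’s Pwned Passwords endpoint is itself a nice example of the “collect less” architecture this article argues for. The client sends only the first five hex characters of a SHA-1 hash, so the service answers without ever learning your password (a k-anonymity scheme). A minimal sketch, assuming only the public range API (the function name is ours):

```python
# Check a password against HaveIBeenPwned's Pwned Passwords range API.
# Only the first 5 hex chars of the SHA-1 hash leave this machine, so the
# service never sees the password or even its full hash.
import hashlib
import urllib.request

def pwned_count(password: str) -> int:
    """Return how many known breaches contain this password (0 = none found)."""
    digest = hashlib.sha1(password.encode()).hexdigest().upper()
    prefix, suffix = digest[:5], digest[5:]
    req = urllib.request.Request(
        f"https://api.pwnedpasswords.com/range/{prefix}",
        headers={"User-Agent": "pwned-check-example"},
    )
    with urllib.request.urlopen(req) as resp:
        body = resp.read().decode()
    # Each response line is "HASH_SUFFIX:COUNT"; match ours locally.
    for line in body.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)
    return 0

if __name__ == "__main__":
    print(pwned_count("password123"))  # a notoriously breached example
```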

Because safety doesn’t come from handing over more information.

It comes from building systems that never need it in the first place.

 

Yours in privacy,
Naomi

Naomi Brockwell is a privacy advocate and professional speaker, MC, interviewer, producer, and podcaster specialising in blockchain, cryptocurrency, and economics. She runs the NBTV channel on Rumble.


Spilling the Tea: KYC is a liability, not a safety feature

Published yesterday 8:12
– By Naomi Brockwell
5 minute read

This week, a devastating breach exposed tens of thousands of users of Tea, a dating safety app that asked women to verify their identity with selfies, government IDs, and location data.

Over 72,000 images were found in a publicly accessible Firebase database. No authentication required. 4chan users discovered the open bucket and immediately began downloading and sharing the contents: face scans, driver’s licenses, and private messages. Some users have already used the leaked IP addresses to build and circulate maps that attempt to track and trace the women in those files.

Tea confirmed the breach, claiming the data came from a legacy system. But that doesn’t change the core issue:
This data never should have been collected in the first place.

What’s marketed as safety often doubles as surveillance

Tea is just one example of a broader trend: platforms claiming to protect you while quietly collecting as much data as possible. “Verification” is marketed as a security feature, something you do for your own good. The app was pitched as a tool to help women vet potential dates, avoid abuse, and stay safe. But in practice, access required handing over deeply personal data. Face scans, government-issued IDs, and real-time location information became the price of entry.

This is how surveillance becomes palatable. The language of “just for verification” hides the reality. Users are given no transparency about where their data is stored, how long it is kept, or who can access it. These aren’t neutral design choices. They are calculated decisions that prioritize corporate protection, not user safety.

We need to talk about KYC

What happened with Tea reflects a much bigger issue. Identification is quietly becoming the default requirement for access to the internet. No ID? No entry. No selfie? No account. This is how KYC culture has expanded, moving far beyond finance into social platforms, community forums, and dating apps.

We’ve been taught to believe that identity verification equals safety. But time and again, that promise falls apart. Centralized databases get breached, IP addresses are logged and weaponized, and photos meant for internal review end up archived on the dark web.

If we want a safer internet, we need to stop equating surveillance with security. The real path to safety is minimizing what gets collected in the first place. That means embracing pseudonyms, decentralizing data, and building systems that do not rely on a single gatekeeper to decide who gets to participate.

“Your data will be deleted”. Yeah right.

Tea’s privacy policy stated in black and white:

Selfies and government ID images “will be deleted immediately following the completion of the verification process”.

Yet here we are. Over 72,000 images are now circulating online, scraped from an open Firebase bucket. That’s a direct contradiction of what users were told. And it’s not an isolated incident.

This kind of betrayal is becoming disturbingly common. Companies collect high-risk personal data and reassure users with vague promises:

“We only keep it temporarily”.
“We delete it right after verification”.
“It’s stored securely”.

These phrases are repeated often, to make us feel better about handing over our most private information. But there’s rarely any oversight, and almost never any enforcement.

At TSA checkpoints in the U.S., travelers are now being asked to scan their faces. The official line? The images are immediately deleted. But again, how do we know? Who verifies that? The public isn’t given access to the systems handling those scans. There’s no independent audit, no transparency, and we’re asked to trust blindly.

The truth is, we usually don’t know where our data goes. “Just for verification” has become an excuse for massive data collection. And even if a company intends to delete your data, it still exists long enough to be copied, leaked, or stolen.

Temporary storage is still storage.

This breach shows how fragile those assurances really are. Tea said the right things on paper, but in practice, their database was completely unprotected. That’s the reality behind most “privacy policies”: vague assurances, no independent oversight, and no consequences when those promises are broken.

KYC pipelines are a perfect storm of risk. They collect extremely sensitive data. They normalize giving it away. And they operate behind a curtain of unverifiable claims.

It’s time to stop accepting “don’t worry, it’s deleted” as a substitute for actual security. If your platform requires storing sensitive personal data, that data becomes a liability the moment it is collected.

The safest database is the one that never existed.

A delicate cultural moment

This story has touched a nerve. Tea was already controversial, with critics arguing it enabled anonymous accusations and blurred the line between caution and public shaming. Some see the breach as ironic, even deserved.

But that is not the lesson we should take from this.

The breach revealed how easily identity exposure has become normalized, how vulnerable we all are when ID verification is treated as the default, and how quickly sensitive data becomes ammunition once it slips out of the hands of those who collected it.

It’s a reminder that we are all vulnerable in a world that demands ID verification just to participate in daily life.

This isn’t just about one app’s failure. It’s a reflection of the dangerous norms we’ve accepted.

Takeaways

  • KYC is a liability, not a security measure. The more personal data a platform holds, the more dangerous a breach becomes.
  • Normalizing ID collection puts people at risk. The existence of a database is always a risk, no matter how noble the intent.
  • We can support victims of surveillance without endorsing every platform they use. Privacy isn’t conditional on whether we like someone or not.
  • It’s time to build tools that don’t require identity. True safety comes from architectures that protect by design.

Let this be a wake-up call. Not just for the companies building these tools, but for all of us using them. Think twice before handing over your ID or revealing your IP address to a platform you use.

 

Yours in privacy,
Naomi

Naomi Brockwell is a privacy advocate and professional speaker, MC, interviewer, producer, and podcaster specialising in blockchain, cryptocurrency, and economics. She runs the NBTV channel on Rumble.

Zuckerberg: Skipping AI glasses puts you at a “cognitive disadvantage”

The future of AI

Published 1 August 2025
– By Editorial Staff
"The ideal form factor for AI, because you can let an AI see what you see throughout the day, hear what you hear, and talk to you", believes the Meta CEO.
2 minute read

Meta CEO Mark Zuckerberg warns that people without AI glasses will find themselves at a significant mental “disadvantage” in the future. On the company’s quarterly earnings call, he shared his vision of glasses as the primary way to interact with artificial intelligence.

On Thursday, Meta released its quarterly report. On the accompanying investor call, CEO Mark Zuckerberg spoke about the company’s investment in smart glasses and warned about the consequences of opting out of this development, reports TechCrunch.

— I continue to think that glasses are basically going to be the ideal form factor for AI, because you can let an AI see what you see throughout the day, hear what you hear, and talk to you, Zuckerberg said during the investor call.

Adding screens unlocks even more value, he argued, whether through holographic fields of view or smaller displays in everyday AI glasses.

— I think in the future, if you don’t have glasses that have AI – or some way to interact with AI – I think you’re … probably going to be at a pretty significant cognitive disadvantage compared to other people, he added.

Unexpected success

Meta has focused on “smart” glasses like the Ray-Ban Meta and Oakley Meta models. The glasses allow users to listen to music, take photos and ask questions to Meta AI. The products have become a surprising success – revenue from Ray-Ban Meta glasses more than tripled compared to the previous year.

However, the Reality Labs division has been costly. Meta reported $4.53 billion in operating losses for the second quarter, and since 2020, the unit has lost nearly $70 billion.

Competition is growing. OpenAI acquired Jony Ive’s startup company this spring for $6.5 billion to develop AI devices, while other companies are exploring AI brooches and pendants.

However, Zuckerberg remains convinced about the future of glasses, tying them to his metaverse vision.

— The other thing that’s awesome about glasses is they are going to be the ideal way to blend the physical and digital worlds together, he concluded.

Meta has previously drawn criticism for contributing to the growing surveillance society, and for ignoring health concerns about radiation from wireless technology.

Samsung and Tesla sign billion-dollar deal for AI chip manufacturing

The future of AI

Published 31 July 2025
– By Editorial Staff
Image: construction of Samsung’s large chip factory in Taylor, Texas, USA.
2 minute read

South Korean tech giant Samsung has entered into a comprehensive agreement with Tesla to manufacture next-generation AI chips. The contract, which extends until 2033, is worth $16.5 billion and means Samsung will dedicate its new Texas-based factory to producing Tesla’s AI6 chips.

Samsung receives a significant boost for its semiconductor manufacturing through the new partnership with Tesla. The electric vehicle manufacturer has chosen to place production of its advanced AI6 chips at Samsung’s facility in Texas, in a move that could change competitive dynamics within the semiconductor industry, writes TechCrunch.

— The strategic importance of this is hard to overstate, wrote Tesla founder Elon Musk on X when the deal was announced.

The agreement represents an important milestone for Samsung, which has previously struggled to attract and retain major customers for its chip manufacturing. According to Musk, Tesla may end up spending significantly more than the original $16.5 billion on Samsung chips.

— Actual output is likely to be several times higher, he explained in a later post.

Tesla’s chip strategy takes shape

The AI6 chips form the core of Tesla’s ambition to evolve from a car manufacturer into an AI and robotics company. The new-generation chip is designed as an all-round solution that can be used for the company’s Full Self-Driving system, for the Optimus humanoid robots Tesla is developing, and for high-performance AI training in data centers.

Tesla is working in parallel with Taiwanese chip manufacturer TSMC for production of AI5 chips, whose design was recently completed. These will initially be manufactured at TSMC’s facility in Taiwan and later also in Arizona. Samsung already produces Tesla’s AI4 chips.

Since 2019, Tesla has developed its own custom chips after leaving Nvidia’s Drive platform. The first self-developed chipset, known as FSD Computer or Hardware 3, was launched the same year and installed in all of the company’s electric vehicles.

Musk promises personal involvement

In an unusual turn, Samsung has agreed to let Tesla assist in maximizing manufacturing efficiency at the Texas factory. Musk has promised to be personally on site to accelerate progress.

— This is a critical point, as I will walk the line personally to accelerate the pace of progress. And the fab is conveniently located not far from my house, he wrote.

The strategic partnership could give Samsung the stable customer volume the company needs to compete with industry leader TSMC, while Tesla secures access to advanced chip manufacturing for its growing AI ambitions.

Women’s app hacked – thousands of private images leaked

Published 29 July 2025
– By Editorial Staff
1 minute read

An app that helps women identify problematic men became a target for hackers. Over 70,000 images, including selfies and driver’s licenses, were leaked to 4chan.

The dating app Tea, which allows women to warn each other about “red flags” in men, suffered a major data breach last week. According to 404 Media, hackers from the 4chan forum managed to access 72,000 images from the app’s database, of which 13,000 were selfies and driver’s license photos.

The app was created by software developer Sean Cook, inspired by his mother’s “terrifying” dating experiences. Tea has over four million active users and topped Apple’s App Store last week.

Careless data handling

The company stored sensitive user data on Google’s cloud service Firebase, where the information became accessible to unauthorized parties. Several cybersecurity experts have criticized the company’s methods as “careless”.

— A company should never host users’ private data on a publicly accessible server, says Grant Ho, professor at the University of Chicago, to The Verge.

Andrew Guthrie Ferguson, law professor at George Washington University, warns that digital “whisper networks” lose control over sensitive information.

— What changes when it’s digital and recoverable and save-able and searchable is you lose control over it, he says.

Tea has launched an investigation together with external cybersecurity companies.
