KYC is the crime

The Coinbase hack shows how state-mandated surveillance is putting lives at risk.

Published 31 May 2025
– By Naomi Brockwell
4 minute read

Last week, Coinbase got hacked.

Hackers demanded a $20 million ransom after breaching a third-party system. They didn’t get passwords or crypto keys. But what they did get will put lives at risk:

  • Names
  • Home addresses
  • Phone numbers
  • Partial Social Security numbers
  • Identity documents
  • Bank info

That’s everything someone needs to impersonate you, blackmail you, or show up at your front door.

This isn’t hypothetical. There’s a growing wave of kidnappings and extortion targeting people with crypto exposure. Criminals are using leaked identity data to find victims and hold them hostage.

Let’s be clear: KYC doesn’t just put your data at risk. It puts people at risk.

Naturally, people are furious at any company that leaks their information.

But here’s the bigger issue:
No system is unhackable.
Every major institution, from the IRS to the State Department, has suffered breaches.
Protecting sensitive data at scale is nearly impossible.

And Coinbase didn’t want to collect this data.
Many companies don’t. It’s a massive liability.
They’re forced to, by law.

A new, dangerous normal

KYC, or Know Your Customer, has become just another box to check.

Open a bank account? Upload your ID.
Use a crypto exchange? Add your selfie and utility bill.
Sign up for a payment app? Same thing.

But it wasn’t always this way.

Until the 1970s, you could walk into a bank with cash and open an account. Your financial life was private by default.

That changed with the Bank Secrecy Act of 1970, which required banks to start collecting and reporting customer activity to the government. Still, KYC wasn’t yet formalized. Each bank decided how well they needed to know someone. If you’d been a customer since childhood, or had a family member vouch for you, that was often enough.

Then came the Patriot Act, which turned KYC into law. It required every financial institution to collect, verify, and store identity documents from every customer, not just for large or suspicious transactions, but for basic access to the financial system.

From that point on, privacy wasn’t the default. It was erased.

The real-world cost

Today, everyone is surveilled all the time.
We’ve built an identity dragnet, and people are being hurt because of it.

Criminals use leaked KYC data to find and target people, and it’s not just millionaires. It’s regular people, and sometimes their parents, partners, or even children.

It’s happened in London, Buenos Aires, Dubai, Lagos, Los Angeles, all over the world.
Some are robbed. Some are held for ransom.
Some don’t survive.

These aren’t edge cases. They’re the direct result of forcing companies to collect and store sensitive personal data.

When we force companies to hoard identity data, we guarantee it will eventually fall into the wrong hands.

“There are two types of companies: those that have been hacked, and those that don’t yet know they’ve been hacked.” – former Cisco CEO John Chambers

What KYC actually does

KYC turns every financial institution into a surveillance node.
It turns your personal information into a liability.

It doesn’t just increase risk — it creates it.

KYC is part of a global surveillance infrastructure. It feeds into databases governments share and query without your knowledge. It creates chokepoints where access to basic services depends on surrendering your privacy. And it deputizes companies to collect and hold sensitive data they never wanted.

If you’re trying to rob a vault, you go where the gold is.
If you’re trying to target people, you go where the data lives.

KYC creates those vaults: legally mandated, poorly secured, and irresistible to attackers.

Does it even work?

We’re told KYC is necessary to stop terrorism and money laundering.

But the top reasons banks file “suspicious activity reports” are banal, like someone withdrawing “too much” of their own money.

We’re told to accept this surveillance because it might stop a bad actor someday.

In practice, it does more to expose innocent people than to catch criminals.

KYC doesn’t prevent crime.
It creates the conditions for it.

A better path exists

We don’t have to live like this.

Better tools already exist, tools that allow verification without surveillance:

  • Zero-Knowledge Proofs (ZKPs): Prove something (like your age or citizenship) without revealing the underlying documents (a conceptual sketch follows this list)
  • Decentralized Identity (DID): You control what gets shared, and with whom
  • Homomorphic Encryption: Lets platforms compute on encrypted data without ever seeing it
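
To make the first of these concrete, here is a minimal sketch of a Schnorr-style zero-knowledge proof in Python. The numbers are toy-sized and the protocol simplified (real identity systems use vetted libraries, much larger groups, and non-interactive variants), but it shows the core trick: convincing a verifier that you hold a secret without ever transmitting it.

    import secrets

    # Toy parameters for illustration only; real systems use vetted
    # cryptographic libraries and much larger groups.
    p = 23          # a small "safe prime": p = 2*q + 1
    q = 11          # prime order of the subgroup we work in
    g = 4           # generator of that order-q subgroup mod p

    # The prover's secret x never leaves their device; only y is public.
    x = secrets.randbelow(q)
    y = pow(g, x, p)

    # 1. Prover commits to a fresh random nonce.
    r = secrets.randbelow(q)
    t = pow(g, r, p)

    # 2. Verifier issues a random challenge.
    c = secrets.randbelow(q)

    # 3. Prover responds. On its own, s reveals nothing about x.
    s = (r + c * x) % q

    # 4. Verifier checks the relation and learns only that the prover knows x.
    assert pow(g, s, p) == (t * pow(y, c, p)) % p
    print("Proof accepted: secret verified, never revealed")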

But maybe it’s time to question something deeper.
Why is centralized, government-mandated identity collection the foundation of participation in financial life?

This surveillance regime didn’t always exist. It was built.

And just because it’s now common doesn’t mean we should accept it.

We didn’t need it before. We don’t need it now.

It’s time to stop normalizing mass surveillance as a condition for basic financial access.

The system isn’t protecting us.
It’s putting us in danger.

It’s time to say what no one else will

KYC isn’t a necessary evil.
It’s the original sin of financial surveillance.

It’s not a flaw in the system.
It is the system.

And the system needs to go.

Takeaways

  • Check https://HaveIBeenPwned.com to see how much of your identity is already exposed (a small code sketch follows this list)
  • Say no to services that hoard sensitive data
  • Support better alternatives that treat privacy as a baseline, not an afterthought
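
The HaveIBeenPwned website lets you search breached accounts by email address. It also exposes a free Pwned Passwords “range” endpoint built on k-anonymity: only the first five characters of a password’s SHA-1 hash ever leave your machine, so the service never sees the password itself. Here is a minimal sketch in Python (it assumes the endpoint’s documented SUFFIX:COUNT response format; the per-account breach API, by contrast, requires an API key):

    import hashlib
    import urllib.request

    def breach_count(password: str) -> int:
        """Return how many times this password appears in known breaches."""
        digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
        prefix, suffix = digest[:5], digest[5:]
        # Only the 5-character hash prefix is sent over the network.
        url = f"https://api.pwnedpasswords.com/range/{prefix}"
        with urllib.request.urlopen(url) as resp:
            body = resp.read().decode()
        for line in body.splitlines():
            candidate, _, count = line.partition(":")
            if candidate == suffix:
                return int(count)
        return 0

    print(breach_count("correct horse battery staple"))

If the count comes back greater than zero, treat that password as burned and stop using it anywhere.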

Because safety doesn’t come from handing over more information.

It comes from building systems that never need it in the first place.

 

Yours in privacy,
Naomi

Naomi Brockwell is a privacy advocate and professional speaker, MC, interviewer, producer, and podcaster specializing in blockchain, cryptocurrency, and economics. She runs the NBTV channel on YouTube.


Deepfakes are getting scary good

Why your next “urgent” call or ad might be an AI scam.

Published 22 June 2025, 9:35
– By Naomi Brockwell
4 minute read

This week I watched a compilation of video clips that looked absolutely real. Police officers, bank managers, disaster relief workers, product endorsements… but every single one was generated by AI. None of the people, voices, or backdrops ever existed.

It’s fascinating… and chilling. Because the potential for misuse is growing fast, and most people aren’t ready.

This same technology already works in real time. Someone can join a Zoom call, flip a switch, and suddenly look and sound like your boss, your spouse, or your favorite celebrity. That opens the door to a new generation of scams, and people everywhere are falling for them.

The old scripts, supercharged by AI

“Ma’am, I’m seeing multiple threats on your computer. I need remote access right now to secure your files.”
Tech-support scams used to rely on a shaky phone line and a thick accent. Now an AI voice clone mimics a calm AppleCare rep, shares a fake malware alert, and convinces you to install remote-control software. One click later, they’re digging through your files and draining your bank account.

“We’ve detected suspicious activity on your account. Please verify your login.”
Phishing emails are old news. But now people are getting FaceTime calls that look like their bank manager. The cloned face reads off the last four digits of your card, then asks you to confirm the rest. That’s all they need.

“Miracle hearing aids are only $14.99 today. Click to order.”
Fake doctors in lab coats (generated by AI) are popping up in ads, selling junk gadgets. The product either never arrives, or the site skims your card info.

“We just need your Medicare number to update your benefits for this year.”
Seniors are being targeted with robocalls that splice in their grandchild’s real voice. Once the scammer gets your Medicare ID, they start billing for fake procedures that mess up your records.

“Congratulations, you’ve won $1,000,000! Just pay the small claiming fee today.”
Add a fake newscaster to an old lottery scam, and suddenly it feels real. Victims rush to “claim their prize” and wire the fee… straight to a fraudster.

“We’re raising funds for a sick parishioner—can you grab some Apple gift cards?”
Community members are seeing AI-generated videos of their own pastor asking for help. Once the card numbers are sent, they’re gone.

“Can you believe these concert tickets are so cheap?”
AI-generated influencers post about crazy ticket deals. Victims buy, receive a QR code, and show up at the venue, only to find the code has already been used.

“Help our disaster-relief effort.”
Hours after a real hurricane or earthquake, fake charity appeals start circulating. The links look urgent and heartfelt, and route donations to crypto wallets that vanish.

Why we fall for it and what to watch out for

High pressure
Every scammer plays the same four notes: fear, urgency, greed, and empathy. They hit you with a problem that feels like an emergency, offer a reward too good to miss, or ask for help in a moment of vulnerability. These scams only work if you rush. That’s their weak spot. If something feels high-pressure, pause. Don’t make decisions in panic. You can always ask someone you trust for advice.

Borrowed credibility
Deepfakes hijack your instincts. When someone looks and sounds like your boss, your parent, or a celebrity, your brain wants to trust them. But just because you recognize the face doesn’t mean it’s real. Don’t assume a familiar voice or face is proof of identity. Synthetic media can be convincing enough to fool even close friends.

Trusted platforms become delivery trucks
We tend to relax when something comes through a trusted source — like a Zoom call, a blue-check account, or an ad on a mainstream site. But scammers exploit that trust. Just because something shows up on a legitimate platform doesn’t mean it’s safe. The platform’s credibility rubs off on the fake.

Deepfakes aren’t just a technology problem; they’re a human one. For most of history, our eyes and ears were reliable lie detectors. Now that shortcut is broken. And until our instincts catch up, skepticism is your best defense.

How to stay one step ahead

  1. Slow the game down.
    Scammers rely on speed. Hang up, close the tab, take a breath. If it’s real, it’ll still be there in five minutes.
  2. Verify on a second channel.
    If your “bank” or “boss” calls, reach out using a number or app you already trust. Don’t rely on the contact info they provide.
  3. Lock down big moves.
    Use two-factor authentication, passphrases, or code words for any important accounts or transactions (a short sketch for generating a code phrase follows this list).
  4. Educate your circle.
    Most deepfake losses happen because someone else panicked. Talk to your family, especially seniors. Share this newsletter. Report fake ads. Keep each other sharp.
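
Code words work because a cloned voice can imitate someone you love, but it can’t know a phrase you generated at random and agreed on in person. Below is a minimal sketch using Python’s secrets module; the word list is purely illustrative (a real list, such as the EFF diceware list, has thousands of entries and gives far more entropy).

    import secrets

    # Illustrative mini word list; use a large published list in practice.
    WORDS = [
        "amber", "bison", "cedar", "delta", "ember", "fjord",
        "glide", "harbor", "indigo", "juniper", "kettle", "lagoon",
    ]

    def code_phrase(n_words: int = 4) -> str:
        # secrets.choice is cryptographically secure, unlike random.choice.
        return " ".join(secrets.choice(WORDS) for _ in range(n_words))

    # Agree on the phrase face to face, then require it before acting on
    # any "urgent" request for money, gift cards, or credentials.
    print(code_phrase())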

Many of these scams fall apart the moment you stop and think. The goal is always the same: get you to act fast. But you don’t have to play along.

Stay calm. Stay sharp. Stay skeptical.

 

Yours in privacy,
Naomi

Naomi Brockwell is a privacy advocate and professional speaker, MC, interviewer, producer, and podcaster specializing in blockchain, cryptocurrency, and economics. She runs the NBTV channel on YouTube.

Tech company bankrupt – “advanced AI” was 700 Indians

Published 14 June 2025
– By Editorial Staff
2 minute read

An AI company that marketed itself as a technological pioneer – and attracted investments from Microsoft, among others – has gone bankrupt. In the aftermath, it has been revealed that the technology was largely based on human labor, despite promises of advanced artificial intelligence.

Builder.ai, a British startup formerly known as Engineer.ai, claimed that their AI assistant Natasha could build apps as easily as ordering pizza. But as early as 2019, the Wall Street Journal revealed that much of the coding was actually done manually by a total of about 700 programmers in India.

Despite the allegations, Builder.ai secured over $450 million in funding from investors such as Microsoft, Qatar Investment Authority, IFC, and SoftBank’s DeepCore. At its peak, the company was valued at $1.5 billion.

In May 2025, founder and CEO Sachin Dev Duggal stepped down from his position, and when the new management took over, it emerged that the revelations made in 2019 were only the tip of the iceberg. For example, the company had reported revenues of $220 million in 2024, while the actual figures were $55 million. Furthermore, the company is suspected of inflating the figures through circular transactions and fake sales via “third-party resellers”, reports the Financial Times.

Following the new revelations, lenders froze the company’s account, forcing Builder.ai into bankruptcy. The company is now accused of so-called AI washing, which means that a company exaggerates or falsely claims that its products or services are powered by advanced artificial intelligence in order to attract investors and customers.

The company’s heavy promotion of “Natasha” as a revolutionary AI solution turned out to be a facade – behind the deceptive marketing ploy lay traditional, human-driven work and financial irregularities.

OpenAI now keeps your ChatGPT logs… Even if you delete them

Why trusting companies isn’t enough—and what you can do instead.

Published 14 June 2025
– By Naomi Brockwell
5 minute read

This week, we learned something disturbing: OpenAI is now being forced to retain all ChatGPT logs, even the ones users deliberately delete.

That includes:

  • Manually deleted conversations
  • “Temporary Chat” sessions that were never supposed to persist
  • Confidential business data passed through OpenAI’s API

The reason? A court order.

The New York Times and other media companies are suing OpenAI over alleged copyright infringement. As part of the lawsuit, they speculated that people might be using ChatGPT to bypass paywalls, and deleting their chats to cover their tracks. Based on that speculation alone, a judge issued a sweeping preservation order forcing OpenAI to retain every output log going forward.

Even OpenAI doesn’t know how long they’ll be required to keep this data.

This is bigger than just one court case

Let’s be clear: OpenAI is not a privacy tool. They collect a vast amount of user data, and everything you type is tied to your real-world identity. (They don’t even allow VoIP numbers at signup, only real mobile numbers.) OpenAI is a fantastic tool for productivity, coding, research, and brainstorming. But it is not a place to store your secrets.

That said, credit where it’s due: OpenAI is pushing back. They’ve challenged the court order, arguing it undermines user privacy, violates global norms, and forces them to retain sensitive data users explicitly asked to delete.

And they’re right to fight it.

If a company promises, “We won’t keep this”, and users act on that promise, they should be able to trust it. When that promise is quietly overridden by a legal mandate—and users only find out months later—it destroys the trust we rely on to function in a digital society.

Why this should scare you

This isn’t about sneaky opt-ins or buried fine print. It’s about people making deliberate choices to delete sensitive data—and those deletions being ignored.

That’s the real problem: the nullification of your right to delete.

Private thoughts. Business strategy. Health questions. Intimate disclosures. These are now being held under legal lock, despite clear user intent for them to be erased.

When a platform offers a “Delete” button or advertises “Temporary Chat”, the public expectation is clear: that information will not persist.

But in a system built for compliance, not consent, those expectations don’t matter.

I wish this weren’t the case

I want to live in a world where:

  • You can go to the doctor and trust that your medical records won’t be subpoenaed
  • You can talk to a lawyer without fearing your conversations could become public
  • Companies that want to protect your privacy aren’t forced to become surveillance warehouses

But we don’t live in that world.

We live in a world where:

  • Prosecutors can compel companies to hand over privileged legal communications (just ask Roger Ver’s lawyers)
  • Government entities can override privacy policies, without user consent or notification
  • “Delete” no longer means delete

This isn’t privacy. It’s panopticon compliance.

So what can you do?

You can’t change the court order.
But you can stop feeding the machine.

Here’s how to protect yourself:

1. Be careful what you share

When you’re logged in to centralized tools like ChatGPT, Claude, or Perplexity, your activity is stored and linked to a single identity across sessions. That makes your full history a treasure trove of data.

You can still use these tools for light, non-sensitive tasks, but be careful not to share:

  • Sensitive information
  • Legal or business strategies
  • Financial details
  • Anything that could harm you if leaked

These tools are great for brainstorming and productivity, but not for contracts, confessions, or client files.

2. Use privacy-respecting platforms (with caution)

If you want to use AI tools with stronger privacy protections, here are two promising options (there are many more; let us know your favorites in the comments):

Brave’s Leo

  • Uses reverse proxies to strip IP addresses
  • Promises zero logging of queries
  • Supports local model integration so your data never leaves your device
  • Still requires trust in Brave’s infrastructure

Venice.ai

  • No account required
  • Strips IP addresses and doesn’t link sessions together
  • Uses a decentralized GPU marketplace to process your queries
  • Important caveat: Venice is just a frontend—the compute providers running your prompts can see what you input. Venice can’t enforce logging policies on backend providers.
  • Because it’s decentralized, at least no single provider can build a profile of you across sessions

In short: I trust Brave with more data, because privacy is central to their mission. And I trust Venice’s promise not to log data, but am hesitant about trusting faceless GPU providers to adhere to the same no-logging policies. But as a confidence booster, Venice’s decentralized model means even those processing your queries can’t see the full picture, which is a powerful safeguard in itself. So both options above are good for different purposes.

3. Run AI locally for maximum privacy

This is the gold standard.

When you run an AI model locally, your data never leaves your machine. No cloud. No logs.

Tools like Ollama, paired with OpenWebUI, let you easily run powerful open-source models on your own device.
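
For a sense of what “local” means in practice, here is a minimal sketch that queries a default Ollama install on your own machine. It assumes the Ollama server is running on its default port (11434) and that you have already pulled a model, for example with “ollama pull llama3”:

    import json
    import urllib.request

    def ask_local_model(prompt: str, model: str = "llama3") -> str:
        """Send a prompt to a locally running Ollama server."""
        payload = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
        req = urllib.request.Request(
            "http://localhost:11434/api/generate",
            data=payload,
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            return json.loads(resp.read())["response"]

    # The prompt and the answer never leave your machine.
    print(ask_local_model("What do cloud AI assistants typically log about users?"))

Point the same question at a cloud API and the prompt, plus anything sensitive inside it, lands on someone else’s server. Here it never leaves your hardware.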

We published a complete guide for getting started—even if you’re not technical.

The real battle: Your right to privacy

This isn’t just about one lawsuit or one company.

It’s about whether privacy means anything in the digital age.

AI tools are rapidly becoming our therapists, doctors, legal advisors, and confidants. They know what we eat, what we’re worried about, what we dream of, and what we fear. That kind of relationship demands confidentiality.

And yet, here we are, watching that expectation collapse under the weight of compliance.

If courts can force companies to preserve deleted chats indefinitely, then deletion becomes a lie. Consent becomes meaningless. And companies become surveillance hubs for whoever yells loudest in court.

The Fourth Amendment was supposed to stop this. It says a warrant is required before private data can be seized. But courts are now sidestepping that by ordering companies to keep everything in advance—just in case.

We should be fighting to reclaim that right. Not normalizing its erosion.

Final Thoughts

We are in a moment of profound transition.

AI is rapidly becoming integrated into our daily lives—not just as a search tool, but as a confidant, advisor, and assistant. That makes the stakes for privacy higher than ever.

If we want a future where privacy survives, we can’t just rely on the courts to protect us. We have to be deliberate about how we engage with technology—and push for tools that respect us by design.

As Erik Voorhees put it: “The only way to respect user privacy is to not keep their data in the first place”.

The good news? That kind of privacy is still possible.
You have options. You can use AI on your terms.

Just remember:

Privacy isn’t about hiding. It’s about control.
About choosing what you share—and with whom.

And right now, the smartest choice might be to share a whole lot less.

 

Yours in privacy,
Naomi

Naomi Brockwell is a privacy advocate and professional speaker, MC, interviewer, producer, and podcaster specializing in blockchain, cryptocurrency, and economics. She runs the NBTV channel on YouTube.

Swedish police urge parents to delete chat apps from children’s phones


Published 13 June 2025
– By Editorial Staff
2 minute read

Ahead of the summer holidays, the Swedish police are warning that criminal gangs are using social media to recruit young people into crime. On Facebook, the authorities have published a list of apps that parents should keep a close eye on – or delete immediately.

Critics argue, however, that the list is arbitrary and that it is strange for the police to urge parents to delete apps that are used by Swedish authorities.

During the summer holidays, adults are often less present in young people’s everyday lives, while screen time increases. According to the police, this creates increased vulnerability. Criminal networks then try to recruit young people to handle weapons, sell drugs, or participate in serious violent crimes such as shootings and explosions.

To prevent this, a national information campaign has been launched in collaboration with the County Administrative Board. Together, the two authorities have compiled a list of mobile apps that they believe pose a significant risk:

  • Delete immediately: Signal, Telegram, Wickr Me
  • Keep control over: Snapchat, WhatsApp, Discord, Messenger
  • Monitor closely: TikTok, Instagram

Digital parental presence

Maja Karlsson, municipal police officer in Jönköping, also emphasizes the importance of digital parental presence:

– We need to increase digital control and knowledge about which apps my child is using, who they are in contact with, and why they have downloaded different types of communication apps.

The police recommend that parents talk openly with their children about what they do online and use technical aids such as parental controls.

– There are tools available for parents who find it difficult. It’s not impossible; help is available, Karlsson continues.

Parents are also encouraged to establish fixed routines for their children and ensure they have access to meaningful summer activities.

“Complete madness”

However, the list has been met with harsh criticism from several quarters. Users point out that the Signal app is also used by the Swedish Armed Forces and question why the police list it as dangerous.

“If general apps like Signal are considered dangerous, the phone app and text messaging should be first on the list”, writes another user.

Critics argue that it is not the apps themselves but how they are used that is crucial, and find it remarkable that the police are arbitrarily and without deeper justification telling parents which messaging apps are okay to use and which are not.

“Complete madness to recommend uninstalling chat apps so broadly. You should know better”, comments another upset reader.
