Monday, June 16, 2025

Polaris of Enlightenment

KYC is the crime

The Coinbase hack shows how state-mandated surveillance is putting lives at risk.

Published 31 May 2025
– By Naomi Brockwell
4 minute read

Last week, Coinbase got hacked.

Hackers demanded a $20 million ransom after breaching a third-party system. They didn’t get passwords or crypto keys. But what they did get will put lives at risk:

  • Names
  • Home addresses
  • Phone numbers
  • Partial Social Security numbers
  • Identity documents
  • Bank info

That’s everything someone needs to impersonate you, blackmail you, or show up at your front door.

This isn’t hypothetical. There’s a growing wave of kidnappings and extortion targeting people with crypto exposure. Criminals are using leaked identity data to find victims and hold them hostage.

Let’s be clear: KYC doesn’t just put your data at risk. It puts people at risk.

Naturally, people are furious at any company that leaks their information.

But here’s the bigger issue:
No system is unhackable.
Every major institution, from the IRS to the State Department, has suffered breaches.
Protecting sensitive data at scale is nearly impossible.

And Coinbase didn’t want to collect this data.
Many companies don’t. It’s a massive liability.
They’re forced to, by law.

A new, dangerous normal

KYC, short for Know Your Customer, has become just another box to check.

Open a bank account? Upload your ID.
Use a crypto exchange? Add your selfie and utility bill.
Sign up for a payment app? Same thing.

But it wasn’t always this way.

Until the 1970s, you could walk into a bank with cash and open an account. Your financial life was private by default.

That changed with the Bank Secrecy Act of 1970, which required banks to start collecting and reporting customer activity to the government. Still, KYC wasn’t yet formalized. Each bank decided how well they needed to know someone. If you’d been a customer since childhood, or had a family member vouch for you, that was often enough.

Then came the Patriot Act of 2001, which turned KYC into law. It required every financial institution to collect, verify, and store identity documents from every customer, not just for large or suspicious transactions, but for basic access to the financial system.

From that point on, privacy wasn’t the default. It was erased.

The real-world cost

Today, everyone is surveilled all the time.
We’ve built an identity dragnet, and people are being hurt because of it.

Criminals use leaked KYC data to find and target people, and it’s not just millionaires. It’s regular people, and sometimes their parents, partners, or even children.

It’s happened in London, Buenos Aires, Dubai, Lagos, Los Angeles, all over the world.
Some are robbed. Some are held for ransom.
Some don’t survive.

These aren’t edge cases. They’re the direct result of forcing companies to collect and store sensitive personal data.

When we force companies to hoard identity data, we guarantee it will eventually fall into the wrong hands.

“There are two types of companies: those that have been hacked, and those that don’t yet know they’ve been hacked” – former Cisco CEO John Chambers

What KYC actually does

KYC turns every financial institution into a surveillance node.
It turns your personal information into a liability.

It doesn’t just increase risk. It creates it.

KYC is part of a global surveillance infrastructure. It feeds into databases governments share and query without your knowledge. It creates chokepoints where access to basic services depends on surrendering your privacy. And it deputizes companies to collect and hold sensitive data they never wanted.

If you’re trying to rob a vault, you go where the gold is.
If you’re trying to target people, you go where the data lives.

KYC creates those vaults, legally mandated, poorly secured, and irresistible to attackers.

Does it even work?

We’re told KYC is necessary to stop terrorism and money laundering.

But the top reasons banks file “suspicious activity reports” are banal, like someone withdrawing “too much” of their own money.

We’re told to accept this surveillance because it might stop a bad actor someday.

In practice, it does more to expose innocent people than to catch criminals.

KYC doesn’t prevent crime.
It creates the conditions for it.

A Better Path Exists

We don’t have to live like this.

Better tools already exist, tools that allow verification without surveillance:

  • Zero-Knowledge Proofs (ZKPs): Prove something (like your age or citizenship) without revealing documents
  • Decentralized Identity (DID): You control what gets shared, and with whom
  • Homomorphic Encryption: Allows platforms to verify encrypted data without ever seeing it
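To make the first idea concrete, here is a toy sketch of a Schnorr-style zero-knowledge proof in Python: the prover convinces a verifier that it knows a secret value without ever revealing it. The parameters (a Mersenne prime modulus, base 5, SHA-256 as the Fiat-Shamir challenge) are illustrative assumptions for readability; real deployments use standardized elliptic-curve groups and vetted libraries, not hand-rolled code.

```python
import hashlib
import secrets

# Toy Schnorr-style zero-knowledge proof (non-interactive, Fiat-Shamir).
# The prover shows it knows x with y = G^x mod P without revealing x.
# P and G are illustrative toy parameters, NOT production-grade.
P = 2**127 - 1   # a Mersenne prime, used here as a toy group modulus
G = 5            # toy base element

def prove(x: int) -> tuple[int, int, int]:
    """Return (y, t, s): public value, commitment, and response."""
    y = pow(G, x, P)
    r = secrets.randbelow(P - 1)   # fresh one-time nonce
    t = pow(G, r, P)               # commitment G^r
    c = int.from_bytes(hashlib.sha256(f"{y}:{t}".encode()).digest(), "big")
    s = (r + c * x) % (P - 1)      # response; r masks x, so s alone leaks nothing
    return y, t, s

def verify(y: int, t: int, s: int) -> bool:
    """Accept iff G^s == t * y^c (mod P) — checked without ever seeing x."""
    c = int.from_bytes(hashlib.sha256(f"{y}:{t}".encode()).digest(), "big")
    return pow(G, s, P) == (t * pow(y, c, P)) % P
```

The same pattern, built on real credentials instead of a raw secret, is what lets you prove “I am over 18” without handing over a passport scan.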

But maybe it’s time to question something deeper.
Why is centralized, government-mandated identity collection the foundation of participation in financial life?

This surveillance regime didn’t always exist. It was built.

And just because it’s now common doesn’t mean we should accept it.

We didn’t need it before. We don’t need it now.

It’s time to stop normalizing mass surveillance as a condition for basic financial access.

The system isn’t protecting us.
It’s putting us in danger.

It’s time to say what no one else will

KYC isn’t a necessary evil.
It’s the original sin of financial surveillance.

It’s not a flaw in the system.
It is the system.

And the system needs to go.

Takeaways

  • Check https://HaveIBeenPwned.com to see how much of your identity is already exposed
  • Say no to services that hoard sensitive data
  • Support better alternatives that treat privacy as a baseline, not an afterthought
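The first takeaway has a neat technical footnote: the companion Pwned Passwords service uses a k-anonymity scheme so you can check a password against known breaches without ever sending the password itself. The sketch below shows the client-side half; only the first five hex characters of the SHA-1 hash would leave your machine.

```python
import hashlib

# Client-side half of the Pwned Passwords k-anonymity lookup:
# hash locally, send only a 5-character hash prefix to the server,
# and compare the returned suffixes locally. The password itself
# never leaves your device.
def k_anonymity_parts(password: str) -> tuple[str, str]:
    digest = hashlib.sha1(password.encode()).hexdigest().upper()
    return digest[:5], digest[5:]  # (prefix sent to server, suffix kept local)

# The actual lookup (network call, shown for context only):
#   GET https://api.pwnedpasswords.com/range/<prefix>
# returns all breached-hash suffixes sharing that prefix.
```

It is a small, working example of the article’s thesis: verification does not have to mean surrender of the underlying data.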

Because safety doesn’t come from handing over more information.

It comes from building systems that never need it in the first place.

 

Yours in privacy,
Naomi

Naomi Brockwell is a privacy advocate and professional speaker, MC, interviewer, producer, and podcaster, specialising in blockchain, cryptocurrency, and economics. She runs the NBTV channel on YouTube.

TNT is truly independent!

We don’t have a billionaire owner, and our unique reader-funded model keeps us free from political or corporate influence. This means we can fearlessly report the facts and shine a light on the misdeeds of those in power.

Consider a donation to keep our independent journalism running…

Tech company bankrupt – “advanced AI” was 700 Indians

Published 14 June 2025
– By Editorial Staff
2 minute read

“AI washing” refers to a company exaggerating or lying about its products or services being powered by advanced artificial intelligence in order to attract investors and customers.

An AI company that marketed itself as a technological pioneer – and attracted investments from Microsoft, among others – has gone bankrupt. In the aftermath, it has been revealed that the technology was largely based on human labor, despite promises of advanced artificial intelligence.

Builder.ai, a British startup formerly known as Engineer.ai, claimed that their AI assistant Natasha could build apps as easily as ordering pizza. But as early as 2019, the Wall Street Journal revealed that much of the coding was actually done manually by a total of about 700 programmers in India.

Despite the allegations, Builder.ai secured over $450 million in funding from investors such as Microsoft, Qatar Investment Authority, IFC, and SoftBank’s DeepCore. At its peak, the company was valued at $1.5 billion.

In May 2025, founder and CEO Sachin Dev Duggal stepped down from his position, and when the new management took over, it emerged that the revelations made in 2019 were only the tip of the iceberg. For example, the company had reported revenues of $220 million in 2024, while the actual figures were $55 million. Furthermore, the company is suspected of inflating the figures through circular transactions and fake sales via “third-party resellers”, reports the Financial Times.

Following the new revelations, lenders froze the company’s account, forcing Builder.ai into bankruptcy. The company is now accused of so-called AI washing, which means that a company exaggerates or falsely claims that its products or services are powered by advanced artificial intelligence in order to attract investors and customers.

The company’s heavy promotion of “Natasha” as a revolutionary AI solution turned out to be a facade – behind the deceptive marketing ploy lay traditional, human-driven work and financial irregularities.

OpenAI now keeps your ChatGPT logs… Even if you delete them

Why trusting companies isn’t enough—and what you can do instead.

Published 14 June 2025
– By Naomi Brockwell
5 minute read

This week, we learned something disturbing: OpenAI is now being forced to retain all ChatGPT logs, even the ones users deliberately delete.

That includes:

  • Manually deleted conversations
  • “Temporary Chat” sessions that were never supposed to persist
  • Confidential business data passed through OpenAI’s API

The reason? A court order.

The New York Times and other media companies are suing OpenAI over alleged copyright infringement. As part of the lawsuit, they speculated that people might be using ChatGPT to bypass paywalls, and deleting their chats to cover their tracks. Based on that speculation alone, a judge issued a sweeping preservation order forcing OpenAI to retain every output log going forward.

Even OpenAI doesn’t know how long they’ll be required to keep this data.

This is bigger than just one court case

Let’s be clear: OpenAI is not a privacy tool. They collect a vast amount of user data, and everything you type is tied to your real-world identity. (They don’t even allow VoIP numbers at signup, only real mobile numbers.) OpenAI is a fantastic tool for productivity, coding, research, and brainstorming. But it is not a place to store your secrets.

That said, credit where it’s due: OpenAI is pushing back. They’ve challenged the court order, arguing it undermines user privacy, violates global norms, and forces them to retain sensitive data users explicitly asked to delete.

And they’re right to fight it.

If a company promises, “We won’t keep this”, and users act on that promise, they should be able to trust it. When that promise is quietly overridden by a legal mandate—and users only find out months later—it destroys the trust we rely on to function in a digital society.

Why this should scare you

This isn’t about sneaky opt-ins or buried fine print. It’s about people making deliberate choices to delete sensitive data—and those deletions being ignored.

That’s the real problem: the nullification of your right to delete.

Private thoughts. Business strategy. Health questions. Intimate disclosures. These are now being held under legal lock, despite clear user intent for them to be erased.

When a platform offers a “Delete” button or advertises “Temporary Chat”, the public expectation is clear: that information will not persist.

But in a system built for compliance, not consent, those expectations don’t matter.

I wish this weren’t the case

I want to live in a world where:

  • You can go to the doctor and trust that your medical records won’t be subpoenaed
  • You can talk to a lawyer without fearing your conversations could become public
  • Companies that want to protect your privacy aren’t forced to become surveillance warehouses

But we don’t live in that world.

We live in a world where:

  • Prosecutors can compel companies to hand over privileged legal communications (just ask Roger Ver’s lawyers)
  • Government entities can override privacy policies, without user consent or notification
  • “Delete” no longer means delete

This isn’t privacy. It’s panopticon compliance.

So what can you do?

You can’t change the court order.
But you can stop feeding the machine.

Here’s how to protect yourself:

1. Be careful what you share

When you’re logged in to centralized tools like ChatGPT, Claude, or Perplexity, your activity is stored and linked to a single identity across sessions. That makes your full history a treasure trove of data.

You can still use these tools for light, non-sensitive tasks, but be careful not to share:

  • Sensitive information
  • Legal or business strategies
  • Financial details
  • Anything that could harm you if leaked

These tools are great for brainstorming and productivity, but not for contracts, confessions, or client files.

2. Use privacy-respecting platforms (with caution)

If you want to use AI tools with stronger privacy protections, here are two promising options:
(there are many more, let us know in the comments about your favorites)

Brave’s Leo

  • Uses reverse proxies to strip IP addresses
  • Promises zero logging of queries
  • Supports local model integration so your data never leaves your device
  • Still requires trust in Brave’s infrastructure

Venice.ai

  • No account required
  • Strips IP addresses and doesn’t link sessions together
  • Uses a decentralized GPU marketplace to process your queries
  • Important caveat: Venice is just a frontend—the compute providers running your prompts can see what you input. Venice can’t enforce logging policies on backend providers.
  • Because it’s decentralized, at least no single provider can build a profile of you across sessions

In short: I trust Brave with more data, because privacy is central to their mission. And I trust Venice’s promise not to log data, but am hesitant about trusting faceless GPU providers to adhere to the same no-logging policies. But as a confidence booster, Venice’s decentralized model means even those processing your queries can’t see the full picture, which is a powerful safeguard in itself. So both options above are good for different purposes.

3. Run AI locally for maximum privacy

This is the gold standard.

When you run an AI model locally, your data never leaves your machine. No cloud. No logs.

Tools like Ollama, paired with OpenWebUI, let you easily run powerful open-source models on your own device.

We published a complete guide for getting started—even if you’re not technical.
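For the curious, here is a minimal sketch of talking to a locally running Ollama server from Python, using its default HTTP endpoint. It assumes `ollama serve` is running on your machine and that a model has been pulled; the model name `llama3` is just an example. The point is the architecture: the prompt goes to `localhost`, not to a cloud provider.

```python
import json
import urllib.request

# Minimal sketch: query a locally running Ollama server. The prompt is
# sent only to localhost, so it never leaves your machine. Assumes
# `ollama serve` is running and the named model has been pulled.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(model: str, prompt: str) -> urllib.request.Request:
    """Construct the POST request for Ollama's /api/generate endpoint."""
    body = json.dumps({"model": model, "prompt": prompt, "stream": False})
    return urllib.request.Request(
        OLLAMA_URL,
        data=body.encode(),
        headers={"Content-Type": "application/json"},
    )

def ask_local(model: str, prompt: str) -> str:
    """Send the prompt to the local model and return its response text."""
    with urllib.request.urlopen(build_request(model, prompt)) as resp:
        return json.loads(resp.read())["response"]

# Example (requires a running Ollama instance):
#   print(ask_local("llama3", "Summarize the Bank Secrecy Act in one line."))
```

No account, no identity linkage, no retention order that can reach your chat history, because there is no third party holding it.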

The real battle: Your right to privacy

This isn’t just about one lawsuit or one company.

It’s about whether privacy means anything in the digital age.

AI tools are rapidly becoming our therapists, doctors, legal advisors, and confidants. They know what we eat, what we’re worried about, what we dream of, and what we fear. That kind of relationship demands confidentiality.

And yet, here we are, watching that expectation collapse under the weight of compliance.

If courts can force companies to preserve deleted chats indefinitely, then deletion becomes a lie. Consent becomes meaningless. And companies become surveillance hubs for whoever yells loudest in court.

The Fourth Amendment was supposed to stop this. It says a warrant is required before private data can be seized. But courts are now sidestepping that by ordering companies to keep everything in advance—just in case.

We should be fighting to reclaim that right. Not normalizing its erosion.

Final Thoughts

We are in a moment of profound transition.

AI is rapidly becoming integrated into our daily lives—not just as a search tool, but as a confidant, advisor, and assistant. That makes the stakes for privacy higher than ever.

If we want a future where privacy survives, we can’t just rely on the courts to protect us. We have to be deliberate about how we engage with technology—and push for tools that respect us by design.

As Erik Voorhees put it: “The only way to respect user privacy is to not keep their data in the first place”.

The good news? That kind of privacy is still possible.
You have options. You can use AI on your terms.

Just remember:

Privacy isn’t about hiding. It’s about control.
About choosing what you share—and with whom.

And right now, the smartest choice might be to share a whole lot less.

 

Yours in privacy,
Naomi

Naomi Brockwell is a privacy advocate and professional speaker, MC, interviewer, producer, and podcaster, specialising in blockchain, cryptocurrency, and economics. She runs the NBTV channel on YouTube.

Swedish police urge parents to delete chat apps from children’s phones


Published 13 June 2025
– By Editorial Staff
2 minute read

Ahead of the summer holidays, the Swedish police are warning that criminal gangs are using social media to recruit young people into crime. On Facebook, the authorities have published a list of apps that parents should keep a close eye on – or delete immediately.

Critics argue, however, that the list is arbitrary and that it is strange for the police to urge parents to delete apps that are used by Swedish authorities.

During the summer holidays, adults are often less present in young people’s everyday lives, while screen time increases. According to the police, this creates increased vulnerability. Criminal networks then try to recruit young people to handle weapons, sell drugs, or participate in serious violent crimes such as shootings and explosions.

To prevent this, a national information campaign has been launched in collaboration with the County Administrative Board. The police, together with the County Administrative Board, have compiled a list of mobile apps that they believe pose a significant risk:

  • Delete immediately: Signal, Telegram, Wickr Me
  • Keep control over: Snapchat, WhatsApp, Discord, Messenger
  • Monitor closely: TikTok, Instagram

Digital parental presence

Maja Karlsson, municipal police officer in Jönköping, also emphasizes the importance of digital parental presence:

– We need to increase digital control and knowledge about which apps my child is using, who they are in contact with, and why they have downloaded different types of communication apps.

The police recommend that parents talk openly with their children about what they do online and use technical aids such as parental controls.

– There are tools available for parents who find it difficult. It’s not impossible, help is available, Karlsson continues.

Parents are also encouraged to establish fixed routines for their children and ensure they have access to meaningful summer activities.

“Complete madness”

However, the list has been met with harsh criticism from several quarters. Users point out that the Signal app is also used by the Swedish Armed Forces and question why the police list it as dangerous.

“If general apps like Signal are considered dangerous, the phone app and text messaging should be first on the list”, writes one user.

Critics argue that it is not the apps themselves but how they are used that is crucial, and find it remarkable that the police are arbitrarily and without deeper justification telling parents which messaging apps are okay to use and which are not.

“Complete madness to recommend uninstalling chat apps so broadly. You should know better”, comments another upset reader.

Organic Maps – the map app that doesn’t map you

Advertising partnership with Teuton Systems

Tired of Google Maps tracking you? Here's the free alternative that lets you navigate completely offline!

Published 12 June 2025
Organic Maps allows you to navigate completely offline, for example when you have poor coverage or are hiking in the wilderness.
4 minute read

In our series on open, surveillance-free apps, we take a closer look at Organic Maps – a map app that stands out as a privacy-friendly alternative to Google Maps. For many smartphone users, Google Maps has become the standard for navigation, but that convenience comes at a price: extensive collection of location data and dependence on a constant internet connection. Organic Maps is a free, open-source app (FOSS) that takes a completely different approach. Here, you can navigate without being tracked and without being tied to an internet connection.

Unlike Google Maps, which is neither open source nor particularly privacy-friendly, Organic Maps is built on open source and created by a community. The source code is openly available, which means that independent developers can review and improve the app. Most importantly, Organic Maps contains no trackers – it does not collect your personal information or location data at all.

The app also has no ads or hidden data collection services running in the background. You don’t need to log in or give away any information – privacy is a core principle. Thanks to the open code, users can trust that there are no ulterior motives; it’s all about providing maps and navigation, nothing else.

Works completely offline – everywhere

One of the biggest advantages of Organic Maps is that the app works completely offline. All map data is based on the community project OpenStreetMap, which covers the entire world. You choose which maps (countries or regions) you want to download to your phone, and then you can navigate freely without the internet. Unlike Google and Apple Maps – whose offline features are very limited and lack full search or navigation functionality outside of the network – Organic Maps offers 100% of its features without a connection.

Searching for addresses and places, viewing points of interest, and turn-by-turn voice guidance work just as well offline as online. This means you can use the app in airplane mode, abroad without roaming, or far out in the wilderness.

Sample screenshots from Organic Maps: An offline map of some nature reserves, navigation in night mode, menu for downloading maps, and menu for map layers.

Since Organic Maps is based on OpenStreetMap, you also get very detailed maps. The community updates the maps continuously with everything from new bike paths to small forest trails. For example, one technology writer noted that he has yet to encounter a hiking trail missing from Organic Maps – often it includes information that large map services miss. This makes the app particularly popular among outdoor enthusiasts, but everyone benefits: even regular roads, addresses, and points of interest are extensively covered thanks to OpenStreetMap. In short, the offline map gives you the peace of mind that the map is always available, no matter where you are.

Battery-efficient navigation

Offline navigation not only gives you freedom from the mobile network – it also saves battery power. Organic Maps is remarkably energy efficient and uses minimal power compared to many other navigation services. Without constant data traffic, background tracking, or heavy advertising, the app can focus on what it’s supposed to do and nothing more. One reviewer says he used the app during several days of hiking without having to charge his phone.

The developers themselves claim that you can go on a week-long trip on a single charge with Organic Maps as your guide. For those who travel frequently or are simply tired of GPS draining their battery, this is a game-changer. Its energy efficiency also makes Organic Maps well suited for older or simpler smartphones that may have weaker batteries – the app is lightweight and resource-efficient.

Available for Android and iPhone

Despite its different philosophy, Organic Maps is as easy to get and use as any popular app. It is available to download for free for both Android and iOS – you can find it in the Google Play Store and Apple’s App Store. For those who use completely Google-free phones (such as GrapheneOS on the Matrix mobile), it is also available through alternative open app stores such as F-Droid.

The interface is intuitive and similar to other map apps, so the barrier to switching is low. You can search for addresses or businesses, bookmark your favorite places, and get turn-by-turn voice directions. All these features are available offline after you download the maps for the area you need. In short, you get a full-featured map service on your phone – but without the surveillance.

Pre-installed on the Matrix phone

Organic Maps has become a staple in privacy-focused circles. Teuton Systems pre-installs the app on its Matrix phone – a security-focused Android smartphone based on GrapheneOS – as part of a Google-free ecosystem. This gives users a map service that respects their privacy right from the start. But even if you don’t own a Matrix mobile phone, you can still easily enjoy the benefits. Replacing Google Maps with Organic Maps on your current phone is a step towards a more privacy-secure everyday life, without losing any functionality. The app is completely free and open for everyone to try.

Organic Maps exemplifies how free and open software can give us, the average user, more control. You don’t have to worry about being tracked when you look up an address or navigate to a destination, and you can trust that the app only does what it says it does. The combination of open source code, offline capability, and top-notch privacy has earned the app excellent recommendations in tech media.

For those who value their privacy – or just want a reliable map app that works everywhere – Organic Maps is an inspiring alternative that shows it’s possible to navigate freely without giving up your privacy!

 

Features of Organic Maps

The ultimate app for travelers, tourists, hikers and cyclists:

  • Detailed offline maps with locations not found on other maps, thanks to OpenStreetMap
  • Bike paths, hiking trails and walking routes
  • Contour lines, elevation profiles, peaks and slopes
  • Turn-by-turn navigation for walking, cycling and car navigation with voice guidance, Android Auto
  • Quick offline map search
  • Export and import bookmarks in KML/KMZ format, import GPX
  • Dark mode to protect your eyes
  • Downloaded countries and regions take up little storage space
  • Free and open source

Our independent journalism needs your support!
Consider a donation.

You can donate any amount of your choosing, one-time payment or even monthly.
We appreciate all of your donations to keep us alive and running.

Don’t miss another article!

Sign up for our newsletter today!

Get uncensored news – free from industry interests and political correctness – from the Polaris of Enlightenment, every week.