
OpenAI now keeps your ChatGPT logs… Even if you delete them

Why trusting companies isn’t enough—and what you can do instead.

Published 14 June 2025
– By Naomi Brockwell
5 minute read

This week, we learned something disturbing: OpenAI is now being forced to retain all ChatGPT logs, even the ones users deliberately delete.

That includes:

  • Manually deleted conversations
  • “Temporary Chat” sessions that were never supposed to persist
  • Confidential business data passed through OpenAI’s API

The reason? A court order.

The New York Times and other media companies are suing OpenAI over alleged copyright infringement. As part of the lawsuit, they speculated that people might be using ChatGPT to bypass paywalls, and deleting their chats to cover their tracks. Based on that speculation alone, a judge issued a sweeping preservation order forcing OpenAI to retain every output log going forward.

Even OpenAI doesn’t know how long they’ll be required to keep this data.

This is bigger than just one court case

Let’s be clear: OpenAI is not a privacy tool. The company collects a vast amount of user data, and everything you type is tied to your real-world identity. (It doesn’t even allow VoIP numbers at signup, only real mobile numbers.) ChatGPT is a fantastic tool for productivity, coding, research, and brainstorming. But it is not a place to store your secrets.

That said, credit where it’s due: OpenAI is pushing back. They’ve challenged the court order, arguing it undermines user privacy, violates global norms, and forces them to retain sensitive data users explicitly asked to delete.

And they’re right to fight it.

If a company promises, “We won’t keep this”, and users act on that promise, they should be able to trust it. When that promise is quietly overridden by a legal mandate—and users only find out months later—it destroys the trust we rely on to function in a digital society.

Why this should scare you

This isn’t about sneaky opt-ins or buried fine print. It’s about people making deliberate choices to delete sensitive data—and those deletions being ignored.

That’s the real problem: the nullification of your right to delete.

Private thoughts. Business strategy. Health questions. Intimate disclosures. These are now being held under legal lock, despite clear user intent for them to be erased.

When a platform offers a “Delete” button or advertises “Temporary Chat”, the public expectation is clear: that information will not persist.

But in a system built for compliance, not consent, those expectations don’t matter.

I wish this weren’t the case

I want to live in a world where:

  • You can go to the doctor and trust that your medical records won’t be subpoenaed
  • You can talk to a lawyer without fearing your conversations could become public
  • Companies that want to protect your privacy aren’t forced to become surveillance warehouses

But we don’t live in that world.

We live in a world where:

  • Prosecutors can compel companies to hand over privileged legal communications (just ask Roger Ver’s lawyers)
  • Government entities can override privacy policies, without user consent or notification
  • “Delete” no longer means delete

This isn’t privacy. It’s panopticon compliance.

So what can you do?

You can’t change the court order.
But you can stop feeding the machine.

Here’s how to protect yourself:

1. Be careful what you share

When you’re logged in to centralized tools like ChatGPT, Claude, or Perplexity, your activity is stored and linked to a single identity across sessions. That makes your full history a treasure trove of data.

You can still use these tools for light, non-sensitive tasks, but be careful not to share:

  • Personal or medical information
  • Legal or business strategies
  • Financial details
  • Anything that could harm you if leaked

These tools are great for brainstorming and productivity, but not for contracts, confessions, or client files.

2. Use privacy-respecting platforms (with caution)

If you want to use AI tools with stronger privacy protections, here are two promising options:
(there are many more, let us know in the comments about your favorites)

Brave’s Leo

  • Uses reverse proxies to strip IP addresses
  • Promises zero logging of queries
  • Supports local model integration so your data never leaves your device
  • Still requires trust in Brave’s infrastructure

Venice.ai

  • No account required
  • Strips IP addresses and doesn’t link sessions together
  • Uses a decentralized GPU marketplace to process your queries
  • Important caveat: Venice is just a frontend—the compute providers running your prompts can see what you input. Venice can’t enforce logging policies on backend providers.
  • Because it’s decentralized, at least no single provider can build a profile of you across sessions

In short: I trust Brave with more data, because privacy is central to their mission. I also trust Venice’s promise not to log data, but I’m hesitant to trust faceless GPU providers to adhere to the same no-logging policy. Still, Venice’s decentralized model means no single provider processing your queries can see the full picture, which is a powerful safeguard in itself. Both options are good for different purposes.

3. Run AI locally for maximum privacy

This is the gold standard.

When you run an AI model locally, your data never leaves your machine. No cloud. No logs.

Tools like Ollama, paired with OpenWebUI, let you easily run powerful open-source models on your own device.

We published a complete guide for getting started—even if you’re not technical.
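To give a sense of what “local” means in practice, here is a minimal sketch in Python that sends a prompt to a locally running Ollama server over its HTTP API. It assumes you have installed Ollama and pulled a model; the model name llama3 is just an example:

```python
# Minimal sketch: query a locally running Ollama server.
# Assumes Ollama is installed and a model has been pulled, e.g.: ollama pull llama3
import json
import urllib.request

def ask_local_model(prompt: str, model: str = "llama3") -> str:
    payload = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
    request = urllib.request.Request(
        "http://localhost:11434/api/generate",  # Ollama's default local endpoint
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        return json.loads(response.read())["response"]

print(ask_local_model("In one sentence, why does running AI locally protect privacy?"))
```

Everything in that exchange stays on localhost: the prompt, the model, and the answer never touch a third-party server.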

The real battle: Your right to privacy

This isn’t just about one lawsuit or one company.

It’s about whether privacy means anything in the digital age.

AI tools are rapidly becoming our therapists, doctors, legal advisors, and confidants. They know what we eat, what we’re worried about, what we dream of, and what we fear. That kind of relationship demands confidentiality.

And yet, here we are, watching that expectation collapse under the weight of compliance.

If courts can force companies to preserve deleted chats indefinitely, then deletion becomes a lie. Consent becomes meaningless. And companies become surveillance hubs for whoever yells loudest in court.

The Fourth Amendment was supposed to stop this. It says a warrant is required before private data can be seized. But courts are now sidestepping that by ordering companies to keep everything in advance—just in case.

We should be fighting to reclaim that right. Not normalizing its erosion.

Final Thoughts

We are in a moment of profound transition.

AI is rapidly becoming integrated into our daily lives—not just as a search tool, but as a confidant, advisor, and assistant. That makes the stakes for privacy higher than ever.

If we want a future where privacy survives, we can’t just rely on the courts to protect us. We have to be deliberate about how we engage with technology—and push for tools that respect us by design.

As Erik Voorhees put it: “The only way to respect user privacy is to not keep their data in the first place”.

The good news? That kind of privacy is still possible.
You have options. You can use AI on your terms.

Just remember:

Privacy isn’t about hiding. It’s about control.
About choosing what you share—and with whom.

And right now, the smartest choice might be to share a whole lot less.

 

Yours in privacy,
Naomi

Naomi Brockwell is a privacy advocate and professional speaker, MC, interviewer, producer and podcaster, specialising in blockchain, cryptocurrency and economics. She runs the NBTV channel on Rumble.


IP addresses are used in Sweden to track unemployed people

Published 1 September 2025
– By Editorial Staff
2 minute read

The Swedish Public Employment Service (Arbetsförmedlingen) has begun tracking the IP addresses of unemployed individuals to verify that they are actually located in Sweden. Approximately 4,000 people who logged in from foreign IP addresses now risk losing their benefits.

To be eligible for unemployment insurance (A-kassa) and other forms of compensation linked to being unemployed, certain requirements must be met. One of these requirements is that individuals must be located in Sweden, in order to be available in case a job opportunity arises.

When job seekers log into the Swedish Public Employment Service’s website, their IP address is now checked. If a person logs in from a foreign IP address, this suggests that they are located in another country.
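The agency has not published how its check works, but the general idea is easy to sketch. The example below is purely illustrative, not Arbetsförmedlingen’s actual system; the geoip2 library and the GeoLite2 country database file are assumptions made for the sake of the example:

```python
# Illustrative sketch of an IP-based residency check (not the agency's real system).
# Assumes: pip install geoip2, plus a downloaded GeoLite2-Country.mmdb database file.
import geoip2.database
import geoip2.errors

reader = geoip2.database.Reader("GeoLite2-Country.mmdb")

def logged_in_from_abroad(ip_address: str, home_country: str = "SE") -> bool:
    """Return True if the login IP geolocates outside the expected country."""
    try:
        record = reader.country(ip_address)
    except geoip2.errors.AddressNotFoundError:
        return False  # unknown address: don't flag anyone on missing data
    return record.country.iso_code != home_country
```

A naive check like this would also misfire on anyone in Sweden routing traffic through a foreign VPN, which is presumably part of why the agency says it also checks for VPN use.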

The Swedish Public Employment Service has been tracking job seekers since the end of June, and the agency has already identified approximately 4,000 people who appear to have logged in from a country other than Sweden.

“It’s a way to counteract the risk of incorrect payments. We’re talking about people who are abroad even though they should be in Sweden looking for work or participating in labor market policy programs”, says Andreas Malmgren, operations controller at the Swedish Public Employment Service, to the Bonnier publication DN.

None of these individuals have been contacted yet, but the agency plans to make contact during September. These people risk having their benefits withdrawn.

The agency has also developed a special tool to check whether job seekers are using VPN services, so that no one ends up being flagged by mistake.

Wifi signals can identify people with 95 percent accuracy

Mass surveillance

Published 21 August 2025
– By Editorial Staff
2 minute read

Italian researchers have developed a technique that can track and identify individuals by analyzing how wifi signals reflect off human bodies. The method works even when people change clothes and can be used for surveillance.

Researchers at La Sapienza University in Rome have developed a new method for identifying and tracking people using wifi signals. The technique, which the researchers call “WhoFi”, can recognize people with an accuracy rate of up to 95 percent, reports Sweclockers.

The method is based on the fact that wifi signals reflect and refract in different ways when they hit human bodies. By analyzing these reflection patterns using machine learning and artificial neural networks, researchers can create unique “fingerprints” for each individual.
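The paper describes its own architecture, which is not reproduced here. Purely as an illustration of the general approach, mapping wifi channel measurements to a reusable “signature” embedding and re-identifying people by similarity, a sketch might look like this (shapes and thresholds are made up):

```python
# Illustrative sketch of signal-based re-identification (not the WhoFi architecture).
# Assumes CSI (channel state information) recordings shaped (time_steps, subcarriers).
import torch
import torch.nn as nn
import torch.nn.functional as F

class CsiEncoder(nn.Module):
    """Maps a wifi CSI recording to a fixed-size 'signature' embedding."""
    def __init__(self, subcarriers: int = 64, embedding_dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(subcarriers, 256),
            nn.ReLU(),
            nn.Linear(256, embedding_dim),
        )

    def forward(self, csi: torch.Tensor) -> torch.Tensor:
        # Encode each time step, average over time, then L2-normalise.
        return F.normalize(self.net(csi).mean(dim=0), dim=0)

def same_person(sig_a: torch.Tensor, sig_b: torch.Tensor, threshold: float = 0.9) -> bool:
    # Re-identification by cosine similarity between two signatures.
    return torch.dot(sig_a, sig_b).item() > threshold
```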

Works despite clothing changes

Experiments show that these digital fingerprints are stable enough to identify people even when they change clothes or carry backpacks. The average recognition rate is 88 percent, which researchers say is comparable to other automatic identification methods.

The research results were published in mid-July and describe how the technology could be used in surveillance contexts. According to the researchers, WhoFi can solve the problem of re-identifying people who were first observed via a surveillance camera in one location and then need to be found in footage from cameras in other locations.

Can be used for surveillance

The technology opens up new possibilities in security surveillance, but simultaneously raises questions about privacy and personal security. The fact that wifi networks, which are ubiquitous in today’s society, can be used to track people without their knowledge represents a new dimension of digital surveillance.

The researchers present their discovery as a breakthrough in the field of automatic person identification, but do not address the ethical implications that the technology may have for individuals’ privacy.

Danish students build drone that flies and swims

Published 18 August 2025
– By Editorial Staff
2 minute read

Four students at Aalborg University in Denmark have developed a revolutionary drone that seamlessly transitions between air and water. The prototype uses innovative rotor technology that automatically adapts to different environments.

Four students at Aalborg University in Denmark have created something that sounds like science fiction – a drone that can literally fly down into water, swim around and then jump back up into the air to continue flying, reports Tom’s Hardware.

Students Andrei Copaci, Pawel Kowalczyk, Krzysztof Sierocki and Mikolaj Dzwigalo have developed a prototype as their thesis project that demonstrates how future amphibious drones could function. The project has attracted attention from technology media after a demonstration video showed the drone flying over a pool, crashing down into the water, navigating underwater and then taking off into the air again.

Intelligent rotor technology solves the challenge

The secret behind the impressive performance lies in what the team calls a “variable rotor system”. The individual rotor blades can automatically adjust their pitch angle depending on whether the drone is in air or water.

When the drone flies through the air, the rotor blades operate at a higher pitch angle for optimal lift. Underwater, the blade pitch is lowered to reduce drag and improve efficiency during navigation. The system can also reverse thrust to increase maneuverability when the drone moves through tight passages underwater.
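The students have not published their control code, so the following is only a caricature of the logic described above: detect which medium the drone is in, then command a blade pitch suited to it. The pitch values are placeholders, not measurements from the prototype:

```python
# Illustrative sketch of medium-dependent rotor control (placeholder values only).
from dataclasses import dataclass
from enum import Enum

class Medium(Enum):
    AIR = "air"
    WATER = "water"

@dataclass
class RotorCommand:
    pitch_deg: float   # blade pitch angle
    reverse: bool      # reverse thrust for tight underwater manoeuvres

def rotor_command(medium: Medium, tight_passage: bool = False) -> RotorCommand:
    if medium is Medium.AIR:
        # Higher pitch in air for lift
        return RotorCommand(pitch_deg=18.0, reverse=False)
    # Lower pitch underwater to cut drag; optionally reverse thrust to manoeuvre
    return RotorCommand(pitch_deg=6.0, reverse=tight_passage)
```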

Most components in the prototype have been manufactured by the students themselves using 3D printers, since equivalent parts were not available on the market.

Although the project is still in an early concept stage and exists only as a single prototype, it demonstrates the possibilities for future amphibious vehicles. The technology could have applications in everything from rescue operations to environmental monitoring where vehicles need to move both above and below the water surface.

What I learnt at DEFCON

Why hacker culture is essential if we want to win the privacy war.

Published 16 August 2025
– By Naomi Brockwell
6 minute read

DEFCON is the world’s largest hacker conference. Every year, tens of thousands of people gather in Las Vegas to share research, run workshops, compete in capture-the-flag tournaments, and break things for sport. It’s a subculture. A testing ground. A place where some of the best minds in security and privacy come together not just to learn, but to uncover what’s being hidden from the rest of us. It’s where curiosity runs wild.

But to really get DEFCON, you have to understand the people.

What is a hacker?

I love hacker conferences because of the people. Hackers are notoriously seen as dangerous. The stereotype is that they wear black hoodies and Guy Fawkes masks.

But that’s not why they’re dangerous: They’re dangerous because they ask questions and have relentless curiosity.

Hackers have a deep-seated drive to learn how things work, not just at the surface, but down to their core.

They aren’t content with simply using tech. They want to open it up, examine it, and see the hidden gears turning underneath.

A hacker sees a device and doesn’t just ask, “What does it do?”
They ask, “What else could it do?”
“What isn’t it telling me?”
“What’s under the hood, and why does no one want me to look?”

They’re curious enough to pull back curtains others want to remain closed.

They reject blind compliance and test boundaries.
When society says “Do this,” hackers ask “Why?”

They don’t need a rulebook or external approval.
They trust their own instincts and intelligence.
They’re guided by internal principles, not external prescriptions.
They’re not satisfied with the official version. They challenge it.

Because of this, hackers are often at the fringes of society. They’re comfortable with being misunderstood or even vilified. Hackers are unafraid to reveal truths that powerful entities want buried.

But that position outside the mainstream gives them perspective: They see what others miss.

Today, the word “hack” is everywhere:
Hack your productivity.
Hack your workout.
Hack your life.

What it really means is:
Don’t accept the defaults.
Look under the surface.
Find a better way.

That’s what makes hacker culture powerful.
It produces people who will open the box even when they’re told not to.
People who don’t wait for permission to investigate how the tools we use every day are compromising us.

That insistence on curiosity, noncompliance, and pushing past the surface to see what’s buried underneath is exactly what we need in a world built on hidden systems of control.

We should all aspire to be hackers, especially when it comes to confronting power and surveillance.

Everything is computer

Basically every part of our lives runs on computers now.
Your phone. Your car. Your thermostat. Your TV. Your kid’s toys.
And much of this tech has been quietly and invisibly hijacked for surveillance.

Companies and governments both want your data. And neither wants you asking how these data collection systems work.

We’re inside a deeply connected world, built on an opaque infrastructure that is extracting behavioral data at scale.

You have a right to know what’s happening inside the tech you use every day.
Peeking behind the curtain is not a crime. It’s a public service.

In today’s world, the hacker mindset is not just useful. It’s necessary.

Hacker culture in a surveillance world

People who ask questions are a nightmare for those who want to keep you in the dark.
They know how to dig.
They don’t take surveillance claims at face value.
They know how to verify what data is actually being collected.
They don’t trust boilerplate privacy policies or vague legalese.
They reverse-engineer SDKs.
They monitor network traffic.
They intercept outgoing requests and inspect payloads.

And they don’t ask for permission.

That’s what makes hacker culture so important. If we want any hope of reclaiming privacy, we need people with the skills and the willingness to pull apart the systems we’re told not to question.
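To make that concrete: “intercepting outgoing requests and inspecting payloads” often means nothing more exotic than routing an app’s traffic through a local proxy. Here is a minimal sketch of a mitmproxy addon that prints every outgoing request it sees; the script name and the 500-character truncation are arbitrary choices for the example:

```python
# Minimal mitmproxy addon: log every outgoing request and a slice of its payload.
# Run with: mitmdump -s log_payloads.py  (and point the app or device at the proxy)
from mitmproxy import http

class PayloadLogger:
    def request(self, flow: http.HTTPFlow) -> None:
        # Where is this app phoning home, and what is it sending?
        print(flow.request.method, flow.request.pretty_url)
        body = flow.request.get_text()
        if body:
            print("  payload:", body[:500])

addons = [PayloadLogger()]
```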

On top of that, governments and corporations routinely invoke outdated and overbroad legislation like the Computer Fraud and Abuse Act (CFAA) against public-interest researchers who investigate tech. Not because those researchers cause harm, but because they reveal things that others want kept hidden.

Laws like this pressure people towards compliance, and make them afraid to ask questions. The result is that curiosity feels like a liability, and it becomes harder for the average person to understand how the digital systems around us actually work.

That’s why the hacker mindset matters so much: Because no matter how hard the system pushes back, hackers keep asking questions.

The researchers I met at DEFCON

This year at DEFCON, I met researchers who are doing exactly that.

People uncovering surveillance code embedded in children’s toys.
People doing analysis on facial recognition SDKs.
People testing whether your photo is really deleted after “verification”.
People capturing packets and discovering that the “local only” systems you’re using aren’t local at all, and are sending your data to third parties.
People analyzing “ephemeral” IDs, and finding that your data was being stored and linked back to real identities.

You’ll be hearing from some of them on our channel in the coming months.
Their work is extraordinary, and it’s helping all of us move towards a world of informed consent instead of blind compliance. Without this kind of research, the average person has no way to know what’s happening behind the scenes. We can’t make good decisions about the tech we use if we don’t know what it’s doing.

Make privacy cool again

Making privacy appealing is not just about education.
It’s about making it cool.

Hacker culture has always been at the forefront of turning fringe ideas into mainstream trends. Films like Hackers and The Matrix made hackers a status symbol. Movements like the Crypto Wars (when the government fought Phil Zimmermann over PGP) and the Clipper Chip fights (when it tried to standardize surveillance backdoors across hardware) made cypherpunks and privacy activists aspirational.

Hackers take the things mainstream culture mocks or fears, and make them edgy and cool.

That’s what we need here. We need a cultural transformation, and we need to push back against the shaming language that demands we justify our desire for privacy.

You shouldn’t have to explain why you don’t want to be watched.
You shouldn’t have to defend your decision to protect your communications.

Make privacy a badge of honor.
Make privacy tools a status symbol.
Make the act of encrypting, self-hosting, and masking your identity a signal that says you’re independent, intelligent, and not easily manipulated.

Show that the people who care about privacy are the same people who invent the future.

Most people don’t like being trailblazers, because it’s scary. But if you’re reading this, you’re one of the early adopters, which means you’re already one of the fearless ones.

When you take a stand visibly, you create a quorum and make it safer for others to join in. That’s how movements grow, and we go from being weirdos in the corner to becoming the majority.

If privacy is stigmatized, reclaiming it will take bold, fearless, visible action.
The hacker community is perfectly positioned to lead that charge, and to make it safe for the rest of the world to follow.

When you show up and say, “I care about this,” you give others permission to care too.

Privacy may be on the fringe right now, but that’s where all great movements begin.

Final Thoughts

What I learnt at DEFCON is that curiosity is powerful.
Refusal to comply is powerful.
The simple act of asking questions can be revolutionary.

There are systems all around us extracting data and consolidating control, and most people don’t know how to fight that, and are too scared to try.

Hacker culture is the secret sauce.

Let’s apply this drive to the systems of surveillance.
Let’s investigate the tools we’ve been told to trust.
Let’s explain what’s actually happening.
Let’s give people the knowledge they need to make better choices.

Let’s build a world where curiosity isn’t criminalized but celebrated.

DEFCON reminded me that we don’t need to wait for permission to start doing that.

We can just do things.

So let’s start now.

 

Yours in privacy,
Naomi

Naomi Brockwell is a privacy advocate and professional speaker, MC, interviewer, producer and podcaster, specialising in blockchain, cryptocurrency and economics. She runs the NBTV channel on Rumble.
