OpenAI now keeps your ChatGPT logs… even if you delete them

Why trusting companies isn’t enough—and what you can do instead.

Published June 14, 2025 – By Naomi Brockwell

This week, we learned something disturbing: OpenAI is now being forced to retain all ChatGPT logs, even the ones users deliberately delete.

That includes:

  • Manually deleted conversations
  • “Temporary Chat” sessions that were never supposed to persist
  • Confidential business data passed through OpenAI’s API

The reason? A court order.

The New York Times and other media companies are suing OpenAI over alleged copyright infringement. As part of the lawsuit, they speculated that people might be using ChatGPT to bypass paywalls, and deleting their chats to cover their tracks. Based on that speculation alone, a judge issued a sweeping preservation order forcing OpenAI to retain every output log going forward.

Even OpenAI doesn’t know how long they’ll be required to keep this data.

This is bigger than just one court case

Let’s be clear: OpenAI is not a privacy tool. They collect a vast amount of user data, and everything you type is tied to your real-world identity. (They don’t even allow VoIP numbers at signup, only real mobile numbers.) ChatGPT is a fantastic tool for productivity, coding, research, and brainstorming. But it is not a place to store your secrets.

That said, credit where it’s due: OpenAI is pushing back. They’ve challenged the court order, arguing it undermines user privacy, violates global norms, and forces them to retain sensitive data users explicitly asked to delete.

And they’re right to fight it.

If a company promises, "We won’t keep this", and users act on that promise, they should be able to trust it. When that promise is quietly overridden by a legal mandate—and users only find out months later—it destroys the trust we rely on to function in a digital society.

Why this should scare you

This isn’t about sneaky opt-ins or buried fine print. It’s about people making deliberate choices to delete sensitive data—and those deletions being ignored.

That’s the real problem: the nullification of your right to delete.

Private thoughts. Business strategy. Health questions. Intimate disclosures. These are now being held under legal lock, despite clear user intent for them to be erased.

When a platform offers a “Delete” button or advertises "Temporary Chat", the public expectation is clear: that information will not persist.

But in a system built for compliance, not consent, those expectations don’t matter.

I wish this weren’t the case

I want to live in a world where:

  • You can go to the doctor and trust that your medical records won’t be subpoenaed
  • You can talk to a lawyer without fearing your conversations could become public
  • Companies that want to protect your privacy aren’t forced to become surveillance warehouses

But we don’t live in that world.

We live in a world where:

  • Prosecutors can compel companies to hand over privileged legal communications (just ask Roger Ver’s lawyers)
  • Government entities can override privacy policies, without user consent or notification
  • “Delete” no longer means delete

This isn’t privacy. It’s panopticon compliance.

So what can you do?

You can’t change the court order.
But you can stop feeding the machine.

Here’s how to protect yourself:

1. Be careful what you share

When you’re logged in to centralized tools like ChatGPT, Claude, or Perplexity, everything you do is stored and linked to a single identity across sessions. That makes your full history a treasure trove of data.

You can still use these tools for light, non-sensitive tasks, but be careful not to share:

  • Sensitive information
  • Legal or business strategies
  • Financial details
  • Anything that could harm you if leaked

These tools are great for brainstorming and productivity, but not for contracts, confessions, or client files.

2. Use privacy-respecting platforms (with caution)

If you want to use AI tools with stronger privacy protections, here are two promising options:
(there are many more, let us know in the comments about your favorites)

Brave’s Leo

  • Uses reverse proxies to strip IP addresses
  • Promises zero logging of queries
  • Supports local model integration so your data never leaves your device
  • Still requires trust in Brave’s infrastructure

Venice.ai

  • No account required
  • Strips IP addresses and doesn’t link sessions together
  • Uses a decentralized GPU marketplace to process your queries
  • Important caveat: Venice is just a frontend—the compute providers running your prompts can see what you input. Venice can’t enforce logging policies on backend providers.
  • Because it's decentralized, at least no single provider can build a profile of you across sessions

In short: I trust Brave with more data because privacy is central to their mission. I also trust Venice’s promise not to log data, but I’m hesitant to assume faceless GPU providers will honor the same no-logging policies. The mitigating factor is Venice’s decentralized model: even those processing your queries can’t see the full picture, which is a meaningful safeguard in itself. So both options are good for different purposes.

3. Run AI locally for maximum privacy

This is the gold standard.

When you run an AI model locally, your data never leaves your machine. No cloud. No logs.

Tools like Ollama, paired with OpenWebUI, let you easily run powerful open-source models on your own device.

We published a complete guide for getting started—even if you’re not technical.
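If you want a taste of how simple this can be, here’s a minimal sketch that sends a prompt to a locally running Ollama server over its default local API (this assumes you’ve installed Ollama and already pulled a model such as llama3; the model name and prompt are just examples):

    # A sketch: send a prompt to a local Ollama server.
    # Assumes Ollama is running (default port 11434) and you've pulled
    # a model, e.g. with "ollama pull llama3". The request goes to
    # localhost only, so nothing leaves your machine.
    import json
    import urllib.request

    def ask_local_model(prompt, model="llama3"):
        payload = json.dumps({
            "model": model,
            "prompt": prompt,
            "stream": False,  # one complete reply instead of a token stream
        }).encode("utf-8")
        req = urllib.request.Request(
            "http://localhost:11434/api/generate",
            data=payload,
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            return json.loads(resp.read())["response"]

    print(ask_local_model("Draft a polite decline to a meeting invite."))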

The real battle: Your right to privacy

This isn’t just about one lawsuit or one company.

It’s about whether privacy means anything in the digital age.

AI tools are rapidly becoming our therapists, doctors, legal advisors, and confidants. They know what we eat, what we’re worried about, what we dream of, and what we fear. That kind of relationship demands confidentiality.

And yet, here we are, watching that expectation collapse under the weight of compliance.

If courts can force companies to preserve deleted chats indefinitely, then deletion becomes a lie. Consent becomes meaningless. And companies become surveillance hubs for whoever yells loudest in court.

The Fourth Amendment was supposed to stop this. It says a warrant is required before private data can be seized. But courts are now sidestepping that by ordering companies to keep everything in advance—just in case.

We should be fighting to reclaim that right. Not normalizing its erosion.

Final Thoughts

We are in a moment of profound transition.

AI is rapidly becoming integrated into our daily lives—not just as a search tool, but as a confidant, advisor, and assistant. That makes the stakes for privacy higher than ever.

If we want a future where privacy survives, we can’t just rely on the courts to protect us. We have to be deliberate about how we engage with technology—and push for tools that respect us by design.

As Erik Voorhees put it: "The only way to respect user privacy is to not keep their data in the first place".

The good news? That kind of privacy is still possible.
You have options. You can use AI on your terms.

Just remember:

Privacy isn’t about hiding. It’s about control.
About choosing what you share—and with whom.

And right now, the smartest choice might be to share a whole lot less.


Yours in privacy,
Naomi

Naomi Brockwell is a privacy advocate and professional speaker, MC, interviewer, producer, and podcaster, specialising in blockchain, cryptocurrency and economics. She runs the NBTV channel on Rumble.


Watch as Russia’s AI robot falls on stage

Published November 13, 2025 – By Editorial staff

Russia's first humanoid AI robot fell on stage during its official launch in Moscow this week. Staff rushed forward to shield the damaged robot while attempting to fix the malfunction.

What was meant to be a grand launch of Russia’s venture into humanoid robotics ended in embarrassment. To the strains of the Rocky theme, the robot AIdol was led onto the stage by two staff members at a technology event in the Russian capital.

But the presentation ended in chaos when the robot lost its balance and crashed to the ground. Several parts came loose and staff hurried to pull the machine away and hide it behind a screen.

Behind the project is the Russian robotics company Idol, led by Vladimir Vitukhin. According to the company, AIdol is an advanced robot built mostly from domestic components.

Vitukhin explained the fall as a calibration problem and emphasized that the robot is still in the testing phase.

— This is real-time learning, when a good mistake turns into knowledge and a bad mistake turns into experience, Vitukhin said, according to Newsweek.

Despite the company's attempts to downplay the incident, criticism has been massive on Russian tech forums and social media. Many question the decision to showcase an obviously unfinished prototype.

AIdol is powered by a 48-volt battery that provides up to six hours of operation. The machine is equipped with 19 servo motors and a silicone skin designed to recreate human facial expressions.

— The robot can smile, think, and be surprised – just like a person, Vitukhin said.

According to reports, AIdol consists of 77 percent Russian-produced components. After the fall, the developers withdrew the machine while engineers examine its balance systems.

Italian political consultant targeted by spyware program

Totalitarianism

Published November 11, 2025 – By Editorial staff
Francesco Nicodemo.

An Italian political advisor who worked for center-left parties has gone public about being hacked through an advanced Israeli-developed spyware program. Francesco Nicodemo is the latest in a growing list of victims in a spyware scandal that is shaking Italy and raising questions about how intelligence services use surveillance technology.

Francesco Nicodemo, who works as a consultant for left-leaning politicians in Italy, waited ten months before publicly disclosing that he had been targeted by the Paragon spyware program. On Thursday, he chose to break his silence in a post on Facebook.

Nicodemo explained that he had previously not wanted to publicize his case because he "didn't want to be used for political propaganda," but that "the time has now come".

"It's time to ask a very simple question: Why? Why me? How is it possible that such a sophisticated and complex tool was used to spy on a private citizen, as if he were a drug dealer or a subversive threat to the country?", Nicodemo wrote. "I have nothing more to say. More people must speak out. Others must explain what happened".

Extensive scandal grows

Nicodemo's revelation once again expands the scope of the ongoing spyware scandal in Italy. Among those affected are several journalists, migration activists, prominent business leaders, and now a political consultant with a history of working for the center-left party Partito Democratico and its politicians.

The online publication Fanpage first reported that Nicodemo was among the people who received a notification from WhatsApp in January informing them that they had been targeted by the spyware program.

Questions about usage

Governments and spyware manufacturers have long claimed that their surveillance products are used against serious criminals and terrorists, but recent cases show that this is not always the case.

— The Italian government has given some spyware victims clarity and explained their cases. But others remain disturbingly unclear, says John Scott-Railton, a senior researcher at The Citizen Lab who has investigated spyware companies and their abuses for years.

— None of this looks good for Paragon, or for Italy. That’s why clarity from the Italian government is so essential. I believe that if they wanted to, Paragon could give everyone much more clarity about what’s going on. Until they do, these cases will remain a burden on their shoulders, adds Scott-Railton, who confirmed that Nicodemo received the notification from WhatsApp.

Intelligence services' involvement

It is still unclear which of Paragon's customers hacked Nicodemo, but an Italian parliamentary committee confirmed in June that some of the victims in Italy were hacked by Italian intelligence services, which report to Prime Minister Giorgia Meloni's government.

In February, following revelations about the first victims in Italy, Paragon severed ties with its government customers in the country, specifically the intelligence services AISE and AISI.

The parliamentary committee COPASIR later concluded in June that some of the publicly identified Paragon victims, namely the migration activists, had been legally hacked by Italian intelligence services. However, the committee found no evidence that Francesco Cancellato, editor of the news site Fanpage.it which had investigated the youth organization of Meloni's governing party, had been hacked by the intelligence services.

Paragon, which has an active contract with the U.S. Immigration and Customs Enforcement agency, states that the U.S. government is one of its customers.

FACTS: Paragon

Paragon Solutions is an Israeli cybersecurity company that develops advanced spyware for intelligence services and law enforcement agencies. The software can be used to monitor smartphones and other digital devices.

The company was acquired by American private equity giant AE Industrial and has since been merged with cybersecurity firm REDLattice. Paragon's clients include the US government, including the Immigration and Customs Enforcement (ICE) agency.

In February 2025, Paragon terminated its contracts with the Italian intelligence services AISE and AISI after several Italian citizens, including journalists and activists, were identified as victims of the company's spyware.

Paragon is marketed as a tool against serious crime and terrorism, but its use in Italy has raised questions about whether the spyware is also being used against political opponents and journalists.

Email was never built for privacy

Mass surveillance

How Proton makes email privacy simple.

Published November 8, 2025 – By Naomi Brockwell

Email was never built for privacy. It’s closer to a digital postcard than a sealed letter: it bounces through and sits on servers you don’t control, and mainstream providers like Gmail scan and analyze everything inside.

Email isn’t going anywhere; it’s baked into how the digital world communicates. But luckily there are ways to make your emails more private. One tool you can use is PGP, which stands for “Pretty Good Privacy”.

PGP is one of the oldest and most powerful tools for email privacy. It takes your message and locks it with the recipient’s public key, so only they can unlock it with their private key. That means even if someone intercepts the email, whether it’s a hacker, your ISP, or a government agency, they see only scrambled text.
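To make that mechanism concrete, here’s a minimal sketch in Python using the third-party pgpy library (an illustration only, not Proton’s implementation; the filename and message are placeholders, and it assumes the recipient has already given you their public key):

    # A sketch of public-key encryption with the third-party "pgpy" library.
    # Assumes the recipient sent you their public key as "recipient_pub.asc"
    # (the filename and message below are placeholders).
    import pgpy

    pub_key, _ = pgpy.PGPKey.from_file("recipient_pub.asc")  # load their public key
    message = pgpy.PGPMessage.new("Meet me at noon.")        # wrap the plaintext
    encrypted = pub_key.encrypt(message)                     # lock it with their key

    # The armored ciphertext is safe to paste into any email body.
    print(str(encrypted))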

Unfortunately, PGP is notoriously complicated to use by hand. Normally, you’d have to install command-line tools, generate keys manually, and run cryptic commands just to send an encrypted email.

But Proton Mail makes all of that easy, and builds PGP right into your inbox.

How Proton makes PGP simple

Proton is a great, privacy-focused email provider (and no they’re not sponsoring this newsletter, they’re simply an email provider that I like to use).

If you email someone within the Proton ecosystem (i.e., send an email from one Proton user to another), your email is automatically end-to-end encrypted using PGP.

But what if you email someone outside of the Proton ecosystem?

Here’s where it would usually get tricky.

First, you’d need to install a PGP client, which is a program that lets you generate and manage your encryption keys.

Then you’d walk through command-line prompts: choosing the key type, size, and expiration, associating the key with the email address you want to use, and finally exporting your public key. It’s complicated.
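As a rough illustration of that manual flow, here’s what generating and exporting a key pair looks like with the third-party pgpy library (a sketch; the name and address are placeholders):

    # A sketch of manual key generation with the third-party "pgpy" library.
    # The name and email address below are placeholders.
    import pgpy
    from pgpy.constants import (PubKeyAlgorithm, KeyFlags, HashAlgorithm,
                                SymmetricKeyAlgorithm, CompressionAlgorithm)

    # Choose the key type and size...
    key = pgpy.PGPKey.new(PubKeyAlgorithm.RSAEncryptOrSign, 4096)

    # ...associate it with a name and email address...
    uid = pgpy.PGPUID.new("Alice Example", email="alice@example.org")
    key.add_uid(uid,
                usage={KeyFlags.Sign, KeyFlags.EncryptCommunications},
                hashes=[HashAlgorithm.SHA256],
                ciphers=[SymmetricKeyAlgorithm.AES256],
                compression=[CompressionAlgorithm.ZLIB])

    # ...then export the public half to share with your correspondents.
    print(str(key.pubkey))  # ASCII-armored public key block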

But if you use Proton, they make using PGP super easy.

Let’s go through how to use it.

Automatic search for public PGP key

First of all, when you type an email address into the “To” field in Proton Mail, it automatically searches for a public PGP key associated with that address. Proton checks its own network, your contact list, and Web Key Directory (WKD) on the associated email domain.

WKD is a small web standard that lets someone publish their public key on their own domain in a way that mail apps can easily find. If Proton finds a key for an address at the associated domain, it will automatically encrypt your message with it.
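For the curious, here’s a sketch of how a mail app derives that lookup address under WKD’s “direct” method (simplified; real clients may also try an “advanced” method on an openpgpkey subdomain, and the email address below is a placeholder):

    # A sketch of WKD's "direct method" lookup URL (address is a placeholder).
    # The local part is lowercased, SHA-1 hashed, then z-base-32 encoded.
    import hashlib

    ZBASE32 = "ybndrfg8ejkmcpqxot1uwisza345h769"

    def zbase32(data):
        # SHA-1 digests are 160 bits, which split evenly into 5-bit chunks.
        bits = "".join(f"{byte:08b}" for byte in data)
        return "".join(ZBASE32[int(bits[i:i + 5], 2)]
                       for i in range(0, len(bits), 5))

    def wkd_direct_url(email):
        local, domain = email.split("@")
        digest = hashlib.sha1(local.lower().encode()).digest()
        return (f"https://{domain}/.well-known/openpgpkey/hu/"
                f"{zbase32(digest)}?l={local}")

    print(wkd_direct_url("alice@example.org"))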

If a key is found, you’ll see a green lock next to the recipient in the “To” field, indicating the message will be encrypted.

You don’t need to copy, paste, or import anything. It just works.

Great, your email has been automatically encrypted using PGP, and only the recipient of the email will be able to use their private key to decrypt it.

Manually uploading someone’s PGP key

What if Proton doesn’t automatically find someone’s PGP key? You can hunt down the key manually and import it. Some people will have their key available on their website, either in plain text, or as a .asc file. Proton allows you to save this PGP key in your contacts.

To add one manually, first type their email address in the “To” field.

Then right-click on that address and select “View contact details”.

Then click the settings wheel to go to email settings, and select “Show advanced PGP settings”.

Under “Public keys”, select “Upload” and upload their public key in .asc format.

Once the key is uploaded, the “encrypt emails” toggle will automatically switch on, and all future emails to that contact will automatically be protected with PGP. You can turn that off at any time, and also remove or replace the public key.

How do others secure emails to you using PGP?

Super! So you’ve sent an encrypted email to someone using their PGP key. What if they want to send you an email back? Will that reply be automatically end-to-end encrypted (E2EE) using PGP? Not necessarily.

In order for someone to send you an end-to-end encrypted email, they need your public PGP key.

Download your public-private key pair inside Proton

Proton automatically generates a public-private key pair for each address that you have configured inside Proton Mail, and manages encryption inside its own network.

If you want people outside Proton to be able to encrypt messages to you, the first step is to export your public key from your Proton account so you can share it with them.

To do this:

  • Go to Settings
  • Click “All settings”
  • Select “Encryption and keys”
  • Under “Email encryption keys” you’ll see a dropdown menu of all the email addresses associated with your Proton account. Select the address whose public key you want to export.
  • Under the “Action” column, click “Export public key”

It will download as an .asc file, and your browser will ask where you want to save it.

Normally a PGP key is raw binary data that only your computer can read. The .asc file (known as “ASCII armor”) wraps that key in readable text characters, so it ends up in a format that looks something like this:
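    -----BEGIN PGP PUBLIC KEY BLOCK-----

    [several dozen lines of Base64-encoded key data]

    -----END PGP PUBLIC KEY BLOCK-----

(The Base64 body is shortened here for illustration; a real export contains the full key.)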

Sharing your public key

Now that you’ve downloaded the public key, how do you share it with people so that they can contact you privately? There are several ways.

For @proton.me and @protonmail.com addresses, Proton publishes your public key in its WKD automatically. You don’t have to do anything.

For custom domains configured in Proton Mail, Proton doesn’t host WKD for you. You can publish WKD yourself on your own domain by serving it at a special path on your website. Or you can delegate WKD to a managed service. Or if you don’t want to use WKD at all, you can upload your key to a public keyserver like keys.openpgp.org, which provides another way for mail apps to discover it.

We’re not going to cover those setups in this article. Instead here are simpler ways to share your public key:

1) You can send people your .asc file directly if you want them to be able to encrypt emails to you (be sure to let them know which email address is associated with this key), or you can host this .asc file on your website for people to download.

2) You can open the .asc file in a text editor and copy and paste the key, and then send people this text, or post the text on your website. This is what I have done on my own site.

That way, if anyone wants to send me an email more privately, they can do so.

But Proton makes it even easier to share your PGP key: you can opt to automatically attach your public key to every email.

To turn this on:

  1. Go to Settings → Encryption & keys → External PGP settings
  2. Enable:
    • Sign external messages
    • Attach public key

Once this is on, every email you send will automatically include your public key as a small .asc attachment.

This means anyone using a PGP-capable mail client (like Thunderbird, Mailvelope, etc.) can import it immediately, with no manual steps required.

Password-protected emails

Proton also lets you send password-protected emails, so even if the other person doesn’t use PGP you can still keep the contents private. This isn’t PGP: Proton encrypts the message and attachments in your browser, and the recipient gets a link to a secure viewing page. They enter a password you share separately to open it. Their provider (like Gmail) only sees a notification email with a link, not the message itself. You can add a password hint, and the message expires after a set time (28 days by default).

The bottom line

Email privacy doesn’t have to be painful. Proton hides the complexity by adding a password option, or automating a lot of the PGP process for you: it automatically looks up recipients’ keys, encrypts your messages, and makes your key easy for others to use when they reply.

As Phil Zimmermann, the creator of PGP, explained in Why I Wrote PGP:

“PGP empowers people to take their privacy into their own hands. There has been a growing social need for it. That’s why I wrote it".

We’re honored to have Mr. Zimmermann on our board of advisors at Ludlow Institute.

Pioneers like him fought hard so we could protect our privacy. It’s on us to use the tools they gave us.


Yours in privacy,
Naomi

Naomi Brockwell is a privacy advocate and professional speaker, MC, interviewer, producer, and podcaster, specialising in blockchain, cryptocurrency and economics. She runs the NBTV channel on Rumble.

Swedish police secretly using Palantir’s surveillance system for years

Mass surveillance

Published November 4, 2025 – By Editorial staff
Palantir Technologies headquarters in Silicon Valley.

The Swedish Police Authority has for at least five years been using an AI-based analysis tool from the notorious American security company Palantir.

The program, which has been specially adapted for Swedish conditions, can within seconds compile comprehensive profiles of individuals by combining data from various registers.

Behind the system stands the American tech company Palantir, which is internationally controversial and has been accused of enabling mass surveillance. This summer, the company was identified in a UN report as complicit in genocide in Gaza.

The Swedish version of Palantir's Gotham platform is called Acus and uses artificial intelligence to compile, analyze and visualize large amounts of information. According to an investigation by the left-wing newspaper Dagens ETC, investigators using the system can quickly obtain detailed personal profiles that combine data from surveillance and criminal registers with information from Bank-id (Sweden's national digital identification system), mobile operators and social media.

A former analyst employed by the police, who chooses to remain anonymous, describes to the newspaper how the system was surrounded by great secrecy:

— There was a lot of hush-hush around that program.

Rejection of document requests

When the newspaper requested information about the system and how it is used, the request was denied. The Swedish Police Authority cited confidentiality, stating that it can neither "confirm nor deny relationships with Palantir" due to "danger to national security".

This is not the first time Palantir's tools have been used in Swedish law enforcement. In the high-profile Operation Trojan Shield, the FBI, with support from Palantir's technology, managed to infiltrate and intercept the encrypted messaging app Anom.

The operation led to the arrest of a large number of people connected to serious crime, both in Sweden and internationally. The FBI called the operation "a shining example of innovative law enforcement".

But the method has also received criticism. Attorney Johan Grahn, who has represented defendants in several Anom-related cases, is critical of the approach.

— In these cases, it has been indiscriminate mass surveillance, he states.

Mapping dissidents

Palantir has long sparked debate over its contracts and methods. The company works with both American agencies and foreign security services.

In the United States, the surveillance company's systems are used to map undocumented immigrants. In the United Kingdom, British police have been criticized for using the company's technology to build registers of citizens' sex lives, political views, religious affiliation, ethnicity and union involvement – information that according to observers violates fundamental privacy principles.

This summer, a UN report also identified Palantir as co-responsible for acts of genocide in Gaza, after the company's analysis tools were allegedly used in attacks where Palestinian civilians were killed.

How extensive the Swedish police's use of the system is, and what legal frameworks govern the handling of Swedish citizens' personal data in the platform, remains unclear as long as the Swedish Police Authority chooses to keep the information classified.