OpenAI now keeps your ChatGPT logs… Even if you delete them

Why trusting companies isn’t enough—and what you can do instead.

Published 14 June 2025
– By Naomi Brockwell
5 minute read

This week, we learned something disturbing: OpenAI is now being forced to retain all ChatGPT logs, even the ones users deliberately delete.

That includes:

  • Manually deleted conversations
  • “Temporary Chat” sessions that were never supposed to persist
  • Confidential business data passed through OpenAI’s API

The reason? A court order.

The New York Times and other media companies are suing OpenAI over alleged copyright infringement. As part of the lawsuit, they speculated that people might be using ChatGPT to bypass paywalls, and deleting their chats to cover their tracks. Based on that speculation alone, a judge issued a sweeping preservation order forcing OpenAI to retain every output log going forward.

Even OpenAI doesn’t know how long they’ll be required to keep this data.

This is bigger than just one court case

Let’s be clear: OpenAI is not a privacy tool. They collect a vast amount of user data, and everything you type is tied to your real-world identity. (They don’t even allow VoIP numbers at signup, only real mobile numbers.) ChatGPT is a fantastic tool for productivity, coding, research, and brainstorming. But it is not a place to store your secrets.

That said, credit where it’s due: OpenAI is pushing back. They’ve challenged the court order, arguing it undermines user privacy, violates global norms, and forces them to retain sensitive data users explicitly asked to delete.

And they’re right to fight it.

If a company promises, “We won’t keep this”, and users act on that promise, they should be able to trust it. When that promise is quietly overridden by a legal mandate—and users only find out months later—it destroys the trust we rely on to function in a digital society.

Why this should scare you

This isn’t about sneaky opt-ins or buried fine print. It’s about people making deliberate choices to delete sensitive data—and those deletions being ignored.

That’s the real problem: the nullification of your right to delete.

Private thoughts. Business strategy. Health questions. Intimate disclosures. These are now being held under legal lock, despite clear user intent for them to be erased.

When a platform offers a “Delete” button or advertises “Temporary Chat”, the public expectation is clear: that information will not persist.

But in a system built for compliance, not consent, those expectations don’t matter.

I wish this weren’t the case

I want to live in a world where:

  • You can go to the doctor and trust that your medical records won’t be subpoenaed
  • You can talk to a lawyer without fearing your conversations could become public
  • Companies that want to protect your privacy aren’t forced to become surveillance warehouses

But we don’t live in that world.

We live in a world where:

  • Prosecutors can compel companies to hand over privileged legal communications (just ask Roger Ver’s lawyers)
  • Government entities can override privacy policies, without user consent or notification
  • “Delete” no longer means delete

This isn’t privacy. It’s panopticon compliance.

So what can you do?

You can’t change the court order.
But you can stop feeding the machine.

Here’s how to protect yourself:

1. Be careful what you share

When you’re logged in to centralized tools like ChatGPT, Claude, or Perplexity, your activity is stored and linked to a single identity across sessions. That makes your full history a treasure trove of data.

You can still use these tools for light, non-sensitive tasks, but be careful not to share:

  • Sensitive information
  • Legal or business strategies
  • Financial details
  • Anything that could harm you if leaked

These tools are great for brainstorming and productivity, but not for contracts, confessions, or client files.

2. Use privacy-respecting platforms (with caution)

If you want to use AI tools with stronger privacy protections, here are two promising options:
(there are many more, let us know in the comments about your favorites)

Brave’s Leo

  • Uses reverse proxies to strip IP addresses
  • Promises zero logging of queries
  • Supports local model integration so your data never leaves your device
  • Still requires trust in Brave’s infrastructure

Venice.ai

  • No account required
  • Strips IP addresses and doesn’t link sessions together
  • Uses a decentralized GPU marketplace to process your queries
  • Important caveat: Venice is just a frontend—the compute providers running your prompts can see what you input. Venice can’t enforce logging policies on backend providers.
  • Because it’s decentralized, at least no single provider can build a profile of you across sessions

In short: I trust Brave with more data because privacy is central to their mission. I also trust Venice’s promise not to log data, but I’m hesitant to trust faceless GPU providers to honor the same no-logging policy. That said, Venice’s decentralized model means no single provider ever sees the full picture, which is a meaningful safeguard in itself. Both options are good for different purposes.

3. Run AI locally for maximum privacy

This is the gold standard.

When you run an AI model locally, your data never leaves your machine. No cloud. No logs.

Tools like Ollama, paired with OpenWebUI, let you easily run powerful open-source models on your own device.
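For example, once Ollama is installed and running, you can query a model from a few lines of Python without any data leaving your machine. The sketch below is purely illustrative: it uses the common requests library, assumes Ollama’s default local endpoint (http://localhost:11434), and assumes a model named “llama3” has already been pulled; swap in whichever model you actually use.

  # Minimal sketch: query a locally running Ollama instance so prompts never
  # leave your machine. Assumes the Ollama daemon is running on its default
  # port and that the "llama3" model has been pulled beforehand.
  import requests

  OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

  def ask_local_model(prompt: str, model: str = "llama3") -> str:
      """Send a prompt to the local model and return its full reply."""
      response = requests.post(
          OLLAMA_URL,
          json={"model": model, "prompt": prompt, "stream": False},
          timeout=120,
      )
      response.raise_for_status()
      return response.json()["response"]

  if __name__ == "__main__":
      print(ask_local_model("Why are locally run AI models better for privacy?"))

Everything here happens over localhost: the prompt, the model, and the response all stay on your own hardware.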

We published a complete guide for getting started—even if you’re not technical.

The real battle: Your right to privacy

This isn’t just about one lawsuit or one company.

It’s about whether privacy means anything in the digital age.

AI tools are rapidly becoming our therapists, doctors, legal advisors, and confidants. They know what we eat, what we’re worried about, what we dream of, and what we fear. That kind of relationship demands confidentiality.

And yet, here we are, watching that expectation collapse under the weight of compliance.

If courts can force companies to preserve deleted chats indefinitely, then deletion becomes a lie. Consent becomes meaningless. And companies become surveillance hubs for whoever yells loudest in court.

The Fourth Amendment was supposed to stop this. It says a warrant is required before private data can be seized. But courts are now sidestepping that by ordering companies to keep everything in advance—just in case.

We should be fighting to reclaim that right. Not normalizing its erosion.

Final Thoughts

We are in a moment of profound transition.

AI is rapidly becoming integrated into our daily lives—not just as a search tool, but as a confidant, advisor, and assistant. That makes the stakes for privacy higher than ever.

If we want a future where privacy survives, we can’t just rely on the courts to protect us. We have to be deliberate about how we engage with technology—and push for tools that respect us by design.

As Erik Voorhees put it: “The only way to respect user privacy is to not keep their data in the first place”.

The good news? That kind of privacy is still possible.
You have options. You can use AI on your terms.

Just remember:

Privacy isn’t about hiding. It’s about control.
About choosing what you share—and with whom.

And right now, the smartest choice might be to share a whole lot less.

 

Yours in privacy,
Naomi

Naomi Brockwell is a privacy advocate and professional speaker, MC, interviewer, producer and podcaster, specialising in blockchain, cryptocurrency and economics. She runs the NBTV channel on Rumble.


Women’s app hacked – thousands of private images leaked

Published 29 July 2025
– By Editorial Staff
1 minute read

An app that helps women identify problematic men became a target for hackers. Over 70,000 images, including selfies and driver’s licenses, were leaked to 4chan.

Tea, an app that lets women warn each other about “red flags” in the men they date, suffered a major data breach last week. According to 404 Media, hackers from the 4chan forum managed to access 72,000 images from the app’s database, 13,000 of which were selfies and driver’s license photos.

The app was created by software developer Sean Cook, inspired by his mother’s “terrifying” dating experiences. Tea has over four million active users and topped Apple’s App Store last week.

Careless data handling

The company stored sensitive user data on Google’s cloud service Firebase, where the information became accessible to unauthorized parties. Several cybersecurity experts have criticized the company’s methods as “careless”.

“A company should never host users’ private data on a publicly accessible server”, says Grant Ho, professor at the University of Chicago, to The Verge.

Andrew Guthrie Ferguson, a law professor at George Washington University, warns that when “whisper networks” go digital, participants lose control over sensitive information.

“What changes when it’s digital and recoverable and save-able and searchable is you lose control over it”, he says.

Tea has launched an investigation together with external cybersecurity companies.

Vogue faces backlash over use of AI generated model

Published 29 July 2025
– By Editorial Staff
The woman on the left in Vogue magazine does not exist in reality but has instead been created using AI.
2 minute read

Fashion magazine Vogue is using an AI-generated model in a new advertising campaign for clothing brand Guess. This has sparked strong reactions – from both readers and industry professionals – who warn about unrealistic beauty standards.

In the campaign, a blonde woman poses in a summer dress. The fine print reveals that the model was created by AI company Seraphinne Vallora. The backlash has been extensive, with critics arguing that these ideals are unattainable – even for real models.

“Wow! As if the beauty expectations weren’t unrealistic enough, here comes AI to make them impossible”, writes one person on the platform X.

Some readers are so upset about the use of AI models that they are choosing to boycott the magazine because it has “lost its credibility” and are calling the practice “worrying”.

Creates unhealthy beauty standards

Fashion magazines have long been influential in shaping beauty standards, particularly for women. During the 2010s, a backlash grew against the thin “size zero” ideal. More and more publications began featuring models of different sizes within the so-called plus-size trend. Vogue, which has been described as “high fashion’s bible”, was slow to follow suit, leading to criticism. Only after pressure did the magazine begin showing greater diversity on its covers.

The use of AI models now raises concerns about new, inhuman standards, says Vanessa Longley, CEO of the organization Beat, which works against eating disorders.

“If people are exposed to images of unrealistic bodies, it can affect their thoughts about their own body, and poor body image increases the risk of developing an eating disorder”, she tells the BBC.

Former model Sinead Bovell, who wrote an article five years ago about how AI models risk replacing real ones, also criticizes the campaign. She questions how it might affect those working in the fashion industry, but above all believes it risks harming people’s mental health.

“Beauty standards are already being influenced by AI. There are young girls getting plastic surgery to look like a face in a filter – and now we see people who are entirely artificial”, she says.

Vogue told the BBC that the AI model was an advertisement, not an editorial decision, but declined to comment further. Guess has also not commented on the criticism of its advertisement.

Lidl challenges tech giants with own cloud service for European digital freedom

Published 28 July 2025
– By Editorial Staff
German discount retailer Lidl is now launching the cloud service StackIT.
2 minute read

German discount retailer Lidl is taking an unexpected step into the tech world by launching the cloud service StackIT – an attempt to challenge Amazon and Microsoft while strengthening Europe’s digital independence. The venture marks Lidl’s ambition to reduce European dependence on foreign tech companies.

Lidl, primarily known for its grocery stores and operating across the EU, has announced through its parent company Schwarz Group – one of the world’s largest privately owned companies – plans to become a player in the technology sector.

The venture is seen as a way to secure technological sovereignty. Instead of relying on American cloud services like AWS and Azure, the group is choosing to build its own digital infrastructure through subsidiary Schwarz Digits.

The cloud service StackIT is reportedly being developed as a GDPR-compliant alternative – with hopes of attracting European companies with competitive pricing.

The StackIT venture is seen as part of a broader European movement to reduce dependence on American tech giants.

Amazon and Microsoft dominate

Amazon and Microsoft currently dominate the cloud services market with enormous resources, while Schwarz Group’s investments remain at a far smaller scale.

European players today control only about 15 percent of the regional cloud market, according to Synergy Research Group, while Amazon, Microsoft and Google control around 70 percent.

However, Lidl’s unique position as Europe’s largest retailer is something the company hopes can serve as a platform to influence the market.

If StackIT can combine Lidl’s reach with EU initiatives and tools, as well as attract companies seeking GDPR-compliant and cost-effective solutions, the cloud venture could become a catalyst for greater digital freedom within Europe.

The challenge remains enormous, but even symbolic success would send a powerful signal that Europe is serious about its technological independence.

Amazon acquires AI company that records everything you say

Mass surveillance

Published 27 July 2025
– By Editorial Staff
3 minute read

Tech giant Amazon has acquired the AI company Bee, which develops wearable devices that continuously record users’ conversations. The deal signals Amazon’s ambitions to expand within AI-driven hardware beyond its voice-controlled home assistants.

The acquisition was confirmed by Bee founder Maria de Lourdes Zollo in a LinkedIn post, while Amazon told tech site TechCrunch that the deal has not yet been completed. Bee employees have been offered positions within Amazon.

AI wristband that listens constantly

Bee, which raised €6.4 million in venture capital last year, makes a standalone wristband similar to a Fitbit as well as an Apple Watch app. The device costs €46 (approximately $50) plus a monthly subscription of €17 ($18).

The device records everything it hears – unless the user manually turns it off – with the goal of listening to conversations to create reminders and to-do lists. According to the company’s website, they want “everyone to have access to a personal, ambient intelligence that feels less like a tool and more like a trusted companion.”

Bee has previously expressed plans to create a “cloud phone” that mirrors the user’s phone and gives the device access to accounts and notifications, which would enable reminders about events or sending messages.

Competitors struggle in the market

Other companies like Rabbit and Humane AI have tried to create similar AI-driven wearable devices but so far without major success. However, Bee’s device is significantly more affordable than competitors’ – the Humane AI Pin cost €458 – making it more accessible to curious consumers who don’t want to make a large financial investment.

The acquisition marks Amazon’s interest in wearable AI devices, a different direction from the company’s voice-controlled home assistants like Echo speakers. Meanwhile, ChatGPT creator OpenAI is working on its own AI hardware, while Meta is integrating its AI into smart glasses and Apple is rumored to be working on the same thing.

Privacy concerns remain

Products that continuously record the environment carry significant security and privacy risks. Different companies have varying policies for how voice recordings are processed, stored, and used for AI training.

In its current privacy policy, Bee says users can delete their data at any time and that audio recordings are not saved, stored, or used for AI training. However, the app does store data that the AI learns about the user, which is necessary for the assistant function.

Bee has previously indicated plans to only record voices from people who have verbally given consent. The company is also working on a feature that lets users define boundaries – both based on topic and location – that automatically pause the device’s learning. They also plan to build AI processing directly into the device, which generally involves fewer privacy risks than cloud-based data processing.

However, it’s unclear whether these policies will change when Bee is integrated into Amazon. Amazon has previously had mixed results when it comes to handling user data from customers’ devices.

The company has shared video clips from people’s Ring security cameras with law enforcement without the owners’ consent or a court order. Ring also reached a settlement with the Federal Trade Commission in 2023 over allegations that employees and contractors had broad and unrestricted access to customers’ video recordings.
