Saturday, June 14, 2025

Polaris of Enlightenment

OpenAI now keeps your ChatGPT logs… Even if you delete them

Why trusting companies isn’t enough—and what you can do instead.

Published today 7:43
– By Naomi Brockwell
5 minute read

This week, we learned something disturbing: OpenAI is now being forced to retain all ChatGPT logs, even the ones users deliberately delete.

That includes:

  • Manually deleted conversations
  • “Temporary Chat” sessions that were never supposed to persist
  • Confidential business data passed through OpenAI’s API

The reason? A court order.

The New York Times and other media companies are suing OpenAI over alleged copyright infringement. As part of the lawsuit, they speculated that people might be using ChatGPT to bypass paywalls, and deleting their chats to cover their tracks. Based on that speculation alone, a judge issued a sweeping preservation order forcing OpenAI to retain every output log going forward.

Even OpenAI doesn’t know how long they’ll be required to keep this data.

This is bigger than just one court case

Let’s be clear: OpenAI is not a privacy company. They collect a vast amount of user data, and everything you type is tied to your real-world identity. (They don’t even allow VoIP numbers at signup, only real mobile numbers.) ChatGPT is a fantastic tool for productivity, coding, research, and brainstorming. But it is not a place to store your secrets.

That said, credit where it’s due: OpenAI is pushing back. They’ve challenged the court order, arguing it undermines user privacy, violates global norms, and forces them to retain sensitive data users explicitly asked to delete.

And they’re right to fight it.

If a company promises, “We won’t keep this”, and users act on that promise, they should be able to trust it. When that promise is quietly overridden by a legal mandate—and users only find out months later—it destroys the trust we rely on to function in a digital society.

Why this should scare you

This isn’t about sneaky opt-ins or buried fine print. It’s about people making deliberate choices to delete sensitive data—and those deletions being ignored.

That’s the real problem: the nullification of your right to delete.

Private thoughts. Business strategy. Health questions. Intimate disclosures. These are now being held under legal lock, despite clear user intent for them to be erased.

When a platform offers a “Delete” button or advertises “Temporary Chat”, the public expectation is clear: that information will not persist.

But in a system built for compliance, not consent, those expectations don’t matter.

I wish this weren’t the case

I want to live in a world where:

  • You can go to the doctor and trust that your medical records won’t be subpoenaed
  • You can talk to a lawyer without fearing your conversations could become public
  • Companies that want to protect your privacy aren’t forced to become surveillance warehouses

But we don’t live in that world.

We live in a world where:

  • Prosecutors can compel companies to hand over privileged legal communications (just ask Roger Ver’s lawyers)
  • Government entities can override privacy policies, without user consent or notification
  • “Delete” no longer means delete

This isn’t privacy. It’s panopticon compliance.

So what can you do?

You can’t change the court order.
But you can stop feeding the machine.

Here’s how to protect yourself:

1. Be careful what you share

When you’re logged in to centralized tools like ChatGPT, Claude, or Perplexity, your activity is stored and linked to a single identity across sessions. That makes your full history a treasure trove of data.

You can still use these tools for light, non-sensitive tasks, but be careful not to share:

  • Sensitive information
  • Legal or business strategies
  • Financial details
  • Anything that could harm you if leaked

These tools are great for brainstorming and productivity, but not for contracts, confessions, or client files.

2. Use privacy-respecting platforms (with caution)

If you want to use AI tools with stronger privacy protections, here are two promising options:
(there are many more – tell us about your favorites in the comments)

Brave’s Leo

  • Uses reverse proxies to strip IP addresses
  • Promises zero logging of queries
  • Supports local model integration so your data never leaves your device
  • Still requires trust in Brave’s infrastructure

Venice.ai

  • No account required
  • Strips IP addresses and doesn’t link sessions together
  • Uses a decentralized GPU marketplace to process your queries
  • Important caveat: Venice is just a frontend—the compute providers running your prompts can see what you input. Venice can’t enforce logging policies on backend providers.
  • Because it’s decentralized, at least no single provider can build a profile of you across sessions

In short: I trust Brave with more data because privacy is central to their mission. I trust Venice’s promise not to log data, but I’m hesitant to trust faceless GPU providers to adhere to the same no-logging policies. As a confidence booster, though, Venice’s decentralized model means even those processing your queries can’t see the full picture – a powerful safeguard in itself. Both options are good for different purposes.

3. Run AI locally for maximum privacy

This is the gold standard.

When you run an AI model locally, your data never leaves your machine. No cloud. No logs.

Tools like Ollama, paired with OpenWebUI, let you easily run powerful open-source models on your own device.

We published a complete guide for getting started—even if you’re not technical.
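As a minimal sketch of what that setup looks like (this assumes you have Ollama installed, and Docker if you want the optional web interface – exact commands, flags, and model names may change, so check each project’s own documentation):

```shell
# Download an open-source model and chat with it entirely on your own machine
ollama pull llama3.2   # fetch the model weights to local storage
ollama run llama3.2    # opens an interactive chat in the terminal

# Optional: add a browser interface with OpenWebUI (runs in Docker)
docker run -d -p 3000:8080 \
  --add-host=host.docker.internal:host-gateway \
  -v open-webui:/app/backend/data \
  --name open-webui \
  ghcr.io/open-webui/open-webui:main
# Then browse to http://localhost:3000
```

Nothing in this setup talks to a cloud service: the model weights live on your disk, and inference runs on your own CPU or GPU, so prompts and responses never leave your device.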

The real battle: Your right to privacy

This isn’t just about one lawsuit or one company.

It’s about whether privacy means anything in the digital age.

AI tools are rapidly becoming our therapists, doctors, legal advisors, and confidants. They know what we eat, what we’re worried about, what we dream of, and what we fear. That kind of relationship demands confidentiality.

And yet, here we are, watching that expectation collapse under the weight of compliance.

If courts can force companies to preserve deleted chats indefinitely, then deletion becomes a lie. Consent becomes meaningless. And companies become surveillance hubs for whoever yells loudest in court.

The Fourth Amendment was supposed to stop this. It says a warrant is required before private data can be seized. But courts are now sidestepping that by ordering companies to keep everything in advance—just in case.

We should be fighting to reclaim that right. Not normalizing its erosion.

Final Thoughts

We are in a moment of profound transition.

AI is rapidly becoming integrated into our daily lives—not just as a search tool, but as a confidant, advisor, and assistant. That makes the stakes for privacy higher than ever.

If we want a future where privacy survives, we can’t just rely on the courts to protect us. We have to be deliberate about how we engage with technology—and push for tools that respect us by design.

As Erik Voorhees put it: “The only way to respect user privacy is to not keep their data in the first place”.

The good news? That kind of privacy is still possible.
You have options. You can use AI on your terms.

Just remember:

Privacy isn’t about hiding. It’s about control.
About choosing what you share—and with whom.

And right now, the smartest choice might be to share a whole lot less.

 

Yours in privacy,
Naomi

Naomi Brockwell is a privacy advocate and professional speaker, MC, interviewer, producer, and podcaster, specialising in blockchain, cryptocurrency and economics. She runs the NBTV channel on YouTube.

TNT is truly independent!

We don’t have a billionaire owner, and our unique reader-funded model keeps us free from political or corporate influence. This means we can fearlessly report the facts and shine a light on the misdeeds of those in power.

Consider a donation to keep our independent journalism running…

Tech company bankrupt – “advanced AI” was 700 Indians

Published today 10:41
– By Editorial Staff
2 minute read

An AI company that marketed itself as a technological pioneer – and attracted investments from Microsoft, among others – has gone bankrupt. In the aftermath, it has been revealed that the technology was largely based on human labor, despite promises of advanced artificial intelligence.

Builder.ai, a British startup formerly known as Engineer.ai, claimed that their AI assistant Natasha could build apps as easily as ordering pizza. But as early as 2019, the Wall Street Journal revealed that much of the coding was actually done manually by a total of about 700 programmers in India.

Despite the allegations, Builder.ai secured over $450 million in funding from investors such as Microsoft, Qatar Investment Authority, IFC, and SoftBank’s DeepCore. At its peak, the company was valued at $1.5 billion.

In May 2025, founder and CEO Sachin Dev Duggal stepped down from his position, and when the new management took over, it emerged that the revelations made in 2019 were only the tip of the iceberg. For example, the company had reported revenues of $220 million in 2024, while the actual figures were $55 million. Furthermore, the company is suspected of inflating the figures through circular transactions and fake sales via “third-party resellers”, reports the Financial Times.

Following the new revelations, lenders froze the company’s account, forcing Builder.ai into bankruptcy. The company is now accused of so-called AI washing, which means that a company exaggerates or falsely claims that its products or services are powered by advanced artificial intelligence in order to attract investors and customers.

The company’s heavy promotion of “Natasha” as a revolutionary AI solution turned out to be a facade – behind the deceptive marketing ploy lay traditional, human-driven work and financial irregularities.

Swedish police urge parents to delete chat apps from children’s phones

organized crime

Published yesterday 17:33
– By Editorial Staff
2 minute read

Ahead of the summer holidays, the Swedish police are warning that criminal gangs are using social media to recruit young people into crime. On Facebook, the authorities have published a list of apps that parents should keep a close eye on – or delete immediately.

Critics argue, however, that the list is arbitrary and that it is strange for the police to urge parents to delete apps that are used by Swedish authorities.

During the summer holidays, adults are often less present in young people’s everyday lives, while screen time increases. According to the police, this creates increased vulnerability. Criminal networks then try to recruit young people to handle weapons, sell drugs, or participate in serious violent crimes such as shootings and explosions.

To prevent this, a national information campaign has been launched in collaboration with the County Administrative Board. The police, together with the County Administrative Board, have compiled a list of mobile apps that they believe pose a significant risk:

  • Delete immediately: Signal, Telegram, Wickr Me
  • Keep control over: Snapchat, WhatsApp, Discord, Messenger
  • Monitor closely: TikTok, Instagram

Digital parental presence

Maja Karlsson, municipal police officer in Jönköping, also emphasizes the importance of digital parental presence:

– We need to increase digital control and knowledge about which apps my child is using, who they are in contact with, and why they have downloaded different types of communication apps.

The police recommend that parents talk openly with their children about what they do online and use technical aids such as parental controls.

– There are tools available for parents who find it difficult. It’s not impossible, help is available, Karlsson continues.

Parents are also encouraged to establish fixed routines for their children and ensure they have access to meaningful summer activities.

“Complete madness”

However, the list has been met with harsh criticism from several quarters. Users point out that the Signal app is also used by the Swedish Armed Forces and question why the police list it as dangerous.

“If general apps like Signal are considered dangerous, the phone app and text messaging should be first on the list”, writes one user.

Critics argue that it is not the apps themselves but how they are used that is crucial, and find it remarkable that the police are arbitrarily and without deeper justification telling parents which messaging apps are okay to use and which are not.

“Complete madness to recommend uninstalling chat apps so broadly. You should know better”, comments another upset reader.

Organic Maps – the map app that doesn’t map you

Advertising partnership with Teuton Systems

Tired of Google Maps tracking you? Here's the free alternative that lets you navigate completely offline!

Published 12 June 2025
Organic Maps allows you to navigate completely offline, for example when you have poor coverage or are hiking in the wilderness.
4 minute read

In our series on open, surveillance-free apps, we take a closer look at Organic Maps – a map app that stands out as a privacy-friendly alternative to Google Maps. For many smartphone users, Google Maps has become the standard for navigation, but that convenience comes at a price: extensive collection of location data and dependence on a constant internet connection. Organic Maps is a free, open-source app (FOSS) that takes a completely different approach. Here, you can navigate without being tracked and without being tied to an internet connection.

Unlike Google Maps, which is neither open source nor particularly privacy-friendly, Organic Maps is built on open source and created by a community. The source code is openly available, which means that independent developers can review and improve the app. Most importantly, Organic Maps does not contain any tracker features – it does not collect your personal information or location data at all.

The app also has no ads or hidden data collection services running in the background. You don’t need to log in or give away any information – privacy is a core principle. Thanks to the open code, users can trust that there are no ulterior motives; it’s all about providing maps and navigation, nothing else.

Works completely offline – everywhere

One of the biggest advantages of Organic Maps is that the app works completely offline. All map data is based on the community project OpenStreetMap, which covers the entire world. You choose which maps (countries or regions) you want to download to your phone, and then you can navigate freely without the internet. Unlike Google and Apple Maps – whose offline features are very limited and lack full search or navigation functionality outside of the network – Organic Maps offers 100% of its features without a connection.

Searching for addresses and places, viewing points of interest, and turn-by-turn voice guidance work just as well offline as online. This means you can use the app in airplane mode, abroad without roaming, or far out in the wilderness.

Sample screenshots from Organic Maps: An offline map of some nature reserves, navigation in night mode, menu for downloading maps, and menu for map layers.

Since Organic Maps is based on OpenStreetMap, you also get very detailed maps. The community updates the maps continuously with everything from new bike paths to small forest trails. For example, a technology writer noted that he has yet to encounter a hiking trail that is missing from Organic Maps’ maps – often there is information that large map services miss. This makes the app particularly popular among outdoor enthusiasts, but everyone benefits: even regular roads, addresses, and points of interest are extensively covered thanks to OpenStreetMap. In short, the offline map gives you the peace of mind that the map is always available, no matter where you are.

Battery-efficient navigation

Offline navigation not only gives you freedom from the mobile network – it also saves battery power. Organic Maps is remarkably energy efficient and uses minimal power compared to many other navigation services. Without constant data traffic, background tracking, or heavy advertising, the app can focus on what it’s supposed to do and nothing more. One reviewer says he used the app during several days of hiking without having to charge his phone.

The developers themselves claim that you can go on a week-long trip on a single charge with Organic Maps as your guide. For those who travel frequently or are simply tired of GPS draining their battery, this is a game-changer. Its energy efficiency also makes Organic Maps well suited for older or simpler smartphones that may have weaker batteries – the app is lightweight and resource-efficient.

Available for Android and iPhone

Despite its different philosophy, Organic Maps is as easy to get and use as any popular app. The app is available to download for free for both Android and iOS – you can find it in the Google Play Store and Apple’s App Store. For those who use completely Google-free phones (such as GrapheneOS on Matrix mobile), it is also available through alternative open app stores such as F-Droid.

The interface is intuitive and similar to other map apps, so the barrier to switching is low. You can search for addresses or businesses, bookmark your favorite places, and get turn-by-turn voice directions. All these features are available offline after you download the maps for the area you need. In short, you get a full-featured map service on your phone – but without the surveillance.

Pre-installed on the Matrix phone

Organic Maps has become a staple in privacy-focused circles. Teuton Systems pre-installs the app on its Matrix phone – a security-focused Android smartphone based on GrapheneOS – as part of a Google-free ecosystem. This gives users a map service that respects their privacy right from the start. But even if you don’t own a Matrix mobile phone, you can still easily enjoy the benefits. Replacing Google Maps with Organic Maps on your current phone is a step towards a more privacy-secure everyday life, without losing any functionality. The app is completely free and open for everyone to try.

Organic Maps exemplifies how free and open software can give us, the average user, more control. You don’t have to worry about being tracked when you look up an address or navigate to a destination, and you can trust that the app only does what it says it does. The combination of open source code, offline capability, and top-notch privacy has earned the app excellent recommendations in tech media.

For those who value their privacy – or just want a reliable map app that works everywhere – Organic Maps is an inspiring alternative that shows it’s possible to navigate freely without giving up your privacy!

 

Features of Organic Maps

The ultimate app for travelers, tourists, hikers and cyclists:

  • Detailed offline maps with locations not found on other maps, thanks to OpenStreetMap
  • Bike paths, hiking trails and walking routes
  • Contour lines, elevation profiles, peaks and slopes
  • Turn-by-turn navigation for walking, cycling and car navigation with voice guidance, Android Auto
  • Quick offline map search
  • Export and import bookmarks in KML/KMZ format, import GPX
  • Dark mode to protect your eyes
  • Downloaded countries and regions take up little storage space
  • Free and open source

Macron seeks to ban children from social media

Internet censorship

Published 12 June 2025
– By Editorial Staff
While most people agree that children need to be protected online, many worry about arbitrary censorship and lack of legal certainty.
3 minute read

French President Emmanuel Macron wants to ban social media for children under the age of 15. At the same time, the European Commission has stated that such decisions are a national matter.

Macron advocates an EU-wide age verification system, but the Commission believes that responsibility lies with individual member states.

The president’s statement came late on Tuesday in response to a tragic knife attack in a Paris suburb where a teacher’s assistant was stabbed to death by a 14-year-old student.

Macron, who has previously advocated a ban on social media for younger users, now raised the tone further and called on the EU and its member states to act quickly.

– I’m giving us a few months to achieve European mobilization. Otherwise, I will negotiate with the Europeans so that we can do it ourselves in France, said the president.

However, the EU Commission’s response was clear: it is up to the French authorities to decide on the issue.

– Let’s be clear… wide social media ban is not what the European Commission is doing. It’s not where we are heading to. Why? Because this is the prerogative of our member states, Commission spokesman Thomas Regnier told reporters yesterday.

Big problem in Denmark

According to the EU’s General Data Protection Regulation (GDPR), member states have the right to set their own minimum age for when social media platforms can process personal data, as long as it is above 13 years.

The GDPR is an EU law that regulates the handling of personal data and allows for national adaptations: for example, data may be processed for younger users if their parents give their consent.

– Of course, member states can go for that option, Regnier continued.

But introducing such a ban is easier said than done. Technical challenges make it difficult to verify users’ ages. In Denmark, for example, almost half of all children under the age of ten already have social media accounts. By the age of 13, almost everyone is registered, according to the country’s Minister for Digitalization, Caroline Stage Olsen.

Digital Services Act

In addition to the GDPR, the DSA (Digital Services Act) also plays an important role. The DSA is an EU law that regulates digital services and platforms and gives the Commission responsibility and powers to supervise large social media platforms. The law also requires that minors be protected online.

– We want to make the digital space safe but also need to tackle risks coming from it. This is where the DSA comes into place, Regnier claimed.

The Commission is currently working on EU-wide guidelines on how platforms should comply with the DSA on issues relating to the protection of minors. These guidelines are expected to be finalised before the summer break. At the same time, an age verification app is being developed and will be tested in five countries, including France.

Risk of censorship

Despite ongoing initiatives, France and several other EU countries have expressed frustration with the Commission’s pace of work. Denmark, which takes over the presidency of the EU Council of Ministers from July to December, plans to push for better protection for minors online in the coming months.

Although the Digital Services Act is praised by its proponents, the law has also been criticized for threatening the rule of law and freedom of expression. Critics warn that the DSA, which requires the rapid removal of illegal content, risks leading to arbitrary censorship and overblocking, where platforms delete even legal material for fear of sanctions.

There are also concerns that the rules could be abused to silence opposition and political dissent and that protecting children is not really the issue at stake. Since legal review often takes place after the fact, the protection of fundamental rights is also being called into question.

Why does France want to ban social media for children?

French officials raised several reasons for a ban for children under 15:

  • Mental health: Concerns about increasing mental health problems among young people, linked to the impact of social media on self-esteem, sleep and concentration.
  • Bullying and harassment: Social media is often used as a platform for cyberbullying, which hits children particularly hard.
  • Exposure to harmful content: Children are at risk of being exposed to violent, sexual or extreme content without being able to handle it.
  • Data protection and privacy: Children's personal data is handled by commercial platforms without sufficient control or understanding.
  • School-related violence: The recent knife attack at a school was used as an example of how digital environments can contribute to radicalization or aggressive behaviour.
  • Parental responsibility and control: Macron says the current system makes it difficult for parents to know what their children are doing online.


Our independent journalism needs your support!
You can donate any amount of your choosing, one-time payment or even monthly.
We appreciate all of your donations to keep us alive and running.

Don’t miss another article!

Sign up for our newsletter today!

Get uncensored news – free from industry interests and political correctness – from the Polaris of Enlightenment, every week.