
OpenAI now keeps your ChatGPT logs… Even if you delete them

Why trusting companies isn’t enough—and what you can do instead.

Published 14 June 2025
– By Naomi Brockwell
5 minute read

This week, we learned something disturbing: OpenAI is now being forced to retain all ChatGPT logs, even the ones users deliberately delete.

That includes:

  • Manually deleted conversations
  • “Temporary Chat” sessions that were never supposed to persist
  • Confidential business data passed through OpenAI’s API

The reason? A court order.

The New York Times and other media companies are suing OpenAI over alleged copyright infringement. As part of the lawsuit, they speculated that people might be using ChatGPT to bypass paywalls, and deleting their chats to cover their tracks. Based on that speculation alone, a judge issued a sweeping preservation order forcing OpenAI to retain every output log going forward.

Even OpenAI doesn’t know how long they’ll be required to keep this data.

This is bigger than just one court case

Let’s be clear: OpenAI is not a privacy tool. They collect a vast amount of user data, and everything you type is tied to your real-world identity. (They don’t even allow VoIP numbers at signup, only real mobile numbers.) ChatGPT is a fantastic tool for productivity, coding, research, and brainstorming. But it is not a place to store your secrets.

That said, credit where it’s due: OpenAI is pushing back. They’ve challenged the court order, arguing it undermines user privacy, violates global norms, and forces them to retain sensitive data users explicitly asked to delete.

And they’re right to fight it.

If a company promises, “We won’t keep this”, and users act on that promise, they should be able to trust it. When that promise is quietly overridden by a legal mandate—and users only find out months later—it destroys the trust we rely on to function in a digital society.

Why this should scare you

This isn’t about sneaky opt-ins or buried fine print. It’s about people making deliberate choices to delete sensitive data—and those deletions being ignored.

That’s the real problem: the nullification of your right to delete.

Private thoughts. Business strategy. Health questions. Intimate disclosures. These are now being held under legal lock, despite clear user intent for them to be erased.

When a platform offers a “Delete” button or advertises “Temporary Chat”, the public expectation is clear: that information will not persist.

But in a system built for compliance, not consent, those expectations don’t matter.

I wish this weren’t the case

I want to live in a world where:

  • You can go to the doctor and trust that your medical records won’t be subpoenaed
  • You can talk to a lawyer without fearing your conversations could become public
  • Companies that want to protect your privacy aren’t forced to become surveillance warehouses

But we don’t live in that world.

We live in a world where:

  • Prosecutors can compel companies to hand over privileged legal communications (just ask Roger Ver’s lawyers)
  • Government entities can override privacy policies, without user consent or notification
  • “Delete” no longer means delete

This isn’t privacy. It’s panopticon compliance.

So what can you do?

You can’t change the court order.
But you can stop feeding the machine.

Here’s how to protect yourself:

1. Be careful what you share

When you’re logged in to centralized tools like ChatGPT, Claude, or Perplexity, your activity is stored and linked to a single identity across sessions. That makes your full history a treasure trove of data.

You can still use these tools for light, non-sensitive tasks, but be careful not to share:

  • Sensitive information
  • Legal or business strategies
  • Financial details
  • Anything that could harm you if leaked

These tools are great for brainstorming and productivity, but not for contracts, confessions, or client files.

2. Use privacy-respecting platforms (with caution)

If you want to use AI tools with stronger privacy protections, here are two promising options:
(there are many more, let us know in the comments about your favorites)

Brave’s Leo

  • Uses reverse proxies to strip IP addresses
  • Promises zero logging of queries
  • Supports local model integration so your data never leaves your device
  • Still requires trust in Brave’s infrastructure

Venice.ai

  • No account required
  • Strips IP addresses and doesn’t link sessions together
  • Uses a decentralized GPU marketplace to process your queries
  • Important caveat: Venice is just a frontend—the compute providers running your prompts can see what you input. Venice can’t enforce logging policies on backend providers.
  • Because it’s decentralized, at least no single provider can build a profile of you across sessions

In short: I trust Brave with more data, because privacy is central to their mission. And I trust Venice’s promise not to log data, but am hesitant about trusting faceless GPU providers to adhere to the same no-logging policies. But as a confidence booster, Venice’s decentralized model means even those processing your queries can’t see the full picture, which is a powerful safeguard in itself. So both options above are good for different purposes.

3. Run AI locally for maximum privacy

This is the gold standard.

When you run an AI model locally, your data never leaves your machine. No cloud. No logs.

Tools like Ollama, paired with OpenWebUI, let you easily run powerful open-source models on your own device.

We published a complete guide for getting started—even if you’re not technical.

The real battle: Your right to privacy

This isn’t just about one lawsuit or one company.

It’s about whether privacy means anything in the digital age.

AI tools are rapidly becoming our therapists, doctors, legal advisors, and confidants. They know what we eat, what we’re worried about, what we dream of, and what we fear. That kind of relationship demands confidentiality.

And yet, here we are, watching that expectation collapse under the weight of compliance.

If courts can force companies to preserve deleted chats indefinitely, then deletion becomes a lie. Consent becomes meaningless. And companies become surveillance hubs for whoever yells loudest in court.

The Fourth Amendment was supposed to stop this. It says a warrant is required before private data can be seized. But courts are now sidestepping that by ordering companies to keep everything in advance—just in case.

We should be fighting to reclaim that right. Not normalizing its erosion.

Final Thoughts

We are in a moment of profound transition.

AI is rapidly becoming integrated into our daily lives—not just as a search tool, but as a confidant, advisor, and assistant. That makes the stakes for privacy higher than ever.

If we want a future where privacy survives, we can’t just rely on the courts to protect us. We have to be deliberate about how we engage with technology—and push for tools that respect us by design.

As Erik Voorhees put it: “The only way to respect user privacy is to not keep their data in the first place”.

The good news? That kind of privacy is still possible.
You have options. You can use AI on your terms.

Just remember:

Privacy isn’t about hiding. It’s about control.
About choosing what you share—and with whom.

And right now, the smartest choice might be to share a whole lot less.

 

Yours in privacy,
Naomi

Naomi Brockwell is a privacy advocate and professional speaker, MC, interviewer, producer, and podcaster specialising in blockchain, cryptocurrency, and economics. She runs the NBTV channel on YouTube.

TNT is truly independent!

We don’t have a billionaire owner, and our unique reader-funded model keeps us free from political or corporate influence. This means we can fearlessly report the facts and shine a light on the misdeeds of those in power.

Consider a donation to keep our independent journalism running…

Buying someone’s real-time location is shockingly cheap

You need to stop handing out your cell number. Seriously.

Published today 7:54
– By Naomi Brockwell
11 minute read

Most people have no idea how exposed they are.

Your location is one of the most sensitive pieces of personal information, and yet it’s astonishingly easy to access. For just a few dollars, someone can track your real-time location without ever needing to hack your phone.

This isn’t science fiction or a rare edge case. It’s a thriving industry.

Telecom providers themselves have a long and disturbing history of selling customer location data to data brokers, who then resell it with little oversight.

In 2018, The New York Times exposed how major U.S. carriers, including AT&T, Verizon, T-Mobile, and Sprint, were selling access to phone location data. This data was ultimately accessed by bounty hunters and law enforcement, without user consent or a warrant.

A 2019 investigation by Vice showed that you could buy the real-time location of nearly any phone in the U.S. for about $300.

Other vendors advertise this service for as little as $5 on underground forums and encrypted messaging channels. No need to compromise someone’s device, just give them a phone number.

The big takeaway from this article is that if someone has your number, they can get your location. We’re going to go over how to shut this tracking method down.

Whether you’re an activist, journalist, or just someone who values your right to privacy, this newsletter series is designed to give you the tools to disappear from unwanted tracking, one layer at a time.

How cell numbers leak location

Your cell number is a real-time tracking beacon. Every time your phone is powered on, it talks to nearby cell towers. This happens even if you’re not making a call.

Your phone’s location is continuously updated in a database called the Home Location Register (HLR), which lets your carrier know which tower to route calls and texts through. If someone has access to your number, they can locate you, sometimes within meters, in real time. Here are some ways they can do it:

1. Access to telecom infrastructure

Selling data / corrupting employees:

Telecom providers are notorious for selling customers’ location data directly from their HLR. Alternatively, unauthorized individuals or entities can illegally access this data by bribing or corrupting telecom employees who have direct access to the HLR.

The data retrieved from the HLR database reveals only which specific cell tower your phone is currently registered to, and typically identifies your approximate location within tens or hundreds of meters, depending on tower density in the area.

To pinpoint your exact location with greater precision, down to just a few meters, requires additional specialized methods, such as carrier-based triangulation. Triangulation involves actively measuring your phone’s signal strength or timing from multiple cell towers simultaneously. Such detailed, real-time triangulation is typically restricted to telecom companies and authorized law enforcement agencies. However, these advanced methods can also be misused if telecom personnel or authorized entities are compromised through bribery or corruption.
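Carrier-grade triangulation works on live signal-timing and strength measurements, but the underlying geometry is plain trilateration. Here’s a toy sketch of that math, assuming the distances to three towers are already known. The tower grid and coordinates are invented for illustration; this shows the geometry, not anyone’s actual tooling:

```python
import math

def trilaterate(towers, distances):
    """Estimate a phone's (x, y) position from three towers at known
    coordinates and the measured distance to each one.

    Subtracting the circle equation for tower 1 from those for towers
    2 and 3 linearizes the problem into a 2x2 system, solved here with
    Cramer's rule."""
    (x1, y1), (x2, y2), (x3, y3) = towers
    d1, d2, d3 = distances
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a1 * b2 - a2 * b1
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

# Hypothetical towers on a 1 km grid; the phone is actually at (300, 400) m.
towers = [(0, 0), (1000, 0), (0, 1000)]
phone = (300, 400)
distances = [math.dist(t, phone) for t in towers]
print(trilaterate(towers, distances))  # recovers approximately (300.0, 400.0)
```

With three clean distance measurements the position is pinned down exactly; in practice the measurements are noisy, which is why real systems fuse many readings and still land within a few meters.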

Exploiting the SS7 protocol (telecom network vulnerabilities):

Attackers can also exploit vulnerabilities such as those in SS7, a global telecom signaling protocol, to illicitly request your current cell tower location from the HLR database. SS7 itself doesn’t store any location data — it provides the means to query your carrier’s HLR and retrieve your current tower association.

2. IMSI catchers (“Stingrays”): Your phone directly reveals its location

IMSI catchers (often called “Stingrays”) are specialized surveillance devices acting as fake cell towers. Your phone constantly searches for the strongest available cell signal, automatically connecting to these fake towers if their signals appear stronger than legitimate ones.

In this method, instead of querying telecom databases, your phone directly reveals its own location to whoever is operating the fake cell tower, as soon as the phone connects. Operators of IMSI catchers measure signal strength between your phone and their device, enabling precise location tracking, often accurate within a few meters.

While IMSI catchers were initially developed and primarily used by law enforcement and intelligence agencies, the legality of their use (even by authorities) is subject to ongoing debate. Unauthorized versions of IMSI catchers have also become increasingly available on black and gray markets.

The solution? Move to VoIP

Cell numbers use your phone’s baseband processor to communicate directly with cell towers over the cellular network, continuously updating your physical location in telecom databases.

VoIP (Voice over Internet Protocol) numbers, on the other hand, transmit calls and texts over the internet using data connections. They don’t have HLR records, so they’re immune to tower-based location tracking.

Instead, the call or message is routed through internet infrastructure and only connects to the cellular network at carrier-level switching stations, removing the direct tower-based tracking of your physical location.

So the takeaway: stop using cell numbers and start using VoIP numbers instead, so that anyone who knows your number can’t use it to track your location.

But there’s a catch: VoIP is heavily regulated. In most countries, quality VoIP options are scarce, and short code SMS support is unreliable. In the US, though, there are good tools.

Action items:

1. Get a VoIP provider

Two good apps for generating U.S. VoIP numbers are:

  • MySudo: Great for compartmentalizing identity. Up to 9 identities/numbers per account.
  • Cloaked.com: Great for burner/throwaway numbers.

We are not sponsored by or affiliated with any of the companies mentioned here, they’re just tools I use and like. If you have services that you like and recommend, please let others know in the comments!

Setting up MySudo

Step 1: Install the app

  • You will need a phone with the Google Play Store or the Apple App Store.
  • Search for MySudo, download and install it, or visit the store directly via their webpage.

Step 2: Purchase a plan

  • $15/month gets you up to 9 Sudo profiles, each with its own number. Or you can start with just 1 number for $2/month. You will purchase this plan inside the app store on your phone.

Step 3: Set up your first Sudo profile

When prompted, create your first Sudo profile. Think of this as a separate, compartmentalized identity within MySudo, distinct from your main user account.

Each Sudo profile can include:

  • A dedicated phone number
  • Optional extras like an email alias, username handle, virtual credit card, etc.

For now, we’re focusing only on phone numbers:

  • Choose a purpose for this profile (such as Shopping, Medical, Work). This purpose will appear as a heading in your list of Sudos.
  • Create a name for your Sudo profile (I usually match this to the chosen purpose).

Step 4: Add a phone number to your Sudo

  • Tap the Sudo icon in the top-left corner.
  • Select the Sudo profile you just created.
  • Tap “Add a Phone Number.”
  • Select your preferred country, then enter a city name or area code.
  • Pick a number from the available options, then tap “Choose Number.”

You’re now set up and ready to use your VoIP number!

Step 5: Compartmentalize

You don’t need to assign all 9 numbers right away. But here are helpful categories you might consider:

  • Friends and family
  • Work
  • Government
  • Medical
  • Banking
  • Purchases
  • Anonymous purchases
  • High-risk anonymous use
  • Catch-all / disposable

Incoming calls go through the MySudo app, not your default dialer. Same with SMS. The person on the other end doesn’t know it’s VoIP.

Short codes don’t always work

Short codes (such as verification codes sent by banks or apps) use a special messaging protocol that’s different from regular SMS texts. Many VoIP providers don’t consistently support short codes, because this capability depends entirely on the underlying upstream provider (the entity that originally provisioned these numbers) not on the VoIP reseller you purchased from.

If you encounter problems receiving short codes, here are ways around the issue:

  • Use the “Call Me” option:
    Many services offer an alternative verification method: a phone call delivering the verification code verbally. VoIP numbers handle these incoming verification calls without any issue.
  • Try another VoIP provider (temporary):
    If a service blocks your primary VoIP number and insists on a real cellular number, you can use a non‑VoIP SIM verification service like SMSPool.net. They provide actual cell‑based phone numbers via the internet, but note: these are intended for temporary or burner use only. Don’t rely on rented numbers from these services for important or long-term accounts; always use stable, long-term numbers for critical purposes.
  • Register using a real cell number and port it to VoIP:
    For critical accounts, another option is to use a prepaid SIM card temporarily to register your account, then immediately port that number to a VoIP provider (such as MySudo or Google Voice). Many services only check whether a number is cellular or VoIP during initial account registration, and don’t recheck later.
  • Maintain a separate SIM just for critical 2FA:
    If you find that after porting, you still can’t reliably receive certain verification codes (particularly short codes), you might need to maintain a separate, dedicated SIM and cellular number exclusively for receiving critical two-factor authentication (2FA) codes. Do not share this dedicated SIM number with everyone, and do not use it for regular communications.

Important caveat for high-risk users:

Any SIM cards placed into the same phone are linked together by the telecom carrier, which is important information for high-risk threat models. When you insert a SIM card into your device, the SIM itself will independently send special messages called “proactive SIM messages” to your carrier. These proactive messages:

  • Completely bypass your phone’s operating system (OS), making them invisible and undetectable from user-level software.
  • Contain device-specific identifiers such as the IMEI or IMEISV of your phone and also usually include the IMEI of previous devices in which the SIM was inserted.

If your threat model is particularly high-risk and requires total compartmentalization between identities or numbers, always use separate physical devices for each compartmentalized identity. Most people don’t need to take such extreme precautions, as this generally falls outside their threat model.

Cloaked.com for burner numbers

  • Offers unlimited, disposable phone numbers.
  • Great for one-off verifications, restaurants, or merchants.
  • Doesn’t require installing an app; you can use it in the browser and never link any forwarding number.
  • Be aware that if any of the VoIP numbers you generated inside Cloaked hasn’t received any calls or messages for 60 days, it enters a watch period. After an additional 60 days without receiving calls or messages (120 days total of inactivity), you lose the number, and it returns to the available pool for someone else to use. Only use Cloaked for numbers you expect to actively receive calls or messages on, or for temporary use where losing the number isn’t an issue.
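To make that expiry schedule concrete, here’s a tiny sketch of the 60/120-day inactivity rule described above (the dates are hypothetical):

```python
from datetime import date

def cloaked_number_status(last_activity, today):
    """Classify a Cloaked number per the 60/120-day inactivity rule:
    active for 60 days after the last call or message, then a 60-day
    watch period, then released back to the available pool."""
    idle_days = (today - last_activity).days
    if idle_days < 60:
        return "active"
    if idle_days < 120:
        return "watch period"
    return "released"

last_activity = date(2025, 1, 1)  # hypothetical last inbound call or text
print(cloaked_number_status(last_activity, date(2025, 2, 15)))  # 45 days idle: active
print(cloaked_number_status(last_activity, date(2025, 4, 1)))   # 90 days idle: watch period
print(cloaked_number_status(last_activity, date(2025, 6, 1)))   # 151 days idle: released
```

In other words, any inbound call or text resets the clock; a number only returns to the pool after 120 straight days of silence.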

What to do with your current cell number

Your cell number is already everywhere: breached databases, government forms, medical records, and countless other places. You can’t “un-breach” it, and you don’t want to lose that number because it’s probably an important number that people know they can contact you on. But you can stop it from being used to track you.

Solution: Port your existing cell number to a VoIP Provider

Best choice: Google Voice (recommended due to strong security protections)

  • You can pay a one-time $20 fee, which turns the number into a receiving-only number: you’ll be able to receive calls and texts on it indefinitely with no ongoing fees.
  • Or you can pay an ongoing monthly fee, which lets you continue making outgoing calls and sending outgoing messages from the number.

The one-time fee option will be sufficient for most people, because the aim is to gradually make this existing number obsolete and move people over to your new VoIP numbers.

Google Voice is considered a strong option because the threat of SIM swapping (where an attacker fraudulently takes control of your phone number) is very real and dangerous. Unlike basically every other telecom provider, Google lets you secure your account with a hardware security key, making it significantly harder for attackers to port your number away from your control.

Google obviously is not a privacy-respecting company, but remember, your existing cell number isn’t at all private anyway. The idea is to eventually stop using this number completely, while still retaining control of it.

How to port your existing cell number to Google Voice

  1. Check porting eligibility
    Visit the Google Voice porting tool and enter your number to verify it’s eligible.
  2. Start the port-in process
    • Navigate to Settings → Phones tab → Change / Port.
    • Select “I want to use my mobile number” and follow the on-screen prompts
  3. Pay the one-time fee
    A $20 fee is required to port your number into Google Voice.
  4. Complete the porting process
    • Enter your carrier account details and submit the request. Porting generally completes within 24–48 hours, though it can take longer in some cases.
  5. Post-port setup
    • Porting your number to Google Voice cancels your old cellular service. You’ll need a new SIM or plan for regular mobile connectivity, but ideally you’ll use this new SIM only for data, and your VoIP numbers – not the associated cell number – for communication.
    • Configure call forwarding, voicemail transcription, and text forwarding to email from the Google Voice Settings page.

Now, even if someone tries to look you up via your old number, they can’t get your real-time location. It’s no longer tied to a SIM that is logging your location in HLRs.

Summary: Take it one step at a time

Switching to VoIP numbers is a big change, so take it step by step:

  1. Download your VoIP apps of choice (like MySudo) and set up your new numbers.
  2. Gradually migrate your contacts to your new VoIP numbers.
  3. Use burner numbers (via Cloaked or similar services) for reservations, merchants, or anyone who doesn’t genuinely need your permanent number.

Keep your existing SIM active for now, until you’re comfortable and confident using the new VoIP system.

When ready, finalize your migration:

  1. Port your original cell number to Google Voice.
  2. Get a new SIM card with a fresh number, but don’t use this new number for calls, texts, or identification.
  3. Use the new SIM solely for data connectivity.

This completes your migration, significantly enhancing your privacy and reducing your exposure to location tracking.

GrapheneOS users

You can’t currently purchase your MySudo subscription directly on a GrapheneOS device. Instead, you’ll first need to buy your MySudo plan through the Google Play Store or Apple App Store using another device.

Once you’ve purchased your plan, you can migrate your account to your GrapheneOS phone:

  1. On your GrapheneOS device, download and install MySudo from your preferred app store (I personally like the Aurora store as a front-end for the Google Play Store).
  2. Open MySudo on your GrapheneOS device and navigate to:
    Settings → Backup & Import/Export → Import from Another Device
  3. Follow the on-screen prompts to securely migrate your entire account over to your GrapheneOS phone.

You can retain your original device as a secure backup for messages and account data.

To ensure reliable, real-time notifications for calls and messages, make sure sandboxed Google Play is enabled on the GrapheneOS profile where you’re using MySudo.

What you’ve achieved

You now have:

  • Up to 9 persistent, compartmentalized VoIP numbers via MySudo.
  • Disposable, on-demand burner numbers via Cloaked.
  • Your original cell number safely ported to Google Voice and secured with a hardware security key.
  • A clear plan for transitioning away from your original cell number.

You’ve replaced a vulnerable, easily trackable cell identifier. Your real-time location is no longer constantly broadcast through cell towers via a number that is identified as belonging to you, your digital identities are better compartmentalized, and you’re significantly harder to track or exploit.

This marks the beginning of a safer digital future. What’s next? More layers, better privacy tools, and greater freedom. Remember, privacy isn’t a destination, it’s a lifestyle. You’re now firmly on that path.

 

Yours in Privacy,
Naomi


Spotify fills playlists with fake music – while CEO invests millions in military AI

The future of AI

Published 1 July 2025
– By Editorial Staff
Spotify CEO Daniel Ek accused of diverting artist royalties to military AI development.
3 minute read

Swedish streaming giant Spotify promotes anonymous pseudo-musicians and computer-generated music to avoid paying royalties to real artists, according to a new book by music journalist Liz Pelly.

Meanwhile, criticism grows against Spotify CEO Daniel Ek, who recently invested over €600 million in a company developing AI technology for future warfare.

In the book Mood Machine: The Rise of Spotify and the Costs of the Perfect Playlist, Liz Pelly reveals that Spotify has long been running a secret internal program called Perfect Fit Content (PFC). The program creates cheap, generic background music – often called “muzak” – through a network of production companies with ties to Spotify. This music is then placed in Spotify’s popular playlists, often without crediting any real artists.

The program was tested as early as 2010 and is described by Pelly as Spotify’s most profitable strategy since 2017.

“But it also raises worrying questions for all of us who listen to music. It puts forth an image of a future in which – as streaming services push music further into the background, and normalize anonymous, low-cost playlist filler – the relationship between listener and artist might be severed completely”, Pelly writes.

By 2023, the PFC program controlled hundreds of playlists. More than 150 of them – with names like Deep Focus, Cocktail Jazz, and Morning Stretch – consisted entirely of music produced within PFC.

“Only soulless AI music will remain”

A jazz musician told Pelly that Spotify asked him to create an ambient track for a few hundred dollars as a one-time payment. However, he couldn’t retain the rights to the music. When the track later received millions of plays, he realized he had likely been deceived.

Social media criticism has been harsh. One user writes: “In a few years, only soulless AI music will remain. It’s an easy way to avoid paying royalties to anyone.”

“I deleted Spotify and cancelled my subscription”, comments another.

Spotify has previously faced criticism for similar practices. The Guardian reported in February that the company’s Discovery Mode system allows artists to gain more visibility – but only if they agree to receive 30 percent less payment.

Spotify’s CEO invests in AI for warfare

Meanwhile, CEO Daniel Ek has faced severe criticism for investing over €600 million through his investment firm Prima Materia in the German AI company Helsing. The company develops software for drones, fighter aircraft, submarines, and other military systems.

– The world is being tested in more ways than ever before. That has sped up the timeline. There’s an enormous realisation that it’s really now AI, mass and autonomy that is driving the new battlefield, Ek commented in an interview with Financial Times.

With this investment, Ek has also become chairman of Helsing. The company is working on a project called Centaur, where artificial intelligence will be used to control fighter aircraft.

The criticism was swift. Australian producer Bluescreen explained in an interview with music site Resident Advisor why he chose to leave Spotify – a decision several other music creators have also made.

– War is hell. There’s nothing ethical about it, no matter how you spin it. I also left because it became apparent very quickly that Spotify’s CEO, as all billionaires, only got rich off the exploitation of others.

Competitor chooses different path

Spotify has previously been questioned for its proximity to political power. The company donated $150,000 to Donald Trump’s inauguration fund in 2017 and hosted an exclusive brunch the day before the ceremony.

While Spotify is heavily investing in AI-generated music and voice-controlled DJs, competitor SoundCloud has chosen a different path.

– We do not develop AI tools or allow third parties to scrape or use SoundCloud content from our platform for AI training purposes, explains communications director Marni Greenberg.

– In fact, we implemented technical safeguards, including a ‘no AI’ tag on our site to explicitly prohibit unauthorised use.

FUTO – the obvious choice for privacy-friendly voice and text input on mobile devices

Advertising partnership with Teuton Systems

Ditch Google's input apps and keep what you type and say on your phone.

Published 1 July 2025
3 minute read

In our series about open, surveillance-free apps, we take a closer look at FUTO Voice Input and FUTO Keyboard – two apps that together challenge the established alternatives for voice input and keyboards on mobile devices. Most smartphone users are accustomed to dictating text using Google or using standard keyboards like Gboard or SwiftKey.

However, few consider that these popular tools often collect what you say and write privately and send it to tech giants. The FUTO team emphasize that their solution eliminates this problem entirely – everything runs locally, offline, with no data ever leaving the phone.

Here’s what the FUTO apps offer:

  • Privacy focus: FUTO apps run completely offline – no data is sent to the cloud.
  • Full functionality: Swipe typing, text suggestions, autocorrection, and voice-to-text with punctuation – everything works without an internet connection.
  • High precision: Offline dictation using an advanced AI model (OpenAI Whisper) provides fast, accurate transcription.
  • Multilingual support: Support for many languages and continuous improvements via the open-source community.

FUTO Keyboard

On the keyboard front, FUTO Keyboard impresses by delivering modern convenience without compromising privacy. Unlike conventional keyboards that constantly transmit user data, FUTO requires neither network access nor cloud services – yet it offers features on par with the best.

You can swipe words with your finger across the screen, get relevant text suggestions and automatic spell correction, and customize the theme to your liking – all while the app refuses to send a single keystroke to any external server. FUTO Keyboard also integrates FUTO Voice Input through a built-in microphone button, allowing speech-to-text to be activated from the same interface.

FUTO Voice Input

For voice input, we have FUTO Voice Input, which lets you dictate text directly in apps like messages or notes – completely without an internet connection. All processing happens locally using a compact language model, meaning no audio needs to be sent away to become text. According to users who have compared it with Google’s cloud-based solution, FUTO keeps pace and can even surpass it in both speed and accuracy.

An enthusiastic tester reported that FUTO provided a completely new experience – none of the delays or strange autocorrections he had previously endured with Gboard. This means you can speak freely and see the text appear almost immediately, without worrying about unauthorized “listening” on the other end.

Ongoing development and alternatives

Though FUTO Keyboard is young, it’s already surprisingly capable. The interface feels polished and user-friendly, and its feature set makes it nearly comparable to established alternatives. Text input currently works excellently in English, while support for smaller languages such as Swedish is still being refined. The pace of development is high, however, and the team behind FUTO has announced improvements to autocorrection and expanded language support in upcoming updates. Global collaboration is also encouraged: since the source code is open, engaged developers and users can contribute improvements and new language data to the project.

Among free alternatives is Sayboard, an open-source keyboard that uses Vosk for speech recognition. For pure keyboards, there are AnySoftKeyboard and FlorisBoard, both excellent from a privacy perspective but lacking some of the advanced features FUTO offers in one package (especially built-in voice input).

An essential part of the Matrix Phone ecosystem

FUTO Voice Input and Keyboard demonstrate that you can combine the best of both worlds: the convenience of smart text and voice functions, and the security of keeping your data private. For users of Teuton Systems’ Matrix Phone (GrapheneOS phone), these apps come pre-installed as part of the privacy-secure ecosystem. But they’re available to everyone – via Google Play or F-Droid – and constitute a highly recommended switch for anyone who values their privacy in everyday life.

As a tech writer recently put it: you no longer need to choose between functionality and security – with FUTO you get both without compromises.

Swedish regional healthcare app run by chatbot makes serious errors

Published 30 June 2025
– By Editorial Staff
2 minute read

An AI-based healthcare app used by the Gävleborg Regional Healthcare Authority in Sweden is now under scrutiny following serious assessment errors. In one notable case, an elderly man’s condition was classified as mild – he died the following day.

Healthcare staff are raising alarms about deficiencies deemed to threaten patient safety, and the app is internally described as a “disaster”.

Min vård Gävleborg (My Healthcare Gävleborg) is used when residents seek digital healthcare or call 1177 (Sweden’s national healthcare advice line). A chatbot asks questions to make an initial medical assessment and then refers the patient to an appropriate level of care. However, according to several doctors in the region, the system is not functioning safely enough.

In one documented case, the app classified an elderly man’s symptoms as mild. He died the following day. An incident report shows that the prioritization was incorrect, although it couldn’t be established that this directly caused the death.

In another case, an inmate at the Gävle Correctional Facility sought care for breathing difficulties – but was referred to a chat with a doctor in Ljusdal, instead of being sent to the emergency room.

– She should obviously have been sent to the emergency room, says Elisabeth Månsson Rydén, a doctor in Ljusdal and board member of the Swedish Association of General Medicine in Gävleborg, speaking to the tax-funded SVT.

“Completely insane”

Criticism from healthcare staff is extensive. Several doctors warn that the app underestimates serious symptoms, which could have life-threatening consequences. Meanwhile, there are examples of the opposite – patients given too high a priority – which risks unnecessarily burdening healthcare services and delaying care for severely ill patients.

– Doctors have expressed in our meetings that Min vård Gävleborg is a disaster. This is completely insane, says Månsson Rydén.

Despite the death incident, Region Gävleborg has chosen not to report the event to either the Health and Social Care Inspectorate (IVO) or the Swedish Medical Products Agency.

– We looked at the case and decided it didn’t need to be reported, says Chief Medical Officer Agneta Larsson.

Other regions have reacted

The app was developed by Platform24, a Swedish company whose digital systems are used in several regions. In Västra Götaland Region, the app was paused after a report showed that three out of ten patients were assessed incorrectly. In Region Östergötland, similar deficiencies have led to a report to the Swedish Medical Products Agency. An investigation is ongoing.

Despite this, Agneta Larsson defends the version used in Gävleborg:

– We have reviewed our own system, and we cannot see these errors.

Platform24 has declined to be interviewed, but in a written response to Swedish Television, the company’s Medical Director Stina Perdahl defends the app’s basic principles.

“For patient safety reasons, the assessment is deliberately designed to be somewhat more cautious initially”, she writes.
