Your doctor’s visit isn’t private

Published 26 July 2025
– By Naomi Brockwell
6 minute read

A member of our NBTV members’ chat recently shared something with us after a visit to her doctor.

She’d just gotten back from an appointment and felt really shaken up. Not because of a diagnosis, but because she realized just how little control she had over her personal information.

It started right at check-in, before she’d even seen the doctor.
Weight. Height. Blood pressure. Lifestyle habits. Do you drink alcohol? Are you depressed? Are you sexually active?
All the usual intake questions.

It all felt deeply personal, but this kind of data collection is normal now.
Yet she couldn’t help but wonder: shouldn’t they ask why she’s there first? How can they know what information is actually relevant without knowing the reason for the visit? Why collect everything upfront, without context?

She answered every question anyway. Because pushing back makes people uncomfortable.

Finally, she was through with the medical assistant’s questions and taken to the actual doctor. That’s when she confided something personal, something she felt was important for the doctor to know, but made a simple request:

“Please don’t record that in my file”.

The doctor responded:

“Well, this is something I need to know”.

She replied:

“Yes, that’s why I told you. But I don’t want it written down. That file gets shared with who knows how many people”.

The doctor paused, then said:

“I’m going to write it in anyway”.

And just like that, her sensitive information, something she explicitly asked to keep off the record, became part of a permanent digital file.

That quiet moment said everything. Not just about one doctor, but about a system that no longer treats medical information as something you control. Because once something is entered into your electronic health record, it’s out of your hands.

You can’t delete it.

You can’t restrict who sees it.


Financially incentivized to collect your data

The digital system that the medical assistant and doctor enter your information into is called an Electronic Health Record (EHR). EHRs aren’t just a digital version of your paper file. They’re part of a government-mandated system: through legislation and financial incentives from the Department of Health and Human Services (HHS), clinics and hospitals were required to digitize patient data.

On top of that, medical providers are required to prove what’s called “Meaningful Use” of these EHR systems. Unless they can demonstrate meaningful use, they don’t receive their Medicare and Medicaid incentive payments. So when you’re asked about your blood pressure, your weight, and your alcohol use, it’s part of a quota. There’s a financial incentive to collect your data, even if it’s not directly related to your care. These incentives reward over-collection and over-documentation. There are none for respecting your boundaries.

You’re not just talking to your doctor. You’re talking to the system

Most people have no idea how medical records actually work in the US. They assume that what they tell a doctor stays between the two of them.

That’s not how it works.

In the United States, HIPAA states that your personally identifiable medical data can be shared, without needing to get your permission first, for a wide range of “healthcare operations” purposes.

Sounds innocuous enough. But the definition of “healthcare operations” is almost 400 words long. It’s essentially a list of about 65 non-clinical business activities that have nothing to do with your medical treatment whatsoever.

The recipients include not just hospitals, pharmacy systems, and insurance companies, but billing contractors, analytics firms, and all kinds of third-party vendors. According to a 2010 HHS regulation, there are more than 2.2 million entities (covered entities and business associates) with which your personally identifiable, sensitive medical information can be shared, if those who hold it choose to share it. This number doesn’t even include government entities with access to your data, because they aren’t considered covered entities or business associates.

Your data doesn’t stay in the clinic. It gets passed upstream, without your knowledge and without needing your consent. No one needs to notify you when your data is shared. And you’re not allowed to opt out. You can’t even get a list of everyone it’s been shared with. It’s just… out there.

The doctor may think they’re just “adding it to your chart”. But what they’re actually doing is feeding a giant, invisible machine that exists far beyond that exam room.

We have an entire video diving into the details if you’re interested: You Have No Medical Privacy

Data breaches

Legal sharing isn’t the only risk of this accumulated data. What about data breaches? This part is almost worse.

Healthcare systems are one of the top targets for ransomware attacks. That’s because the data they hold is extremely valuable. Full names, birth dates, Social Security numbers, medical histories, and billing information, all in one place.

It’s hard to find a major health system that hasn’t been breached. In fact, a 2023 report found that over 90% of healthcare organizations surveyed had experienced a data breach in the past three years.

That means if you’ve been to the doctor in the last few years, there’s a very real chance that some part of your medical file is already floating around, whether on the dark web, in a leaked ransomware dump, or being sold to data brokers.

The consequences aren’t just theoretical. In one high-profile case of such a healthcare breach, people took their own lives after private details from their medical files were leaked online.

So when your doctor says, “This is just for your chart,” understand what that really means. You’re not just trusting your doctor. You’re trusting a system that has a track record of failing to protect you.

What happens when trust breaks

Once you start becoming aware of how your data is being collected and shared, you see it everywhere. And in high-stakes moments, like a medical visit, pushing back is hard. You’re at your most vulnerable. And the power imbalance becomes really obvious.

So what do patients do when they feel that their trust has been violated? They start holding back. They say less. They censor themselves.

This is exactly the opposite of what should happen in a healthcare setting. Your relationship with your doctor is supposed to be built on trust. But when you tell your doctor something in confidence, and they say, “I’m going to log it anyway,” that trust is gone.

The problem here isn’t just one doctor. From their perspective, they’re doing what’s expected of them. The entire system is designed to prioritize documentation and compliance over patient privacy.

Privacy is about consent, not secrecy

But privacy matters. And not because you have something to hide. You might want your doctor to have full access to everything. That’s fine. But the point is, you should be the one making that call.

Right now, that choice is being stripped away by systems and policies that normalize forced disclosure.

We’re being told our preferences don’t matter. That our data isn’t worth protecting. And we’re being conditioned to stay quiet about it.

That has to change.

So what can you do?

First and foremost, if you’re in a high-stakes medical situation, focus on getting the care you need. Don’t let privacy concerns keep you from getting help.

But when you do have space to step back and ask questions, do it. That’s where change begins.

  • Ask what data is necessary and why.
  • Say no when something feels intrusive.
  • Let your provider know that you care about how your data is handled.
  • Support policy efforts that restore informed consent in healthcare.
  • Share your story, because this isn’t just happening to one person.

The more people push back, the harder it becomes for the system to ignore us.

You should be able to go to the doctor and share what’s relevant, without wondering who’s going to have access to that information later.

The exam room should feel safe. Right now, it doesn’t.

Healthcare is in urgent need of a privacy overhaul. Let’s make that happen.

 

Yours In Privacy,
Naomi

 

Naomi Brockwell is a privacy advocate and professional speaker, MC, interviewer, producer, and podcaster, specialising in blockchain, cryptocurrency, and economics. She runs the NBTV channel on Rumble.


Your battery life reveals more than you think

Published 4 October 2025
– By Naomi Brockwell
5 minute read

I’ve been running a little experiment for the past 10 days.
I carried two phones everywhere: my Google Fi device and my GrapheneOS device.

Every night, here’s how the batteries compared:
• Google Fi: about 5% left
• GrapheneOS: about 50–75% left

What’s going on here? Am I really using the Google Fi phone 2–4x more? (By bedtime, it has burned through roughly 95% of its charge, versus 25–50% for the Graphene phone.)

Actually it’s the opposite.
My GrapheneOS phone is my daily driver. That’s where I use Signal, Brave, podcasts, audiobooks, email, camera, notes, calendar, my language app, and other things.

Meanwhile, on my Google Fi phone, I’ve installed exactly two apps: Signal and Google Maps, and I also use it as an internet hotspot. I deleted as many preinstalled apps as I could without breaking the phone, but there are countless ones I can’t remove.
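For what it’s worth, the standard way to strip preinstalled apps that the Settings app won’t let you uninstall is adb from a computer. Here’s a minimal sketch in Python, assuming adb is installed, USB debugging is enabled, and the phone is connected; the package name at the end is a placeholder for illustration, not a specific app I removed:

import subprocess

def adb(*args: str) -> str:
    # Run an adb command against the connected device and return its output.
    result = subprocess.run(["adb", *args], capture_output=True, text=True, check=True)
    return result.stdout

# List every installed package so you can decide what counts as bloat.
packages = adb("shell", "pm", "list", "packages").splitlines()
print(f"{len(packages)} packages installed")

# "pm uninstall --user 0" removes an app for the current user without
# touching the system image, so a factory reset brings it back.
adb("shell", "pm", "uninstall", "--user", "0", "com.example.bloatware")

Even after this kind of cleanup, removing core components like Google Play Services this way tends to break the phone, which is exactly the problem.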

At first glance you might think the hotspot is what’s draining the battery. That’s certainly a factor, but for context, I put the device in airplane mode (which also shuts off the hotspot) whenever I’m not using it.

Even with “aggressive battery saver” enabled and hours in airplane mode, the Google phone churned through its battery like crazy.

The fact that the Google phone’s battery still dies so quickly is revealing. Battery drain can actually be a useful indicator of how private your device is. Some of this comes down to deliberate privacy choices, and some of it comes from the inherent design of each operating system.

Why battery drain is a privacy clue

Battery life is a rough but useful proxy for what’s happening under the hood.
If your phone is dead by dinnertime even when you barely use it, something else is doing the work. And “something else” usually means:
• Background services constantly phoning home
• Analytics trackers collecting usage data
• System-level apps pinging servers even when you think they’re off
• Push notification frameworks that keep connections alive 24/7

That invisible activity not only kills your battery, it shows how much your phone is reporting back without your consent.
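You don’t have to take this on faith, either; Android’s own battery accounting will show you which processes are busiest. A rough sketch, again using adb from Python (the exact output format of dumpsys varies between Android versions, so treat the filtering below as a starting point):

import subprocess

# Dump the system's battery accounting and skim it for per-app attribution.
stats = subprocess.run(
    ["adb", "shell", "dumpsys", "batterystats"],
    capture_output=True, text=True, check=True,
).stdout

# Surface the estimated-power section and per-UID lines; each UID is an app.
for line in stats.splitlines():
    if "Estimated power use" in line or line.strip().startswith("Uid "):
        print(line.rstrip())

Run it at the end of the day, and the list of top consumers reads like a list of who your phone has been talking to.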

Your privacy choices also matter

The way I use my devices also has a huge impact on how much background activity happens.

On Graphene, I silo apps across six profiles. My main profile has all the functionality I mentioned before. And I’m constantly using the device, but a lot of what I do doesn’t require connectivity. I can take pictures, listen to music, write notes, and listen to audiobooks all without needing to be online.

When I want to check messages, email, or browse the internet, I simply turn WiFi on, and when I’m done I turn it off again (like turning off a light switch when I leave a room).

I also have other apps I rarely use, some of which are more privacy-invasive, like Uber or others that require sandboxed Google Play Services. These are kept in secondary profiles, and when those profiles are inactive, they’re effectively powered off. This means there’s no chance of these apps running in the background.

Meanwhile, on the Google Fi phone, even though I tried to delete as much bloatware as possible, there are countless apps I can’t uninstall and processes I can’t turn off.

Google Play Services is the biggest offender: It’s a hugely invasive process with elevated system permissions that is always on. You can think of it as a hidden operating system layered on top of Android, handling push notifications, location data, updates, and telemetry. It’s not optional.

In some cases it can actually make your battery more efficient by centralizing notifications instead of having each app run its own system. But that depends entirely on how you use your device.

For example, I don’t have a ton of apps on that device that need their processes centralized in a single, more efficient system. I just have two apps.

And I don’t use notifications at all, which means that the centralization of push notification services isn’t helpful to me. And even if I did use notifications, Signal is capable of handling its own push notifications without Google Play Services. So for my setup, having Play Services constantly pinging servers and running countless background processes is overkill. It makes a data-minimalist setup impossible.

Why GrapheneOS performs differently

Unlike most Android phones, and especially Google Fi devices, GrapheneOS doesn’t come with bloatware. It doesn’t have the same preinstalled junk running in the background; it’s an incredibly stripped-down OS. If you want Google Play services, you can install it, but it’s sandboxed just like any other app, without elevated permissions. That means it doesn’t get the special system access to spy on everything you do that it enjoys on stock Android.

On top of that, GrapheneOS lets you isolate apps into separate profiles, each with its own encryption key and background permissions. Apps in one profile can’t see or interact with apps in another.

This not only improves security, it massively reduces unnecessary background chatter. The Graphene phone spends most of its day idle instead of phoning home.

Background activity = surveillance

This comparison proved to me that even on a pared-down Google phone with limited use, there are countless processes running behind the scenes that I don’t control and don’t need.

And those processes make a huge difference in how fast the battery disappears.

Other phones show the same pattern

I compared my results with others in my travel group. Their iPhones drained quickly too, even with moderate use. Apple is better than Android on privacy, but iPhones are still packed with system services constantly talking to Apple and third-party servers. Background iCloud sync, location lookups, telemetry reporting, Siri analytics: it all adds up.

In short: if your phone battery is always gasping for air, it’s because it’s working for someone else.

Battery life is a window into privacy. If your phone is constantly trying to talk to servers you didn’t ask it to, it’s both:

  1. Bad for your battery
  2. Bad for your privacy

Why this matters

When I travel, I want peace of mind that my phone won’t die halfway through the day. But even more than that, I want confidence that it isn’t secretly working for someone else.

I don’t pretend to know every technical reason the Google Fi and Apple phones drain so fast, but I do know that I have far less control over their processes than I do on Graphene. On Graphene, I can granularly control which apps access the internet, eliminate Google Play Services entirely, and block apps from accessing sensors they don’t need. I can essentially be a data minimalist while still having all the connectivity I want on the go.

And the difference in performance is stark. My Graphene phone lasts all day, even with heavy use. It’s calm, efficient, and private. The others are invasive, riddled with hidden connections and background processes.

Battery life and privacy are more connected than we might realize, and GrapheneOS is winning on both. It’s another reason why switching to Graphene is one of the best privacy choices I’ve ever made.

Check out our video here if you’d like to learn how to install it:

 

Yours in privacy,
Naomi

Naomi Brockwell is a privacy advocate and professional speaker, MC, interviewer, producer, and podcaster, specialising in blockchain, cryptocurrency, and economics. She runs the NBTV channel on Rumble.

Elon Musk plans Wikipedia rival – building encyclopedia with AI

Published 3 October 2025
– By Editorial Staff
2 minute read

Tech billionaire Elon Musk has announced plans to launch Grokipedia, an AI-based encyclopedia that will compete with Wikipedia and, according to Musk, be a “massive improvement” over it. The project builds on his xAI chatbot Grok.

Musk announced the plans on X on Tuesday. Grokipedia will be built using his AI chatbot Grok, which was developed as an alternative to ChatGPT and trained on web data, including public tweets.

In a podcast earlier this month, Musk described how the technology will work.

— Grok is using heavy amounts of inference compute to look at, as an example, a Wikipedia page, what is true, partially true, or false, or missing in this page.

— Now rewrite the page to correct, remove the falsehoods, correct the half-truths, and add the missing context.

Musk has long criticized Wikipedia for being extremely politically correct and urged people to stop donating to the encyclopedia.

Critics often accuse the site of having transformed into a political weapon with a strong left-liberal bias, arguing that conservative and nationalist perspectives are deliberately portrayed as extreme and dangerous, while left-wing and liberal positions are presented as positive or as objective fact.

Grokipedia is expected to attract an audience among Musk’s followers and others who agree that Wikipedia has transformed into a politically biased propaganda tool rather than a neutral reference source.

Wikipedia – a propaganda weapon?

In an interview with Tucker Carlson, Wikipedia co-founder Larry Sanger recently launched a harsh attack on what his creation has become.

— Wikipedia became a weapon of ideological theological war, used to destroy its enemies, Sanger stated in the interview published on X.

He described how the encyclopedia he founded in 2001 together with Jimmy Wales, meant to bring together people with different perspectives, has now become a propaganda tool.

— The left has its march through the institutions. And when Wikipedia appeared, it was one of the institutions that they marched through, Sanger explained.

Controlled by anonymous editors

He also criticized the fact that the most powerful editors are anonymous, that conservative sources are blacklisted and that intelligence services have been involved in editing content on Wikipedia.

— We don’t know who they are. They can libel people with impunity, because they’re anonymous, Sanger said about the anonymous editors.

Wikipedia has seen internal conflicts among editors over how certain events should be presented. The site is the seventh most visited website in the world. A launch date for Grokipedia has not yet been announced.

Austrian armed forces switch to open source

Digital freedom

Published 1 October 2025
– By Editorial Staff
Austrian soldiers during an alpine exercise.
2 minute read

After an extensive planning process that began in 2020, the Austrian armed forces have now transitioned from Microsoft Office to the open source-based LibreOffice across all 16,000 workstations. The decision was not based on economic considerations but on a pursuit of increased digital sovereignty and independence from external cloud services.

The transition to LibreOffice is the result of a long-term strategy that began five years ago, when it became clear that Microsoft would move its office suite to cloud-based solutions. For an organization like the Austrian armed forces, where security around data handling is of the highest priority, this was a decisive turning point, writes Heise Online.

— It was very important for us to show that we are doing this primarily to strengthen our digital sovereignty, to maintain our independence in terms of ICT infrastructure and to ensure that data is only processed in-house, explains Michael Hillebrand from the armed forces’ Directorate 6 for ICT and cybersecurity in an interview with Austrian radio station Ö1.

Long-term planning and in-house development

The decision process began in 2020 and was completed the following year. During 2022, detailed planning commenced, in parallel with training internal developers to implement improvements and complementary software. Even at that stage, employees were given the opportunity to start using LibreOffice voluntarily.

In 2023, the project gained further momentum when a German company was hired for external support and development. At the same time, internal e-learning in LibreOffice was introduced, and the software became mandatory within the first departments.

Contributing to the global user base

The armed forces’ commitment to open source is not just a matter of consumption. The adaptations and improvements required for military purposes have been programmed and contributed back to the LibreOffice project. So far, over five person-years of work have been financed for this effort – contributions that all LibreOffice users worldwide can benefit from.

— We are not doing this to save money, Hillebrand emphasizes to ORF (Austrian Broadcasting Corporation). We are doing this so that the Armed Forces as an organization, which is there to function when everything else is down, can continue to have products that work within our sphere of influence.

In early September, Hillebrand together with his colleague Nikolaus Stocker presented the transition process at LibreOffice Conference 2025.

Extract of the features that the Austrian armed forces programmed for their own use and then contributed to the LibreOffice project. Image: Bundesheer/heise online

From Microsoft dependency to own control

The starting point in 2021 was Microsoft Office 2016 Professional with a large number of VBA and Access solutions deeply embedded in IT workflows. At the same time, the armed forces were already using their own Linux servers with Samba for email and collaboration solutions, rather than Microsoft’s alternatives.

This year, MS Office 2016 has been removed from all military computers. Those who still believe they need Microsoft Office for their duties can, however, apply internally to have the corresponding module from MS Office 2024 LTSC installed.

The transition underscores a growing trend among European government agencies to prioritize digital independence and control over sensitive information over the convenience of commercial cloud services.

Anthropic challenges Google and OpenAI with new AI flagship model

The future of AI

Published 30 September 2025
– By Editorial Staff
AI companies' race continues at a rapid pace, now with a new model from Anthropic.
2 minute read

AI company Anthropic launches Claude Sonnet 4.5, described as the company’s most advanced AI system to date and market-leading for programming. According to the company, the model performs better than competitors from Google and OpenAI.

Anthropic has released its new flagship model Claude Sonnet 4.5, which the company claims is the best on the market for coding. According to reports, the model outperforms both Google’s Gemini 2.5 Pro and OpenAI’s GPT-5 on several coding benchmarks, writes TechCrunch.

One of the most remarkable features is the model’s ability to work independently for extended periods. During early testing with enterprise customers, Claude Sonnet 4.5 has been observed coding autonomously for up to 30 hours. During these work sessions, the AI model has not only built applications but also set up database services, purchased domain names, and conducted security audits.

Focus on safety and reliability

Anthropic emphasizes that Claude Sonnet 4.5 is also their safest model to date, with enhanced protection against manipulation and barriers against harmful content. The company states that the model can create “production-ready” applications rather than just prototypes, representing a step forward in reliability.

The model is available via the Claude API and in the Claude chatbot. Pricing for developers is set at 3 dollars per million input tokens and 15 dollars per million output tokens.
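To put those numbers in perspective, here is a rough sketch of calling the model through Anthropic’s Python SDK and pricing the call from the announced rates. The model identifier string is an assumption based on the announcement, so check Anthropic’s documentation for the exact value:

from anthropic import Anthropic

client = Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-sonnet-4-5",  # assumed identifier for Claude Sonnet 4.5
    max_tokens=1024,
    messages=[{"role": "user", "content": "Write a binary search in Python."}],
)

# Announced pricing: $3 per million input tokens, $15 per million output tokens.
usage = response.usage
cost = usage.input_tokens * 3 / 1_000_000 + usage.output_tokens * 15 / 1_000_000
print(response.content[0].text)
print(f"~${cost:.4f} ({usage.input_tokens} input / {usage.output_tokens} output tokens)")

At those rates, a prompt of 10,000 input tokens that produces 2,000 output tokens costs about three cents for the input and three cents for the output, six cents in total.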

Fast pace in the AI race

The launch comes less than two months after the company’s previous flagship model, Claude Opus 4.1. This rapid development pace illustrates, according to TechCrunch, how difficult it is for AI companies to maintain an advantage in the intense competition.

Anthropic’s models have become popular among developers, and major tech companies like Apple and Meta are reported to use Claude internally.

Our independent journalism needs your support!
Consider a donation. You can donate any amount of your choosing, one-time or monthly. We appreciate all of your donations to keep us alive and running.

Don’t miss another article!

Sign up for our newsletter today!

Get uncensored news – free from industry interests and political correctness – from the Polaris of Enlightenment, every week.