Polaris of Enlightenment

Your therapist, your doctor, your insurance plan – now in Google’s ad system

Blue Shield exposed 4.7 million patients’ private health info to Google. Your most private information may now be fueling ads, pricing decisions, and phishing scams.

Published 10 May 2025
– By Naomi Brockwell
5 minute read

The healthcare sector is one of the biggest targets for cyberattacks—and it’s only getting worse.

Every breach spills sensitive information—names, medical histories, insurance details, even Social Security numbers. But this time, it wasn’t hackers breaking down the doors.

It was Blue Shield of California leaving the front gate wide open.

Between April 2021 and January 2024, Blue Shield exposed the health data of 4.7 million members by misconfiguring Google Analytics on its websites. That’s right—your protected health information was quietly piped to Google’s advertising systems.

Let’s break down what was shared:

  • Your insurance plan name and group number
  • Your ZIP code, gender, family size
  • Patient names, financial responsibility, and medical claim service dates
  • “Find a Doctor” searches—including provider names and types
  • Internal Blue Shield account identifiers
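How does a web analytics tag end up carrying data like this? A stock analytics snippet reports the full URL of every page a visitor loads. If a site puts sensitive details into its URLs, as search pages often do, those details ride along in every analytics hit. The sketch below is illustrative only, not Blue Shield's actual configuration: the parameter names (`v`, `tid`, `cid`, `t`, `dp`) come from Google's public Measurement Protocol, while the tracking ID, client ID, and page path are invented.

```python
from urllib.parse import urlencode

# Illustrative sketch only -- not Blue Shield's actual setup.
def build_pageview_hit(tracking_id: str, client_id: str, page_url: str) -> str:
    # Parameter names follow Google's public Measurement Protocol;
    # the values here are made up for illustration.
    params = {
        "v": "1",            # protocol version
        "tid": tracking_id,  # the site's analytics property ID
        "cid": client_id,    # pseudonymous ID linking your visits together
        "t": "pageview",
        "dp": page_url,      # the page path -- including any query string
    }
    return "https://www.google-analytics.com/collect?" + urlencode(params)

# A "Find a Doctor" search leaks the specialty straight into the hit:
hit = build_pageview_hit(
    "UA-XXXXX-1", "555.1234",
    "/find-a-doctor?specialty=oncology&zip=90210",
)
print(hit)
```

Note that the sensitive part is not the analytics code itself but the decision to let it see pages whose URLs encode health context. That is the "misconfiguration": nobody wrote "send medical searches to Google", yet that is exactly what the default behavior does.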

They didn’t just leak names. They leaked context. The kind of data that paints a detailed picture of your life.

And what’s worse—most people have become so numb to these data breaches that the most common response is “Why should I care?”

Let’s break it down.

1. Health data is deeply personal

This isn’t just a password or an email leak. This is your health. Your body. Your medical history. Maybe your therapist. Maybe a cancer screening. Maybe reproductive care.

This is the kind of stuff people don’t even tell their closest friends. Now imagine it flowing into a global ad system run by one of the biggest surveillance companies on earth.

Once shared, you don’t get to reel it back in. That vulnerability sticks.

2. Your family’s privacy is at risk—even if it was your data

Health information doesn’t exist in a vacuum. A diagnosis on your record might reveal a hereditary condition your children could carry. A test result might imply something about your partner. An STD might not just be your business.

This breach isn’t just about people directly listed on your health plan—it’s about your entire household being exposed by association. When sensitive medical data is shared without consent, it compromises more than your own privacy. It compromises your family’s.

3. Your insurance rates could be affected—without your knowledge

Health insurers already buy data from brokers to assess risk profiles. They don’t need your full medical chart to make decisions—they just need signals: a recent claim, a high-cost provider, a chronic condition inferred from your search history or purchases.

Leaks like this feed that ecosystem.

Even if the data is incomplete or inaccurate, it can still be used to justify higher premiums—or deny you coverage entirely. And good luck challenging that decision. The burden of proof rarely falls on the companies profiling you. It falls on you.

4. Leaked health data fuels exploitative advertising

When companies know which providers you’ve visited, which symptoms you searched, or what procedures you recently underwent, it gives advertisers a disturbingly precise psychological profile.

This kind of data isn’t used to help you—it’s used to sell to you.
You might start seeing ads for drugs, miracle cures, or dubious treatments. You may be targeted with fear-based campaigns designed to exploit your pain, anxiety, or uncertainty. And it can all feel eerily personal—because it is.

This is surveillance at its most predatory. In recent years, the FTC has cracked down on companies like BetterHelp and GoodRx for leaking health data to Facebook and Google to power advertising algorithms.

This breach could be yet another entry in the growing pattern of companies exploiting your data to target you.

5. It’s a goldmine for hackers running spear phishing campaigns

Hackers don’t need much to trick you into clicking a malicious link. But when they know:

  • Your doctor’s name
  • The date you received care
  • How much you owed
  • Your exact insurance plan and member ID

…it becomes trivially easy to impersonate your provider or insurance company.

You get a message that looks official. It references a real event in your life. You click. You log in. You enter your bank info.
And your accounts are drained before you even realize what happened.

6. You can’t predict how this data will be used—and that’s the problem

We tend to underestimate the power of data until it’s too late. It feels abstract. It doesn’t hurt.

But data accumulates. It’s cross-referenced. Sold. Repackaged. Used in ways you’ll never be told—until you’re denied a loan, nudged during an election, or flagged as a potential problem.

The point isn’t to predict every worst-case scenario. It’s that you shouldn’t have to. You should have the right to withhold your data in the first place.

Takeaways

The threat isn’t always a hacker in a hoodie. Sometimes it’s a quiet decision in a California boardroom that compromises millions of people at once.

We don’t get to choose when our data becomes dangerous. That choice is often made for us—by corporations we didn’t elect, using systems we can’t inspect, in a market that treats our lives as inventory.

But here’s what we can do:

  • Choose tools that don’t monetize your data. Every privacy-respecting service you use sends a signal.
  • Push for legislation that treats data like what it is—power. Demand the right to say no.
  • Educate others. Most people still don’t realize how broken the system is. Be the reason someone starts paying attention.
  • Support organizations building a different future. Privacy won’t win by accident. It takes all of us.

Control over your data is control over your future—and while that control is slipping, we’re not powerless.

We can’t keep waiting for the next breach to “wake people up.” Let this be the one that shifts the tide.

Privacy isn’t about secrecy. It’s about consent. And you never consented to this.

So yes, you should care. Because when your health data is treated as a business asset instead of a human right, no one is safe—unless we fight back.

 

Yours in privacy,
Naomi

Naomi Brockwell is a privacy advocate and professional speaker, MC, interviewer, producer, and podcaster, specialising in blockchain, cryptocurrency and economics. She runs the NBTV channel on Rumble.

TNT is truly independent!

We don’t have a billionaire owner, and our unique reader-funded model keeps us free from political or corporate influence. This means we can fearlessly report the facts and shine a light on the misdeeds of those in power.

Consider a donation to keep our independent journalism running…

Wifi signals can identify people with 95 percent accuracy

Mass surveillance

Published 21 August 2025
– By Editorial Staff
2 minute read

Italian researchers have developed a technique that can track and identify individuals by analyzing how wifi signals reflect off human bodies. The method works even when people change clothes and can be used for surveillance.

Researchers at La Sapienza University in Rome have developed a new method for identifying and tracking people using wifi signals. The technique, which the researchers call “WhoFi”, can recognize people with an accuracy rate of up to 95 percent, reports Sweclockers.

The method is based on the fact that wifi signals reflect and refract in different ways when they hit human bodies. By analyzing these reflection patterns using machine learning and artificial neural networks, researchers can create unique “fingerprints” for each individual.
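The core idea can be sketched in a few lines: enroll one "fingerprint" vector per person, then match each new observation to the closest enrolled vector. Everything below is a toy stand-in for illustration, not the WhoFi model: the vectors are invented, and the real system distills them from wifi reflection data with neural networks rather than hand-coding them.

```python
import math

def cosine(a, b):
    # Cosine similarity: 1.0 means identical direction, 0.0 unrelated.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Invented "fingerprints" standing in for learned signal features.
enrolled = {
    "person_a": [0.9, 0.1, 0.4, 0.7],
    "person_b": [0.2, 0.8, 0.6, 0.1],
}

def identify(observation):
    # Match a new observation to the most similar enrolled fingerprint.
    return max(enrolled, key=lambda name: cosine(enrolled[name], observation))

# A slightly noisy re-observation of person_a still matches person_a --
# the same kind of stability that lets the system recognize people
# across clothing changes.
print(identify([0.85, 0.15, 0.45, 0.65]))
```

The hard research problem is not the matching step shown here but learning feature vectors that stay stable for one person and distinct between people; that is what the neural networks are for.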

Works despite clothing changes

Experiments show that these digital fingerprints are stable enough to identify people even when they change clothes or carry backpacks. The average recognition rate is 88 percent, which researchers say is comparable to other automatic identification methods.

The research results were published in mid-July and describe how the technology could be used in surveillance contexts. According to the researchers, WhoFi can solve the problem of re-identifying people who were first observed via a surveillance camera in one location and then need to be found in footage from cameras in other locations.

Can be used for surveillance

The technology opens up new possibilities in security surveillance, but simultaneously raises questions about privacy and personal security. The fact that wifi networks, which are ubiquitous in today’s society, can be used to track people without their knowledge represents a new dimension of digital surveillance.

The researchers present their discovery as a breakthrough in the field of automatic person identification, but do not address the ethical implications that the technology may have for individuals’ privacy.

Danish students build drone that flies and swims

Published 18 August 2025
– By Editorial Staff
2 minute read

Four students at Aalborg University in Denmark have developed a revolutionary drone that seamlessly transitions between air and water. The prototype uses innovative rotor technology that automatically adapts to different environments.

Four students at Aalborg University in Denmark have created something that sounds like science fiction – a drone that can literally fly down into water, swim around and then jump back up into the air to continue flying, reports Tom’s Hardware.

Students Andrei Copaci, Pawel Kowalczyk, Krzysztof Sierocki and Mikolaj Dzwigalo have developed a prototype as their thesis project that demonstrates how future amphibious drones could function. The project has attracted attention from technology media after a demonstration video showed the drone flying over a pool, crashing down into the water, navigating underwater and then taking off into the air again.

Intelligent rotor technology solves the challenge

The secret behind the impressive performance lies in what the team calls a “variable rotor system”. The individual rotor blades can automatically adjust their pitch angle depending on whether the drone is in air or water.

When the drone flies through the air, the rotor blades work at a higher angle for optimal lift capacity. Underwater, the blade pitch is lowered to reduce resistance and improve efficiency during navigation. The system can also reverse thrust to increase maneuverability when the drone moves through tight passages underwater.
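The control logic described above can be sketched as a simple decision on the sensed medium. The pitch angles and density threshold below are invented for illustration, not the Aalborg team's actual values; the point is only that air (~1.2 kg/m³) and water (~1000 kg/m³) are so far apart that one density reading is enough to switch modes.

```python
# Toy sketch of the pitch-switching idea; angles and threshold are
# assumptions, not the students' real parameters.
AIR_PITCH_DEG = 18.0    # steeper bite for lift in thin air (assumed)
WATER_PITCH_DEG = 6.0   # shallow bite to cut drag in dense water (assumed)

def select_pitch(medium_density_kg_m3: float) -> float:
    """Return a blade pitch angle for the sensed medium.

    Air is roughly 1.2 kg/m3 and water roughly 1000 kg/m3, so any
    threshold between them cleanly separates the two regimes.
    """
    if medium_density_kg_m3 > 100:
        return WATER_PITCH_DEG
    return AIR_PITCH_DEG

print(select_pitch(1.2))     # airborne: high pitch for lift
print(select_pitch(1000.0))  # submerged: low pitch to reduce drag
```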

Most components in the prototype have been manufactured by the students themselves using 3D printers, since equivalent parts were not available on the market.

Although the project is still in an early concept stage and exists only as a single prototype, it demonstrates the possibilities for future amphibious vehicles. The technology could have applications in everything from rescue operations to environmental monitoring where vehicles need to move both above and below the water surface.

What I learnt at DEFCON

Why hacker culture is essential if we want to win the privacy war.

Published 16 August 2025
– By Naomi Brockwell
6 minute read

DEFCON is the world’s largest hacker conference. Every year, tens of thousands of people gather in Las Vegas to share research, run workshops, compete in capture-the-flag tournaments, and break things for sport. It’s a subculture. A testing ground. A place where some of the best minds in security and privacy come together not just to learn, but to uncover what’s being hidden from the rest of us. It’s where curiosity runs wild.

But to really get DEFCON, you have to understand the people.

What is a hacker?

I love hacker conferences because of the people. Hackers have a reputation for being dangerous. The stereotype is that they wear black hoodies and Guy Fawkes masks.

But that’s not why they’re dangerous: They’re dangerous because they ask questions and have relentless curiosity.

Hackers have a deep-seated drive to learn how things work, not just at the surface, but down to their core.

They aren’t content with simply using tech. They want to open it up, examine it, and see the hidden gears turning underneath.

A hacker sees a device and doesn’t just ask, “What does it do?”
They ask, “What else could it do?”
“What isn’t it telling me?”
“What’s under the hood, and why does no one want me to look?”

They’re curious enough to pull back curtains others want to remain closed.

They reject blind compliance and test boundaries.
When society says “Do this,” hackers ask “Why?”

They don’t need a rulebook or external approval.
They trust their own instincts and intelligence.
They’re guided by internal principles, not external prescriptions.
They’re not satisfied with the official version. They challenge it.

Because of this, hackers are often at the fringes of society. They’re comfortable with being misunderstood or even vilified. Hackers are unafraid to reveal truths that powerful entities want buried.

But that position outside the mainstream gives them perspective: They see what others miss.

Today, the word “hack” is everywhere:
Hack your productivity.
Hack your workout.
Hack your life.

What it really means is:
Don’t accept the defaults.
Look under the surface.
Find a better way.

That’s what makes hacker culture powerful.
It produces people who will open the box even when they’re told not to.
People who don’t wait for permission to investigate how the tools we use every day are compromising us.

That insistence on curiosity, noncompliance, and pushing past the surface to see what’s buried underneath is exactly what we need in a world built on hidden systems of control.

We should all aspire to be hackers, especially when it comes to confronting power and surveillance.

Everything is computer

Basically every part of our lives runs on computers now.
Your phone. Your car. Your thermostat. Your TV. Your kid’s toys.
And much of this tech has been quietly and invisibly hijacked for surveillance.

Companies and governments both want your data. And neither wants you asking how these data collection systems work.

We’re inside a deeply connected world, built on an opaque infrastructure that is extracting behavioral data at scale.

You have a right to know what’s happening inside the tech you use every day.
Peeking behind the curtain is not a crime. It’s a public service.

In today’s world, the hacker mindset is not just useful. It’s necessary.

Hacker culture in a surveillance world

People who ask questions are a nightmare for those who want to keep you in the dark.
They know how to dig.
They don’t take surveillance claims at face value.
They know how to verify what data is actually being collected.
They don’t trust boilerplate privacy policies or vague legalese.
They reverse-engineer SDKs.
They monitor network traffic.
They intercept outgoing requests and inspect payloads.

And they don’t ask for permission.

That’s what makes hacker culture so important. If we want any hope of reclaiming privacy, we need people with the skills and the willingness to pull apart the systems we’re told not to question.

On top of that, governments and corporations both routinely use outdated and overbroad legislation like the Computer Fraud and Abuse Act (CFAA) to prosecute public-interest researchers who investigate tech. Not because those researchers cause harm, but because they reveal things that others want kept hidden.

Laws like this pressure people towards compliance, and make them afraid to ask questions. The result is that curiosity feels like a liability, and it becomes harder for the average person to understand how the digital systems around us actually work.

That’s why the hacker mindset matters so much: Because no matter how hard the system pushes back, they keep asking questions.

The researchers I met at DEFCON

This year at DEFCON, I met researchers who are doing exactly that.

People uncovering surveillance code embedded in children’s toys.
People doing analysis on facial recognition SDKs.
People testing whether your photo is really deleted after “verification”.
People capturing packets who discovered that the “local only” systems you’re using aren’t local at all, and are sending your data to third parties.
People analyzing “ephemeral” IDs, and finding that your data was being stored and linked back to real identities.

You’ll be hearing from some of them on our channel in the coming months.
Their work is extraordinary, and it is helping all of us move towards a world of informed consent instead of blind compliance. Without this kind of research, the average person has no way to know what’s happening behind the scenes. We can’t make good decisions about the tech we use if we don’t know what it’s doing.

Make privacy cool again

Making privacy appealing is not just about education.
It’s about making it cool.

Hacker culture has always been at the forefront of turning fringe ideas into mainstream trends. Films like Hackers and The Matrix made hackers a status symbol. Movements like The Crypto Wars (when the government fought Phil Zimmermann over PGP), and the Clipper Chip fights (when they tried to standardize surveillance backdoors across hardware) made cypherpunks and privacy activists aspirational.

Hackers take the things mainstream culture mocks or fears, and make them edgy and cool.

That’s what we need here. We need a cultural transformation and to push back against the shameful language that demands we justify our desire for privacy.

You shouldn’t have to explain why you don’t want to be watched.
You shouldn’t have to defend your decision to protect your communications.

Make privacy a badge of honor.
Make privacy tools a status symbol.
Make the act of encrypting, self-hosting, and masking your identity a signal that says you’re independent, intelligent, and not easily manipulated.

Show that the people who care about privacy are the same people who invent the future.

Most people don’t like being trailblazers, because it’s scary. But if you’re reading this, you’re one of the early adopters, which means you’re already one of the fearless ones.

When you take a stand visibly, you create a critical mass and make it safer for others to join in. That’s how movements grow, and how we go from being weirdos in the corner to becoming the majority.

If privacy is stigmatized, reclaiming it will take bold, fearless, visible action.
The hacker community is perfectly positioned to lead that charge, and to make it safe for the rest of the world to follow.

When you show up and say, “I care about this,” you give others permission to care too.

Privacy may be on the fringe right now, but that’s where all great movements begin.

Final Thoughts

What I learnt at DEFCON is that curiosity is powerful.
Refusal to comply is powerful.
The simple act of asking questions can be revolutionary.

There are systems all around us extracting data and consolidating control, and most people don’t know how to fight that, and are too scared to try.

Hacker culture is the secret sauce.

Let’s apply this drive to the systems of surveillance.
Let’s investigate the tools we’ve been told to trust.
Let’s explain what’s actually happening.
Let’s give people the knowledge they need to make better choices.

Let’s build a world where curiosity isn’t criminalized but celebrated.

DEFCON reminded me that we don’t need to wait for permission to start doing that.

We can just do things.

So let’s start now.

 

Yours in privacy,
Naomi

Facebook’s insidious surveillance: VPN app spied on users

Mass surveillance

Published 9 August 2025
– By Editorial Staff
2 minute read

In 2013, Facebook acquired the Israeli company Onavo for approximately 120 million dollars. Onavo was marketed as a VPN app that would protect users’ data, reduce mobile data usage, and secure online activities. Over 33 million people downloaded the app believing it would strengthen their privacy.

In reality, Onavo gave Facebook complete insight into users’ phones – including which apps were used, how long they were open, and which websites were visited.

According to court documents and regulatory authorities, Facebook used this data to identify trends and map potential competitors. By analyzing user patterns in apps like Houseparty, YouTube, Amazon, and Snapchat, the company could determine which platforms posed a threat to its market dominance.

When Snapchat’s popularity began to explode in 2016, Facebook encountered a problem: encrypted traffic prevented insight into users’ behavior, reports Business Today. To circumvent this, Facebook launched an internal operation called “Project Ghostbusters”.

Facebook engineers developed specially adapted code based on Onavo’s infrastructure. The app installed a so-called root certificate on users’ phones – consent was hidden in legal documentation – which enabled Facebook to create fake certificates that mimicked Snapchat’s servers.

This made it possible to decrypt and analyze Snapchat’s traffic internally. The purpose was to use the information as a basis for strategic decisions, product development, or potential acquisitions.
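Why does installing a root certificate defeat encryption? A TLS client accepts any certificate for the right hostname that chains to an issuer in the device's trust store. The toy model below strips away the actual cryptography (real validation checks signatures, expiry, and more) to show just that trust decision; the hostname and issuer names are invented.

```python
# Toy model of certificate trust -- not real TLS validation. Here a
# "certificate" is just (hostname, issuer), and trust means the issuer
# appears in the device's trust store.
trusted_roots = {"PublicCA"}

def is_trusted(cert_host: str, cert_issuer: str, hostname: str) -> bool:
    # The client accepts any certificate for the expected hostname
    # whose issuer it has been configured to trust.
    return cert_host == hostname and cert_issuer in trusted_roots

# Normally a forged certificate from an unknown issuer is rejected:
assert not is_trusted("snapchat.example", "OnavoRoot", "snapchat.example")

# But once the app slips its own root into the device's trust store...
trusted_roots.add("OnavoRoot")

# ...the proxy's forged certificate passes validation, so the operator
# can terminate the "encrypted" session, read it, and re-encrypt it
# onward to the real server.
print(is_trusted("snapchat.example", "OnavoRoot", "snapchat.example"))
```

This is why consent buried in legal documentation mattered so much: installing that one root quietly converted every "secure" connection on the phone into one Facebook could open.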

Snapchat said no – Facebook copied instead

Based on data from Onavo, Facebook offered to buy Snapchat for 3 billion dollars. When Snapchat CEO Evan Spiegel declined, Facebook responded by launching Instagram Stories – a direct copy of Snapchat’s most popular feature. This became a decisive move in the competition between the two platforms.

In 2018, Apple removed Onavo from the App Store, citing that the app violated the company’s data protection rules. Facebook responded by launching a new app: Facebook Research, internally called Project Atlas, which offered similar surveillance functions. This time, the company paid users – some as young as 13 – up to 20 dollars per month to install the app.

When Apple discovered this, the company acted forcefully and revoked Facebook’s enterprise development certificates. This meant that all internal iOS apps were temporarily stopped – one of Apple’s most far-reaching measures ever.

In 2020, the Australian Competition and Consumer Commission (ACCC) sued Facebook, now called Meta, for misleading users with false promises about privacy. In 2023, Meta’s subsidiaries were fined a total of 20 million Australian dollars (approximately €11 million) for misleading behavior.

Why it still matters

Business Insider emphasizes that the Onavo story is not just about a misleading app. It also illustrates how one of the world’s most powerful tech companies built a surveillance system disguised as a privacy tool.

The fact that Facebook used the data to map competitors, copy features, and maintain control over the social media market – and also targeted underage users for data collection – raises additional ethical questions.

“Even a decade later, Onavo remains a case study in how ‘data is power’ and how far companies are willing to go to get it”, the publication concludes.
