
Exposing the lies that keep you trapped in surveillance culture

Debunking the biggest myths about data collection

Published 19 April 2025
– By Naomi Brockwell

Let’s be honest: data is useful. But we’re constantly told that in order to benefit from modern tech—and the insights that come with it—we have to give up our privacy. That useful data only comes from total access. That once your info is out there, you’ve lost control. That there’s no point in trying to protect it anymore.

These are myths. And they’re holding us back.

The truth is, you can benefit from data-driven tools without giving away everything. You can choose which companies to trust. You can protect one piece of information while sharing another. You can demand smarter systems that deliver insights without exploiting your identity.

Privacy isn’t about opting out of technology—it’s about choosing how you engage with it.

In this issue, we’re busting four of the most common myths about data collection. Because once you understand what’s possible, you’ll see how much power you still have.

Myth #1: “I gave data to one company, so my privacy is already gone”.

This one is everywhere. Once people sign up for a social media account or share info with a fitness app, they often throw up their hands and say, “Well, I guess my privacy’s already gone”.

But that’s not how privacy works.

Privacy is about choice. It’s about context. It’s about setting boundaries that make sense for you.

Just because you’ve shared data with one company doesn’t mean you’re giving blanket permission to every app, government agency, or ad network to track you forever.

You’re allowed to:

  • Share one piece of information and protect another.
  • Say yes to one service and no to others.
  • Change your mind, rotate your identifiers, and reduce future exposure.

Privacy isn’t all or nothing. And it’s never too late to take some power back.

Myth #2: “If I give a company data, they can do whatever they want with it”.

Not if you pick the right company.

Many businesses are committed to ethical data practices. Some explicitly state in their terms that they’ll never share your data, sell it, or use it outside the scope of the service you signed up for.

Look for platforms that don’t retain unnecessary data. There are more of them out there than you think.

Myth #3: “To get insights, a company needs to see my data”.

This one’s finally starting to crumble—thanks to game-changing tech like homomorphic encryption.

Yes, really: companies can now perform computations on encrypted data without ever decrypting it.

It’s already in use in financial services, research, and increasingly, consumer apps. It proves that privacy and data analysis can go hand in hand.

Imagine this: a health app computes your sleep averages, detects issues, and offers recommendations—without ever seeing your raw data. It stays encrypted the whole time.
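
To make that concrete, here is a minimal sketch of additively homomorphic encryption in Python: a toy Paillier-style scheme with deliberately tiny keys. It illustrates the principle (the server sums ciphertexts it can never read; only the key holder sees the result), not the scheme any particular health app actually uses.

```python
# Toy Paillier-style additively homomorphic encryption.
# Deliberately tiny parameters, for illustration only -- not real cryptography.
import math
import secrets


def keygen(p: int = 1009, q: int = 1013):
    """Generate a toy keypair from two small primes."""
    n = p * q
    lam = math.lcm(p - 1, q - 1)
    mu = pow(lam, -1, n)  # valid because the generator is fixed to n + 1
    return (n,), (n, lam, mu)  # public key, private key


def encrypt(pub, m: int) -> int:
    (n,) = pub
    n2 = n * n
    r = secrets.randbelow(n - 2) + 2
    while math.gcd(r, n) != 1:
        r = secrets.randbelow(n - 2) + 2
    # c = (n+1)^m * r^n mod n^2
    return (pow(n + 1, m, n2) * pow(r, n, n2)) % n2


def decrypt(priv, c: int) -> int:
    n, lam, mu = priv
    x = pow(c, lam, n * n)
    return ((x - 1) // n) * mu % n


def add_ciphertexts(pub, c1: int, c2: int) -> int:
    """Multiplying ciphertexts adds the hidden plaintexts."""
    (n,) = pub
    return (c1 * c2) % (n * n)


if __name__ == "__main__":
    pub, priv = keygen()
    # The user encrypts nightly sleep minutes; only ciphertexts leave the device.
    nights = [420, 390, 450, 480, 400, 430, 410]
    cts = [encrypt(pub, m) for m in nights]

    # The service aggregates without ever decrypting a single value.
    total_ct = cts[0]
    for ct in cts[1:]:
        total_ct = add_ciphertexts(pub, total_ct, ct)

    # Only the key holder learns the answer.
    print("average sleep (minutes):", decrypt(priv, total_ct) / len(nights))
```

Fully homomorphic schemes go further and support arbitrary computation on ciphertexts, but even this additive property is enough for averages, totals and similar aggregate insights.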

We need to champion this kind of innovation. More research. More tools. More adoption. And more support for the companies already doing it—because giving them our business signals that the investment was worth it, and encourages other companies to jump on board.

Myth #4: “To prove who you are, you have to hand over sensitive data.”

You’ve heard this from banks, employers, and government forms: “We need your full ID to verify who you are”.

But here’s the problem: every time we hand over sensitive data, we increase our exposure to breaches and identity theft. It’s a bad system.

There’s a better way.
With zero-knowledge proofs, we can prove things like being over 18, or matching a record—without revealing our address, birthdate, or ID number.
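
As a simplified illustration of the mechanic, here is a Schnorr-style proof of knowledge in Python: the prover convinces a verifier that they hold a secret credential without ever transmitting it. The group parameters are toy-sized and purely illustrative; real age checks ("over 18") build range proofs on the same underlying idea.

```python
# Minimal non-interactive Schnorr proof of knowledge (Fiat-Shamir heuristic).
# Toy group parameters, for illustration only -- not production cryptography.
import hashlib
import secrets

P = 2039  # safe prime: P = 2*Q + 1
Q = 1019  # prime order of the subgroup we work in
G = 4     # generator of the order-Q subgroup (a quadratic residue mod P)


def hash_to_challenge(*values: int) -> int:
    data = b"|".join(str(v).encode() for v in values)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % Q


def keygen():
    """Secret credential x and its public commitment y = G^x mod P."""
    x = secrets.randbelow(Q - 1) + 1
    return x, pow(G, x, P)


def prove(x: int, y: int):
    """Prove knowledge of x such that y = G^x mod P, without revealing x."""
    k = secrets.randbelow(Q - 1) + 1
    r = pow(G, k, P)                # commitment
    e = hash_to_challenge(G, y, r)  # challenge derived by hashing
    s = (k + e * x) % Q             # response
    return r, s


def verify(y: int, r: int, s: int) -> bool:
    e = hash_to_challenge(G, y, r)
    # Accept iff G^s == r * y^e (mod P), which holds only if the prover knows x.
    return pow(G, s, P) == (r * pow(y, e, P)) % P


if __name__ == "__main__":
    secret, public = keygen()
    commitment, response = prove(secret, public)
    # The verifier sees only (public, commitment, response) -- never the secret.
    print("proof accepted:", verify(public, commitment, response))
```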

The tech already exists. But companies and institutions are slow to adopt it or even recognize it as legitimate. This won’t change until we demand better.

Let’s push for a world where:

  • Our identity isn’t a honeypot for hackers.
  • We can verify ourselves without becoming vulnerable.
  • Privacy-first systems are the norm—not the exception.

Takeaways

The idea that we have to trade privacy for progress is a myth. You can have both. The tools exist. The choice is ours.

Privacy isn’t about hiding—it’s about control. You can choose to share specific data without giving up your rights or exposing everything.

Keep these in mind:

  • Pick tools that respect you. Look for platforms with strong privacy practices and transparent terms.
  • Use privacy-preserving tech. Homomorphic encryption and zero-knowledge proofs are real—and growing.
  • Don’t give up just because you shared once. Privacy is a spectrum. You can always take back control.
  • Talk about it. The more people realize they have options, the faster we change the norm.

Being informed doesn’t have to mean being exploited.
Let’s demand better.

 

Yours in privacy,
Naomi

Naomi Brockwell is a privacy advocate and professional speaker, MC, interviewer, producer and podcaster, specialising in blockchain, cryptocurrency and economics. She runs the NBTV channel on YouTube.


Your therapist, your doctor, your insurance plan – now in Google’s ad system

Blue Shield exposed 4.7 million patients’ private health info to Google. Your most private information may now be fueling ads, pricing decisions, and phishing scams.

Published 10 May 2025, 7:37
– By Naomi Brockwell

The healthcare sector is one of the biggest targets for cyberattacks—and it’s only getting worse.

Every breach spills sensitive information—names, medical histories, insurance details, even Social Security numbers. But this time, it wasn’t hackers breaking down the doors.

It was Blue Shield of California leaving the front gate wide open.

Between April 2021 and January 2024, Blue Shield exposed the health data of 4.7 million members by misconfiguring Google Analytics on its websites. That’s right—your protected health information was quietly piped to Google’s advertising systems.

Let’s break down what was shared:

  • Your insurance plan name and group number
  • Your ZIP code, gender, family size
  • Patient names, financial responsibility, and medical claim service dates
  • “Find a Doctor” searches—including provider names and types
  • Internal Blue Shield account identifiers

They didn’t just leak names. They leaked context. The kind of data that paints a detailed picture of your life.
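
The mechanics are mundane: analytics tags typically report the full page URL plus whatever custom fields a developer wires up, so if a "Find a Doctor" search or a claims page puts member details in the URL, those details travel to the ad platform along with the page view. The sketch below is a hypothetical illustration of that failure mode in Python; the payload format, field names and URL are invented for the example, and it is not Blue Shield's code or Google's actual protocol.

```python
# Hypothetical illustration of how an analytics "page view" hit can leak PII.
# Payload format and field names are invented for this sketch.
from urllib.parse import urlsplit, parse_qs, urlencode, urlunsplit

SENSITIVE_PARAMS = {"member_id", "plan", "provider", "specialty", "dob", "zip"}


def build_hit(page_url: str, scrub: bool = False) -> dict:
    """Build the event an analytics snippet would send for this page view."""
    parts = urlsplit(page_url)
    query = parse_qs(parts.query)

    if scrub:
        # Drop anything that looks like personal or plan data before sending.
        query = {k: v for k, v in query.items() if k not in SENSITIVE_PARAMS}

    reported_url = urlunsplit(
        (parts.scheme, parts.netloc, parts.path, urlencode(query, doseq=True), "")
    )
    return {"event": "page_view", "page_location": reported_url}


if __name__ == "__main__":
    url = ("https://member.example-insurer.com/find-a-doctor"
           "?provider=Dr.+Jane+Smith&specialty=oncology"
           "&plan=PPO-Gold&member_id=123456789&zip=94110")

    print("leaky hit:   ", build_hit(url))              # member data rides along
    print("scrubbed hit:", build_hit(url, scrub=True))  # sensitive fields removed first
```

The point of the scrubbed variant is simply that this kind of leak is a configuration choice, not an inevitability.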

And what’s worse—most people have become so numb to these data breaches that the most common response is “Why should I care?”

Let’s break it down.

1. Health data is deeply personal

This isn’t just a password or an email leak. This is your health. Your body. Your medical history. Maybe your therapist. Maybe a cancer screening. Maybe reproductive care.

This is the kind of stuff people don’t even tell their closest friends. Now imagine it flowing into a global ad system run by one of the biggest surveillance companies on earth.

Once shared, you don’t get to reel it back in. That vulnerability sticks.

2. Your family’s privacy is at risk—even if it was your data

Health information doesn’t exist in a vacuum. A diagnosis on your record might reveal a hereditary condition your children could carry. A test result might imply something about your partner. An STD might not just be your business.

This breach isn’t just about people directly listed on your health plan—it’s about your entire household being exposed by association. When sensitive medical data is shared without consent, it compromises more than your own privacy. It compromises your family’s.

3. Your insurance rates could be affected—without your knowledge

Health insurers already buy data from brokers to assess risk profiles. They don’t need your full medical chart to make decisions—they just need signals: a recent claim, a high-cost provider, a chronic condition inferred from your search history or purchases.

Leaks like this feed that ecosystem.

Even if the data is incomplete or inaccurate, it can still be used to justify higher premiums—or deny you coverage entirely. And good luck challenging that decision. The burden of proof rarely falls on the companies profiling you. It falls on you.

4. Leaked health data fuels exploitative advertising

When companies know which providers you’ve visited, which symptoms you searched, or what procedures you recently underwent, it gives advertisers a disturbingly precise psychological profile.

This kind of data isn’t used to help you—it’s used to sell to you.
You might start seeing ads for drugs, miracle cures, or dubious treatments. You may be targeted with fear-based campaigns designed to exploit your pain, anxiety, or uncertainty. And it can all feel eerily personal—because it is.

This is surveillance at its most predatory. In recent years, the FTC has cracked down on companies like BetterHelp and GoodRx for leaking health data to Facebook and Google to power advertising algorithms.

This breach could be yet another entry in the growing pattern of companies exploiting your data to target you.

5. It’s a goldmine for hackers running spear phishing campaigns

Hackers don’t need much to trick you into clicking a malicious link. But when they know:

  • Your doctor’s name
  • The date you received care
  • How much you owed
  • Your exact insurance plan and member ID

…it becomes trivially easy to impersonate your provider or insurance company.

You get a message that looks official. It references a real event in your life. You click. You log in. You enter your bank info.
And your accounts are drained before you even realize what happened.

6. You can’t predict how this data will be used—and that’s the problem

We tend to underestimate the power of data until it’s too late. It feels abstract. It doesn’t hurt.

But data accumulates. It’s cross-referenced. Sold. Repackaged. Used in ways you’ll never be told—until you’re denied a loan, nudged during an election, or flagged as a potential problem.

The point isn’t to predict every worst-case scenario. It’s that you shouldn’t have to. You should have the right to withhold your data in the first place.

Takeaways

The threat isn’t always a hacker in a hoodie. Sometimes it’s a quiet decision in a California boardroom that compromises millions of people at once.

We don’t get to choose when our data becomes dangerous. That choice is often made for us—by corporations we didn’t elect, using systems we can’t inspect, in a market that treats our lives as inventory.

But here’s what we can do:

  • Choose tools that don’t monetize your data. Every privacy-respecting service you use sends a signal.
  • Push for legislation that treats data like what it is—power. Demand the right to say no.
  • Educate others. Most people still don’t realize how broken the system is. Be the reason someone starts paying attention.
  • Support organizations building a different future. Privacy won’t win by accident. It takes all of us.

Control over your data is control over your future—and while that control is slipping, we’re not powerless.

We can’t keep waiting for the next breach to “wake people up.” Let this be the one that shifts the tide.

Privacy isn’t about secrecy. It’s about consent. And you never consented to this.

So yes, you should care. Because when your health data is treated as a business asset instead of a human right, no one is safe—unless we fight back.

 

Yours in privacy,
Naomi


PoX: New memory chip from China sets speed record

Published 7 May 2025
– By Editorial Staff

A research team at Fudan University in China has developed the fastest semiconductor memory reported to date. The new memory, called PoX, is a type of non-volatile flash memory that can write a single bit in just 400 picoseconds – equivalent to roughly 2.5 billion write operations per second.
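
That rate follows directly from the write time; a quick back-of-the-envelope check (plain arithmetic, not a figure from the paper):

```python
# One bit written every 400 picoseconds -> maximum write operations per second.
write_time_s = 400e-12             # 400 ps expressed in seconds
ops_per_second = 1 / write_time_s
print(f"{ops_per_second:.1e} writes per second")  # 2.5e+09, i.e. about 2.5 billion
```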

The results were recently published in the scientific journal Nature. Unlike traditional RAM (such as SRAM and DRAM), which is fast but loses its contents when power is cut, non-volatile memory such as flash retains stored information without power. The problem has been that these memories are significantly slower – often thousands of times – which is a bottleneck for today’s AI systems that handle huge amounts of data in real time.

The research team, led by Professor Zhou Peng, achieved the breakthrough by replacing silicon channels with two-dimensional Dirac graphene – a material that allows extremely fast charge transfer. By fine-tuning the so-called “Gaussian length” of the channel, the researchers were able to create a phenomenon they call two-dimensional superinjection, which allows effectively unlimited charge transfer to the memory storage.

“Using AI-driven process optimization, we drove non-volatile memory to its theoretical limit. This paves the way for future high-speed flash memory”, Zhou told the Chinese news agency Xinhua.

“Opens up new applications”

Co-author Liu Chunsen compares the difference to going from a USB flash drive that can do 1,000 writes per second to a chip that does a billion – in the same amount of time.

The technology combines low power consumption with extreme speed and could be particularly valuable for AI in battery-powered devices and systems with limited power supplies. If PoX can be mass-produced, it could reduce the need for separate caches, cut energy use and enable instant start-up of computers and mobiles.

Fudan engineers are now working on scaling up the technology and developing prototypes. No commercial partnerships have yet been announced.

“Our breakthrough can reshape storage technology, drive industrial upgrades and open up new application scenarios”, Zhou asserts.

Without consent

How parents unknowingly build surveillance files on their children.

Published 3 May 2025
– By Naomi Brockwell

Your child’s first digital footprint isn’t made by them—it’s made by you

What does the future look like for your child?

Before they can even talk, many kids already have a bigger digital footprint than their parents did at 25.

Every ultrasound shared on Facebook.
Every birthday party uploaded to Instagram.
Every proud tweet about a funny thing they said.

Each post seems harmless—until you zoom out and realize you’re building a permanent, searchable, biometric dossier on your child, curated by you.

This isn’t fearmongering. It’s the reality of a world where data is forever.
And it’s not just your friends and family who are watching.

Your kid is being profiled before they hit puberty

Here’s the uncomfortable truth:

When you upload baby photos, you’re training facial recognition databases on their face—at every age and stage.

When you post about their interests, health conditions, or behavior, you’re populating detailed profiles that can predict who they might become.

These profiles don’t just sit idle.
They’re analyzed, bought, and sold.

By the time your child applies for a job or stands up for something they believe in, they may already be carrying a hidden score assigned by an algorithm—built on data you posted.

When their childhood data comes back to haunt them

Imagine your child years from now, applying for a travel visa, a job, or just trying to board a flight.

A background check pulls information from facial recognition databases and AI-generated behavior profiles—flagging them for additional scrutiny based on “historic online associations”.

They’re pulled aside. Interrogated. Denied entry. Or worse, flagged permanently.

Imagine a future law that flags people based on past “digital risk indicators”—and your child’s online record becomes a barrier to accessing housing, education, or financial services.

Insurance companies can use their profile to label them a risky customer.

Recruiters might quietly filter them out based on years-old digital behavior.

Not because they did something wrong—but because of something you once shared.

Data doesn’t disappear.
Governments change. Laws evolve.
But surveillance infrastructure rarely gets rolled back.

And once your child’s data is out there, it’s out there forever.
Feeding systems you’ll never see.
Controlled by entities you’ll never meet.

For purposes you’ll never fully understand.

The rise of biometric surveillance—and why it targets kids first

Take Discord’s new AI selfie-based age verification. To prove they’re 13+, children are encouraged to submit selfies—feeding sensitive biometric data into AI systems.

You can change your password. You can’t change your face.

And yet, we’re normalizing the idea that kids should hand over their most immutable identifiers just to participate online.

Some schools already collect facial scans for attendance. Some toys use voice assistants that record everything your child says.

Some apps marketed as “parental control” tools grant third-party employees backend access to your child’s texts, locations—even live audio.

Ask yourself: Do you trust every single person at that company with your child’s digital life?

“I know you love me, and would never do anything to harm me…”

In the short film Without Consent, by Deutsche Telekom, a future version of a young girl named Ella speaks directly to her parents. She pleads with them to protect her digital privacy before it’s too late.

She imagines a future where:

  • Her identity is stolen.
  • Her voice is cloned to scam her mom into sending money.
  • Her old family photo is turned into a meme, making her a target of school-wide bullying.
  • Her photos appear on exploitation sites—without her knowledge or consent.

It’s haunting because it’s plausible.

This is the world we’ve built.
And your child’s data trail—your posts—is the foundation.

The most powerful privacy lesson you can teach? How you live online.

Children learn how to navigate the digital world by watching you.

What are you teaching them if you trade their privacy for likes?

The best gift you can give them isn’t a new device—it’s the mindset and tools to protect themselves in a world that profits from their exposure.

Even “kid-safe” tech often betrays that trust.

Baby monitors have leaked footage.

Tracking apps have glitched and exposed locations of random children (yes, really).

Schools collect and store sensitive information with barely any safeguards—and breaches happen all the time.

How to protect your child’s digital future

Stop oversharing
Avoid posting photos, birthdays, locations, or anecdotes about your child online—especially on platforms that monetize engagement.

Ditch spyware apps
Instead of surveillance, foster open dialogue. If monitoring is necessary, choose open-source, self-hosted tools where you control the data—not some faceless company.

Teach consent early
Help your child understand that their body, thoughts, and information are theirs to control. Make digital consent a family value.

Opt out of biometric collection
Say no to tools that demand selfies, facial scans, or fingerprints. Fight back against the normalization of biometric surveillance for kids.

Use aliases and VoIP numbers
When creating accounts for your child, use email aliases and VoIP numbers to avoid linking their real identity across platforms.

Push schools and apps for better policies
Ask your child’s school: What data do they collect? Who has access? Is it encrypted?
Push back on apps that demand unnecessary permissions. Ask hard questions.

This isn’t paranoia—it’s parenting in the digital age

This is about protecting your child’s right to grow up without being boxed in by their digital past.

About giving them the freedom to explore ideas, try on identities, and make mistakes—without it becoming a permanent record.

Privacy is protection.
It’s dignity.
It’s autonomy.

And it’s your job to help your child keep it.
Let’s give the next generation a chance to write their own story.

 

Yours in privacy,
Naomi


Researchers: Soon impossible to detect AI deepfakes

The future of AI

Published 2 May 2025
– By Editorial Staff
Already today, it can be difficult to distinguish manipulated images and videos from the real thing – and soon it may become virtually impossible.

The most advanced AI-generated deepfake videos can now fake people’s heartbeats so convincingly that even specially developed detectors are fooled, according to a new study from Humboldt-Universität in Berlin.

The researchers’ findings raise concerns that technological developments may soon make manipulated material indistinguishable from authentic images and video.

In the past, detectors have used a special method (remote photoplethysmography, or rPPG) that analyzes tiny color changes in the skin to detect a pulse – a relatively reliable indicator of whether a video clip is genuine or not.
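
In rough terms, rPPG averages the skin-pixel color in each frame of a face video and looks for a periodic component in the normal heart-rate band. The sketch below is a simplified illustration of that core step, run on a synthetic clip rather than real footage; actual detectors add face tracking, detrending and far more robust spectral analysis.

```python
# Simplified sketch of the rPPG idea: recover a pulse rate from the average
# green-channel intensity of a face region across video frames.
import numpy as np


def estimate_heart_rate(face_frames: np.ndarray, fps: float) -> float:
    """face_frames: RGB array of shape (num_frames, height, width, 3)."""
    # 1. One sample per frame: mean green value over the (assumed) skin region.
    green = face_frames[..., 1].reshape(len(face_frames), -1).mean(axis=1)

    # 2. Remove the baseline so only the small periodic variation remains.
    green = green - green.mean()

    # 3. Find the dominant frequency within a plausible heart-rate band.
    spectrum = np.abs(np.fft.rfft(green))
    freqs = np.fft.rfftfreq(len(green), d=1.0 / fps)
    band = (freqs >= 0.7) & (freqs <= 4.0)        # ~42 to 240 beats per minute
    peak_freq = freqs[band][np.argmax(spectrum[band])]
    return peak_freq * 60.0                        # beats per minute


if __name__ == "__main__":
    # Synthetic stand-in for a face video: 10 s at 30 fps with a faint 1.2 Hz
    # (72 bpm) brightness oscillation buried in pixel noise.
    fps, seconds = 30, 10
    t = np.arange(fps * seconds) / fps
    pulse = 1.5 * np.sin(2 * np.pi * 1.2 * t)      # tiny periodic color change
    frames = (128 + pulse[:, None, None, None]
              + np.random.normal(0, 1.0, (len(t), 8, 8, 3))).astype(np.float32)
    print(f"estimated heart rate: {estimate_heart_rate(frames, fps):.0f} bpm")
```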

But in the new study, the researchers created 32 deepfake videos that not only looked real to human eyes but also imitated pulse beats. When these videos were tested against an rPPG-based detector, they were incorrectly classified as genuine.

“Here we show for the first time that recent high-quality deepfake videos can feature a realistic heartbeat and minute changes in the color of the face, which makes them much harder to detect”, said Professor Peter Eisert, the study’s lead author, in a statement.

Increases risk of fraud

According to the study, pulse signals from original videos can be “inherited” by deepfakes, because the AI models replicate slight variations in skin tone and blood flow. Heat maps in the study showed near-identical light changes in both genuine and manipulated videos.

“Small variations in skin tone of the real person get transferred to the deepfake together with facial motion, so that the original pulse is replicated in the fake video”, Eisert further explains.

These advances increase the risk of deepfakes being used for financial fraud, disinformation and non-consensual pornography, among other abuses. In 2023, an independent researcher estimated that over 244,000 manipulated videos were uploaded to the 35 largest deepfake pornography sites in a single week.

Technical arms race

Despite the study’s worrying results, there is some hope of reversing the trend. The researchers note that today’s deepfakes still fail to replicate natural variations in blood flow over time. In addition, tech giants such as Adobe and Google are developing watermarks to mark AI-generated material.

Meanwhile, the US Congress recently passed the Take It Down Act, which criminalizes the dissemination of non-consensual sexual images – including AI-generated ones. But experts warn that the technological arms race between creators and detectors requires constant adaptation.

“This continuous evolution of deepfake technologies poses challenges for content authentication and necessitates the development of robust detection mechanisms”, the study points out, noting that as AI development accelerates, the fight against digital fraud is also becoming more urgent.

Others have raised a very different concern – that the widespread proliferation of AI-engineered material could be used as a pretext for tougher censorship and laws that restrict people’s freedom online in various ways.
