
Your doctor’s visit isn’t private

Published 26 July 2025
– By Naomi Brockwell
6 minute read

A member of our NBTV members’ chat recently shared something with us after a visit to her doctor.

She’d just gotten back from an appointment and felt really shaken up. Not because of a diagnosis, but because she realized just how little control she had over her personal information.

It started right at check-in, before she’d even seen the doctor.
Weight. Height. Blood pressure. Lifestyle habits. Do you drink alcohol? Are you depressed? Are you sexually active?
All the usual intake questions.

It all felt deeply personal, but this kind of data collection is normal now.
Yet she couldn’t help but wonder: shouldn’t they ask why she’s there first? How can they know what information is actually relevant without knowing the reason for the visit? Why collect everything upfront, without context?

She answered every question anyway. Because pushing back makes people uncomfortable.

Finally, she was through with the medical assistant’s questions and taken to the actual doctor. That’s when she confided something personal, something she felt was important for the doctor to know, but made a simple request:

“Please don’t record that in my file”.

The doctor responded:

“Well, this is something I need to know”.

She replied:

“Yes, that’s why I told you. But I don’t want it written down. That file gets shared with who knows how many people”.

The doctor paused, then said:

“I’m going to write it in anyway”.

And just like that, her sensitive information, something she explicitly asked to keep off the record, became part of a permanent digital file.

That quiet moment said everything. Not just about one doctor, but about a system that no longer treats medical information as something you control. Because once something is entered into your electronic health record, it’s out of your hands.

You can’t delete it.

You can’t restrict who sees it.


Financially incentivized to collect your data

The system that the medical assistant and doctor enter your information into is called an Electronic Health Record (EHR). EHRs aren’t just a digital version of your paper file. They’re part of a government-mandated system: through legislation and financial incentives from the US Department of Health and Human Services (HHS), clinics and hospitals were required to digitize patient data.

On top of that, medical providers are required to prove what’s called “Meaningful Use” of these EHR systems. Unless they can demonstrate meaningful use, the provider won’t receive their Medicare and Medicaid incentive payments. So when you’re asked about your blood pressure, your weight, and your alcohol use, it’s part of a quota. There’s a financial incentive to collect your data, even when it’s not directly related to your care. These financial incentives reward over-collection and over-documentation. There are no incentives for respecting your boundaries.

You’re not just talking to your doctor. You’re talking to the system

Most people have no idea how medical records actually work in the US. They assume that what they tell a doctor stays between the two of them.

That’s not how it works.

In the United States, HIPAA states that your personally identifiable medical data can be shared, without needing to get your permission first, for a wide range of “healthcare operations” purposes.

Sounds innocuous enough. But the definition of “healthcare operations” is almost 400 words long. It’s essentially a list of about 65 non-clinical business activities that have nothing to do with your medical treatment whatsoever.

Those operations involve not just hospitals, pharmacy systems, and insurance companies, but billing contractors, analytics firms, and all kinds of third-party vendors. According to a 2010 HHS regulation, there are more than 2.2 million entities (covered entities and business associates) with which your personally identifiable, sensitive medical information can be shared, if those who hold it choose to share it. That number doesn’t even include government entities with access to your data, because they aren’t considered covered entities or business associates.

Your data doesn’t stay in the clinic. It gets passed upstream, without your knowledge and without needing your consent. No one needs to notify you when your data is shared. And you’re not allowed to opt out. You can’t even get a list of everyone it’s been shared with. It’s just… out there.

The doctor may think they’re just “adding it to your chart”. But what they’re actually doing is feeding a giant, invisible machine that exists far beyond that exam room.

We have an entire video diving into the details if you’re interested: You Have No Medical Privacy

Data breaches

Legal sharing isn’t the only risk of this accumulated data. What about data breaches? This part is almost worse.

Healthcare systems are one of the top targets for ransomware attacks. That’s because the data they hold is extremely valuable. Full names, birth dates, Social Security numbers, medical histories, and billing information, all in one place.

It’s hard to find a major health system that hasn’t been breached. In fact, a 2023 report found that over 90% of healthcare organizations surveyed had experienced a data breach in the past three years.

That means if you’ve been to the doctor in the last few years, there’s a very real chance that some part of your medical file is already floating around, whether on the dark web, in a leaked ransomware dump, or being sold to data brokers.

The consequences aren’t just theoretical. In one high-profile case of such a healthcare breach, people took their own lives after private details from their medical files were leaked online.

So when your doctor says, “This is just for your chart,” understand what that really means. You’re not just trusting your doctor. You’re trusting a system that has a track record of failing to protect you.

What happens when trust breaks

Once you start becoming aware of how your data is being collected and shared, you see it everywhere. And in high-stakes moments, like a medical visit, pushing back is hard. You’re at your most vulnerable. And the power imbalance becomes really obvious.

So what do patients do when they feel that their trust has been violated? They start holding back. They say less. They censor themselves.

This is exactly the opposite of what should happen in a healthcare setting. Your relationship with your doctor is supposed to be built on trust. But when you tell your doctor something in confidence, and they say, “I’m going to log it anyway,” that trust is gone.

The problem here isn’t just one doctor. From their perspective, they’re doing what’s expected of them. The entire system is designed to prioritize documentation and compliance over patient privacy.

Privacy is about consent, not secrecy

But privacy matters. And not because you have something to hide. You might want your doctor to have full access to everything. That’s fine. But the point is, you should be the one making that call.

Right now, that choice is being stripped away by systems and policies that normalize forced disclosure.

We’re being told our preferences don’t matter. That our data isn’t worth protecting. And we’re being conditioned to stay quiet about it.

That has to change.

So what can you do?

First and foremost, if you’re in a high-stakes medical situation, focus on getting the care you need. Don’t let privacy concerns keep you from getting help.

But when you do have space to step back and ask questions, do it. That’s where change begins.

  • Ask what data is necessary and why.
  • Say no when something feels intrusive.
  • Let your provider know that you care about how your data is handled.
  • Support policy efforts that restore informed consent in healthcare.
  • Share your story, because this isn’t just happening to one person.

The more people push back, the harder it becomes for the system to ignore us.

You should be able to go to the doctor and share what’s relevant, without wondering who’s going to have access to that information later.

The exam room should feel safe. Right now, it doesn’t.

Healthcare is in urgent need of a privacy overhaul. Let’s make that happen.

 

Yours In Privacy,
Naomi

 

Naomi Brockwell is a privacy advocate and professional speaker, MC, interviewer, producer, and podcaster, specialising in blockchain, cryptocurrency, and economics. She runs the NBTV channel on Rumble.


What I learnt at DEFCON

Why hacker culture is essential if we want to win the privacy war.

Published 16 August 2025, 8:16
– By Naomi Brockwell
6 minute read

DEFCON is the world’s largest hacker conference. Every year, tens of thousands of people gather in Las Vegas to share research, run workshops, compete in capture-the-flag tournaments, and break things for sport. It’s a subculture. A testing ground. A place where some of the best minds in security and privacy come together not just to learn, but to uncover what’s being hidden from the rest of us. It’s where curiosity runs wild.

But to really get DEFCON, you have to understand the people.

What is a hacker?

I love hacker conferences because of the people. Hackers are notoriously seen as dangerous. The stereotype is that they wear black hoodies and Guy Fawkes masks.

But that’s not why they’re dangerous: They’re dangerous because they ask questions and have relentless curiosity.

Hackers have a deep-seated drive to learn how things work, not just at the surface, but down to their core.

They aren’t content with simply using tech. They want to open it up, examine it, and see the hidden gears turning underneath.

A hacker sees a device and doesn’t just ask, “What does it do?”
They ask, “What else could it do?”
“What isn’t it telling me?”
“What’s under the hood, and why does no one want me to look?”

They’re curious enough to pull back curtains others want to remain closed.

They reject blind compliance and test boundaries.
When society says “Do this,” hackers ask “Why?”

They don’t need a rulebook or external approval.
They trust their own instincts and intelligence.
They’re guided by internal principles, not external prescriptions.
They’re not satisfied with the official version. They challenge it.

Because of this, hackers are often at the fringes of society. They’re comfortable with being misunderstood or even vilified. Hackers are unafraid to reveal truths that powerful entities want buried.

But that position outside the mainstream gives them perspective: They see what others miss.

Today, the word “hack” is everywhere:
Hack your productivity.
Hack your workout.
Hack your life.

What it really means is:
Don’t accept the defaults.
Look under the surface.
Find a better way.

That’s what makes hacker culture powerful.
It produces people who will open the box even when they’re told not to.
People who don’t wait for permission to investigate how the tools we use every day are compromising us.

That insistence on curiosity, noncompliance, and pushing past the surface to see what’s buried underneath is exactly what we need in a world built on hidden systems of control.

We should all aspire to be hackers, especially when it comes to confronting power and surveillance.

Everything is computer

Basically every part of our lives runs on computers now.
Your phone. Your car. Your thermostat. Your TV. Your kid’s toys.
And much of this tech has been quietly and invisibly hijacked for surveillance.

Companies and governments both want your data. And neither wants you asking how these data collection systems work.

We’re inside a deeply connected world, built on an opaque infrastructure that is extracting behavioral data at scale.

You have a right to know what’s happening inside the tech you use every day.
Peeking behind the curtain is not a crime. It’s a public service.

In today’s world, the hacker mindset is not just useful. It’s necessary.

Hacker culture in a surveillance world

People who ask questions are a nightmare for those who want to keep you in the dark.
They know how to dig.
They don’t take surveillance claims at face value.
They know how to verify what data is actually being collected.
They don’t trust boilerplate privacy policies or vague legalese.
They reverse-engineer SDKs.
They monitor network traffic.
They intercept outgoing requests and inspect payloads.

And they don’t ask for permission.
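
To make that concrete: below is a minimal sketch of the kind of traffic inspection described above, written as a mitmproxy addon. The filename and the tracker hostname fragments are invented for illustration; the point is simply that what a device sends, and to whom, is observable if you route it through a proxy and look.

```python
# peek.py - a minimal sketch. Run with: mitmproxy -s peek.py
# and point the device's proxy settings (and CA trust) at mitmproxy.
from mitmproxy import http

# Hypothetical hostname fragments worth a closer look.
TRACKER_HINTS = ("analytics", "telemetry", "ads.", "graph.facebook")

def request(flow: http.HTTPFlow) -> None:
    """mitmproxy calls this hook for every outgoing HTTP(S) request."""
    host = flow.request.pretty_host
    size = len(flow.request.content or b"")
    flag = "!" if any(h in host for h in TRACKER_HINTS) else " "
    print(f"[{flag}] {flow.request.method} {host}{flow.request.path} ({size} bytes out)")
```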

That’s what makes hacker culture so important. If we want any hope of reclaiming privacy, we need people with the skills and the willingness to pull apart the systems we’re told not to question.

On top of that, governments and corporations routinely use outdated and overbroad legislation like the Computer Fraud and Abuse Act (CFAA) to prosecute or threaten public-interest researchers who investigate tech. Not because those researchers cause harm, but because they reveal things that others want kept hidden.

Laws like this pressure people towards compliance, and make them afraid to ask questions. The result is that curiosity feels like a liability, and it becomes harder for the average person to understand how the digital systems around us actually work.

That’s why the hacker mindset matters so much: Because no matter how hard the system pushes back, they keep asking questions.

The researchers I met at DEFCON

This year at DEFCON, I met researchers who are doing exactly that.

People uncovering surveillance code embedded in children’s toys.
People doing analysis on facial recognition SDKs.
People testing whether your photo is really deleted after “verification”.
People capturing packets and discovering that the “local only” systems you’re using aren’t local at all, and are sending your data to third parties.
People analyzing “ephemeral” IDs, and finding that your data was being stored and linked back to real identities.

You’ll be hearing from some of them on our channel in the coming months.
Their work is extraordinary, and it’s helping all of us move towards a world of informed consent instead of blind compliance. Without this kind of research, the average person has no way to know what’s happening behind the scenes. We can’t make good decisions about the tech we use if we don’t know what it’s doing.

Make privacy cool again

Making privacy appealing is not just about education.
It’s about making it cool.

Hacker culture has always been at the forefront of turning fringe ideas into mainstream trends. Films like Hackers and The Matrix made hackers a status symbol. Movements like the Crypto Wars (when the government fought Phil Zimmermann over PGP) and the Clipper Chip fights (when it tried to standardize surveillance backdoors across hardware) made cypherpunks and privacy activists aspirational.

Hackers take the things mainstream culture mocks or fears, and make them edgy and cool.

That’s what we need here. We need a cultural transformation and to push back against the shameful language that demands we justify our desire for privacy.

You shouldn’t have to explain why you don’t want to be watched.
You shouldn’t have to defend your decision to protect your communications.

Make privacy a badge of honor.
Make privacy tools a status symbol.
Make the act of encrypting, self-hosting, and masking your identity a signal that says you’re independent, intelligent, and not easily manipulated.

Show that the people who care about privacy are the same people who invent the future.

Most people don’t like being trailblazers, because it’s scary. But if you’re reading this, you’re one of the early adopters, which means you’re already one of the fearless ones.

When you take a stand visibly, you create critical mass and make it safer for others to join in. That’s how movements grow, and how we go from being weirdos in the corner to becoming the majority.

If privacy is stigmatized, reclaiming it will take bold, fearless, visible action.
The hacker community is perfectly positioned to lead that charge, and to make it safe for the rest of the world to follow.

When you show up and say, “I care about this,” you give others permission to care too.

Privacy may be on the fringe right now, but that’s where all great movements begin.

Final Thoughts

What I learnt at DEFCON is that curiosity is powerful.
Refusal to comply is powerful.
The simple act of asking questions can be revolutionary.

There are systems all around us extracting data and consolidating control, and most people don’t know how to fight that, and are too scared to try.

Hacker culture is the secret sauce.

Let’s apply this drive to the systems of surveillance.
Let’s investigate the tools we’ve been told to trust.
Let’s explain what’s actually happening.
Let’s give people the knowledge they need to make better choices.

Let’s build a world where curiosity isn’t criminalized but celebrated.

DEFCON reminded me that we don’t need to wait for permission to start doing that.

We can just do things.

So let’s start now.

 

Yours in privacy,
Naomi

Naomi Brockwell is a privacy advocate and professional speaker, MC, interviewer, producer, and podcaster, specialising in blockchain, cryptocurrency, and economics. She runs the NBTV channel on Rumble.

Facebook’s insidious surveillance: VPN app spied on users

Mass surveillance

Published 9 August 2025
– By Editorial Staff
2 minute read

In 2013, Facebook acquired the Israeli company Onavo for approximately 120 million dollars. Onavo was marketed as a VPN app that would protect users’ data, reduce mobile usage, and secure online activities. Over 33 million people downloaded the app believing it would strengthen their privacy.

In reality, Onavo gave Facebook complete insight into users’ phones – including which apps were used, how long they were open, and which websites were visited.

According to court documents and regulatory authorities, Facebook used this data to identify trends and map potential competitors. By analyzing user patterns in apps like Houseparty, YouTube, Amazon, and Snapchat, the company could determine which platforms posed a threat to its market dominance.

When Snapchat’s popularity began to explode in 2016, Facebook encountered a problem: encrypted traffic prevented insight into users’ behavior, reports Business Today. To circumvent this, Facebook launched an internal operation called “Project Ghostbusters”.

Facebook engineers developed specially adapted code based on Onavo’s infrastructure. The app installed a so-called root certificate on users’ phones – consent was hidden in legal documentation – which enabled Facebook to create fake certificates that mimicked Snapchat’s servers.

This made it possible to decrypt and analyze Snapchat’s traffic internally. The purpose was to use the information as a basis for strategic decisions, product development, or potential acquisitions.
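
To see why an installed root certificate is so powerful, here is a hedged sketch using the pyca/cryptography library (all names invented; this illustrates the general technique, not Facebook’s actual code). A proxy whose root CA a phone has been told to trust can mint a look-alike certificate for any domain on the fly, and the forged chain verifies:

```python
# Sketch: how a trusted root CA enables TLS interception (illustration only).
import datetime
from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

now = datetime.datetime.now(datetime.timezone.utc)

def name(cn: str) -> x509.Name:
    return x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, cn)])

def make_cert(subject: str, issuer: str, pubkey, signing_key) -> x509.Certificate:
    return (x509.CertificateBuilder()
            .subject_name(name(subject)).issuer_name(name(issuer))
            .public_key(pubkey).serial_number(x509.random_serial_number())
            .not_valid_before(now)
            .not_valid_after(now + datetime.timedelta(days=30))
            .sign(signing_key, hashes.SHA256()))

# 1. The proxy's root CA, which the phone has installed as "trusted".
ca_key = ec.generate_private_key(ec.SECP256R1())
ca_cert = make_cert("Sneaky Proxy Root CA", "Sneaky Proxy Root CA",
                    ca_key.public_key(), ca_key)

# 2. When the app connects, the proxy mints a look-alike leaf cert on the fly.
leaf_key = ec.generate_private_key(ec.SECP256R1())
leaf_cert = make_cert("snapchat.com", "Sneaky Proxy Root CA",
                      leaf_key.public_key(), ca_key)

# 3. A client that trusts the root accepts the forged chain, so the proxy
#    can terminate "encrypted" TLS sessions and read the plaintext.
leaf_cert.verify_directly_issued_by(ca_cert)  # raises if the chain is bogus
print("forged certificate chain verifies against the installed root")
```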

Snapchat said no – Facebook copied instead

Based on data from Onavo, Facebook offered to buy Snapchat for 3 billion dollars. When Snapchat CEO Evan Spiegel declined, Facebook responded by launching Instagram Stories – a direct copy of Snapchat’s most popular feature. This became a decisive move in the competition between the two platforms.

In 2018, Apple removed Onavo from the App Store, citing that the app violated the company’s data protection rules. Facebook responded by launching a new app: Facebook Research, internally called Project Atlas, which offered similar surveillance functions. This time, the company paid users – some as young as 13 – up to 20 dollars per month to install the app.

When Apple discovered this, the company acted forcefully and revoked Facebook’s enterprise development certificates. This meant that all internal iOS apps were temporarily stopped – one of Apple’s most far-reaching measures ever.

In 2020, the Australian Competition and Consumer Commission (ACCC) sued Facebook, now called Meta, for misleading users with false promises about privacy. In 2023, Meta’s subsidiaries were fined a total of 20 million Australian dollars (approximately €11 million) for misleading behavior.

Why it still matters

Business Insider emphasizes that the Onavo story is not just about a misleading app. It also illustrates how one of the world’s most powerful tech companies built a surveillance system disguised as a privacy tool.

The fact that Facebook used the data to map competitors, copy features, and maintain control over the social media market – and also targeted underage users for data collection – raises additional ethical questions.

“Even a decade later, Onavo remains a case study in how ‘data is power’ and how far companies are willing to go to get it”, the publication concludes.

Show your papers: The internet is about to change forever

A crackdown sweeping the globe is replacing the free internet with government surveillance.

Published 9 August 2025
– By Naomi Brockwell
8 minute read

A dangerous shift is happening online. All around the world, governments are quietly rewriting the rules of internet access. Soon, privacy and anonymity online may become relics of the past.

The UK’s newly enacted Online Safety Act marks a fundamental shift. You now need to verify your identity simply to watch a video, visit a website, or share your thoughts. The Act mandates strict age verification and identity checks for websites and platforms considered to host “harmful” or “adult content”.

But the definition of “harmful or adult content” is deliberately broad, encompassing every social media platform and website hosting user-generated content. This maneuver places all interactive sites under strict regulatory oversight, forcing them to implement identity verification systems. Users must now provide government ID or undergo facial recognition checks, ending the ability to browse, communicate, or consume content anonymously.

Platforms that don’t comply face massive fines. The result is that a vast portion of the internet has been seized under the guise of “safety”, threatening to erase the free and open internet we once knew.

The consequences are cascading. As this becomes increasingly normalized, nearly all platforms face pressure to demand user identification or age verification. This shift represents a major step toward eliminating online privacy. This isn’t about protecting children; it’s about ending anonymity altogether.

Global surveillance surge

If we look at the surveillance initiatives of governments around the world these past few weeks, it’s chilling. In what feels like a sudden, synchronized wave, the entire globe is moving in lockstep towards eliminating freedom on the internet. Alongside the UK’s initiative:

  • Canada: A surveillance bill has just been introduced that will significantly expand online tracking. Bill C-2 mandates backdoors in apps and platforms, giving authorities real-time access to your private data and undermining encryption. It also drastically expands surveillance by allowing police warrantless access to personal details like user identities, login history, and online activities.
  • Australia: Has banned YouTube and social media platforms for users under 16, mandated face scans and government ID verification to access major internet services, and is planning to expand these invasive controls to basic online searches, embedding identity checks into everyday internet use.
  • European Union: The proposed Chat Control law will go to a final vote in October 2025. If passed, it will mandate that platforms automatically scan private messages, emails, and stored files for illegal content, including encrypted communications, effectively abolishing end-to-end encryption protections across Europe. Additionally, the Digital Services Act (DSA) requires platforms hosting user-generated content to implement age verification measures, giving platforms a 12-month grace period to roll out strict ID verification systems.
  • Switzerland: Has a surveillance law in the works that will force VPNs, messaging apps, and online platforms to log users’ identities, IP addresses, and metadata for government access, effectively ending online anonymity. Privacy-focused companies like Proton have announced plans to relocate if the law passes.
  • United States: Numerous states are rapidly introducing and passing bills mandating strict age verification and identity checks for social media platforms and other online services, pushing the country toward the same surveillance and identity-control measures seen globally.

This explains the recent wave of platforms suddenly mandating stricter ID checks, like Spotify requiring you to upload your government ID before listening to music, or YouTube using AI to infer your age and enforce restrictions. Even in countries that don’t legally require these measures, companies often roll them out globally because it’s simpler and cheaper to have a single policy everywhere. This forces every country into the same authoritarian policies, whether they wanted them or not.

But these recent requirements didn’t appear overnight. Platforms have been slowly adding more identity verification methods for years. Did all these companies independently decide to create more friction for their users? Of course not. User friction is rarely the goal.

Instead, much of this seemingly voluntary cooperation was a response to implicit government pressure. This tactic is known as “jawboning”.

Jawboning: Silent coercion

Jawboning is informal, behind-the-scenes pressure from lawmakers and regulators. No new legislation is needed. Instead, governments make quiet but clear suggestions.
Officials might tell a tech company, “we’re concerned about misinformation spreading on your platform”, or quietly warn “this app poses a national security risk, you might want to address that before we’re forced to intervene”.
The threat is implicit.

As a result, platforms have been steadily increasing their identity checks, whether through phone number verification that ties accounts to real identities, or directly asking users to submit ID documents.

Governments don’t always need legal authority. Sometimes they simply suggest something strongly enough that compliance is inevitable.

In recent years we’ve seen this tactic intensify, with governments increasingly engaging directly with social media companies to shape moderation decisions. Without formal subpoenas or official orders, platforms receive subtle yet persistent suggestions about the type of content to flag or remove, effectively steering public narratives. This informal pressure quietly influences what users can see and say online.

Some people suggest that this sudden global crackdown on privacy must have been a coordinated and deliberate strike. But there’s a simpler explanation. None of what’s happened this past week appeared out of nowhere. We’ve been setting the stage for years.

After years of incremental normalization, surveillance culture reached a critical mass. Each small change seemed minor and tolerable. Governments nudged. Companies complied. Users accepted. Bit by bit, surveillance became normalized, until we reached a tipping point. When enough incremental intrusions pile up, they set the stage for something much bigger. By the time major restrictions arrived this week, we’d already grown numb to privacy incursions. The world was primed, and now a wave of regulation has swept in almost unopposed.

The cultural shift we must fight

The internet was conceived as a tool for freedom and connection. But almost overnight, it has become a surveillance landscape where every click, view, and conversation is gated by ID checkpoints. Our greatest tool for free expression is now our greatest instrument of control.

We can’t accept this shift passively. The normalization of mandatory identity verification is deeply harmful. Privacy isn’t suspicious or criminal; it’s normal, and we must vigorously push back against these cultural changes.

This is a landslide of lost freedoms, and it’s happened in mere weeks.

Decentralized infrastructure: Our last hope

Decentralization is critical in the fight for online freedom. Centralized systems, such as those mandated by regulations like the UK’s Online Safety Act, provide easy targets for governments to enforce identity checks, age limits, and surveillance. These centralized checkpoints enable extensive monitoring and control. Decentralized infrastructure, on the other hand, distributes control across many independent participants, making it inherently resistant to intrusive mandates and significantly harder for governments to impose surveillance and censorship.

Here are just a handful of powerful decentralized tools already available, each combining decentralization with robust privacy protections:

Bitchat
Bitchat is a Bluetooth Low Energy mesh messaging network launched by Jack Dorsey’s team in July 2025. It enables peer-to-peer communication among nearby devices without requiring internet access, user accounts, or phone numbers. Users can communicate via public channels or password-protected private groups. Bitchat also supports direct private messages secured by end-to-end encryption with forward secrecy, ensuring only the intended recipients can decrypt messages. Additional privacy features include timing obfuscation and dummy traffic to protect metadata, as well as a panic mode that instantly erases all locally stored data. The mesh network becomes stronger, more secure, and more resilient as additional users run the app in proximity.
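
As a toy illustration of the mesh idea (not Bitchat’s actual protocol; everything here is invented for clarity), here is how store-and-flood relaying lets a message reach devices far beyond the sender’s own radio range, using a hop limit and duplicate suppression:

```python
# Toy flood-routing sketch: messages hop device to device, no internet needed.
import uuid

class Node:
    def __init__(self, name: str):
        self.name, self.neighbors, self.seen = name, [], set()

    def receive(self, msg_id: str, payload: str, ttl: int) -> None:
        if msg_id in self.seen or ttl == 0:   # drop duplicates and expired hops
            return
        self.seen.add(msg_id)
        print(f"{self.name} received: {payload}")
        for peer in self.neighbors:           # re-broadcast to everyone in range
            peer.receive(msg_id, payload, ttl - 1)

a, b, c, d = (Node(x) for x in "ABCD")
a.neighbors, b.neighbors, c.neighbors = [b], [a, c], [b, d]   # A-B-C-D chain
a.receive(uuid.uuid4().hex, "hello mesh", ttl=4)  # D hears A via B and C
```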

Meshtastic
Meshtastic uses small radio devices to create local mesh networks independent from the internet, helping resist centralized censorship. Users send either public or private messages. Public messages are visible to everyone, while private channels use a shared encryption key (shared securely outside the app). Meshtastic also supports direct messages encrypted end-to-end via public-key cryptography.

SimpleX chat
A messaging app that requires no identifiers or phone numbers. All messages are end-to-end encrypted using a double-ratchet protocol. Metadata, contact lists, and message logs remain solely on the user’s device. Private message routing further obscures your IP address and network information from the relay servers that carry messages. More participation, by either running relay nodes yourself or using independent relay servers, makes the system stronger and more censorship-resistant.
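
For flavor, here is a toy sketch of a KDF chain, the building block at the heart of double-ratchet designs (this is not SimpleX’s actual code; it uses the pyca/cryptography library, and the labels are invented). Each message gets a fresh key derived one way from the chain, and old keys are discarded, which is what provides forward secrecy:

```python
# Toy KDF chain: fresh key per message, old keys unrecoverable (sketch only).
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305

def kdf_step(chain_key: bytes) -> tuple[bytes, bytes]:
    """Derive the next chain key and a one-time message key."""
    out = HKDF(algorithm=hashes.SHA256(), length=64, salt=None,
               info=b"ratchet-step").derive(chain_key)
    return out[:32], out[32:]

# Both sides start from the same shared secret (established elsewhere).
shared = os.urandom(32)
send_chain = recv_chain = shared

# Sender encrypts two messages, ratcheting the chain forward each time.
ciphertexts = []
for msg in (b"hello", b"world"):
    send_chain, mk = kdf_step(send_chain)
    nonce = os.urandom(12)
    ciphertexts.append((nonce, ChaCha20Poly1305(mk).encrypt(nonce, msg, None)))

# Receiver ratchets its own copy of the chain in lockstep and decrypts.
for nonce, ct in ciphertexts:
    recv_chain, mk = kdf_step(recv_chain)
    print(ChaCha20Poly1305(mk).decrypt(nonce, ct, None))
```

Because each message key is used once and then discarded, compromising a device later doesn’t let an attacker decrypt traffic captured earlier.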

IPFS (InterPlanetary File System)
Distributed file storage. Instead of relying on centralized servers, files are split and stored across independent nodes (content can be encrypted before it’s added). Once content is pinned to multiple nodes, there’s no single point of failure. IPFS resists censorship because no central authority can easily remove or block files. More participants means greater redundancy and resilience.
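
The core trick is content addressing: a file’s identifier is a hash of the file itself, so any node can serve it and the fetcher can verify the bytes weren’t tampered with. A toy sketch in plain Python (invented names, nothing IPFS-specific):

```python
# Toy content-addressed store: the "address" is the hash of the content.
import hashlib

nodes = [dict(), dict(), dict()]             # three independent "nodes"

def put(data: bytes) -> str:
    cid = hashlib.sha256(data).hexdigest()   # content ID derived from content
    for n in nodes:                          # "pin" to every node for redundancy
        n[cid] = data
    return cid

def get(cid: str) -> bytes:
    for n in nodes:                          # any surviving node can serve it
        if cid in n:
            data = n[cid]
            # The fetcher verifies integrity without trusting the node.
            assert hashlib.sha256(data).hexdigest() == cid
            return data
    raise KeyError("content not found on any node")

cid = put(b"censorship-resistant bytes")
nodes[0].clear(); nodes[1].clear()           # two nodes go offline
print(get(cid))                              # still retrievable from the third
```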

Filecoin
Filecoin provides a decentralized marketplace for data storage. Unlike centralized cloud storage, Filecoin allows users to securely contract with independent storage providers directly through its blockchain, without third-party intermediaries. Files aren’t automatically distributed; instead, they’re stored with specific providers that users contract with directly, and the Filecoin blockchain ensures data integrity through built-in cryptographic proofs verifying providers actually store your data as promised.
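
The naive core of a storage proof looks something like the sketch below (Filecoin’s real proof-of-replication and proof-of-spacetime are far more sophisticated; the names here are invented). The client precomputes a few challenge/answer pairs before uploading, then audits the provider later with challenges it has never revealed, so only someone actually holding the full file can answer:

```python
# Toy storage-audit sketch: can the provider still produce the bytes?
import hashlib, os

data = b"bytes the client pays a provider to store"

# Before uploading, the client precomputes a few challenge/answer pairs.
challenges = [os.urandom(16) for _ in range(3)]
answers = [hashlib.sha256(c + data).hexdigest() for c in challenges]
# The client keeps (challenges, answers); the provider keeps the data.

# Later: the client audits the provider with an unused challenge.
c, expected = challenges.pop(), answers.pop()
providers_reply = hashlib.sha256(c + data).hexdigest()  # computed by provider
assert providers_reply == expected   # passes only if the full file is held
print("storage proof verified")
```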

Zero-Knowledge proofs (ZK proofs)
Zero-knowledge proofs are a type of privacy-preserving cryptographic validation. Initially pioneered by the cryptocurrency Zcash, ZK proofs have since become essential tools in a wide range of applications beyond cryptocurrency, including decentralized identity systems, secure age verification, and anonymous credentialing. They allow you to prove sensitive attributes, such as being over a certain age, without revealing any personal details, offering robust privacy protections in many digital interactions.
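
The classic toy example is the Schnorr identification protocol: proving you know a secret x without revealing anything about it. Real age-verification schemes build on much richer constructions, but this sketch (deliberately tiny, insecure parameters) shows the core move:

```python
# Toy Schnorr zero-knowledge proof of knowledge (insecure demo parameters).
import secrets

p, q, g = 227, 113, 4             # p = 2q + 1; g generates the order-q subgroup

x = secrets.randbelow(q - 1) + 1  # prover's secret (e.g., a credential key)
y = pow(g, x, p)                  # public value registered with the verifier

# One round of the protocol:
r = secrets.randbelow(q)          # prover: random commitment nonce
t = pow(g, r, p)                  # prover -> verifier: commitment
c = secrets.randbelow(q)          # verifier -> prover: random challenge
s = (r + c * x) % q               # prover -> verifier: response

# Verifier checks g^s == t * y^c (mod p) without ever learning x.
assert pow(g, s, p) == (t * pow(y, c, p)) % p
print("proof accepted; the secret was never revealed")
```

The verifier learns only that the equation checks out; the transcript (t, c, s) reveals nothing about x, which is what makes the proof zero-knowledge.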

Several decentralized social media platforms have emerged as promising alternatives to centralized giants like Twitter and Facebook. Platforms such as Mastodon, Nostr, Bluesky, and Matrix offer decentralized architectures in theory, spreading control across independently operated servers or nodes. In practice, however, most users currently congregate around just a few widely used nodes, creating potential points of vulnerability. Still, these platforms represent meaningful progress, and I’m genuinely optimistic about the future of decentralized social media. As more people learn to run their own independent servers and nodes, these platforms will grow increasingly robust, resilient, and truly censorship resistant.

Why these tools matter

Together, decentralization and encryption directly undermine the systems that the UK Online Safety Act and similar laws rely on, such as central checkpoints, mandated identity verification, and mass data collection. These authoritarian measures become much harder to enforce when control is distributed, data remains with individual users, and identity can be verified anonymously.

Decentralized technology is still young, and many tools currently lack the polished interfaces and extensive user bases of centralized platforms. You won’t yet find the same network effect as mainstream social networks. But decentralized technology holds immense promise. As governments increasingly mandate backdoors, identity checks, and documentation simply to communicate online, these decentralized alternatives represent the future of digital freedom. Their strength and resilience depend directly on collective adoption: running nodes, hosting relay services, and contributing to open-source development.

The moment to act is now

Privacy isn’t about hiding; it’s about autonomy. Decentralized technologies aren’t mere ideals. They’re practical tools for reclaiming power online. The more widely adopted these tools become, the more robust and resistant they are to centralized control. Let’s actively build, support, and embrace decentralized, encrypted alternatives, and reclaim the internet while we still have the chance.

 

Yours in privacy,
Naomi

Naomi Brockwell is a privacy advocate and professional speaker, MC, interviewer, producer, and podcaster, specialising in blockchain, cryptocurrency, and economics. She runs the NBTV channel on Rumble.

OpenAI launches GPT-5 – Here are the new features in the latest ChatGPT model

The future of AI

Published 8 August 2025
– By Editorial Staff
"GPT-5 is the first time that it really feels like talking to an expert in any topic, like a PhD-level expert", claims CEO Sam Altman during the company's presentation of the new model.
2 minute read

OpenAI released its new flagship model GPT-5 on Thursday, which is now available free of charge to all users of the ChatGPT chatbot service. The American AI giant claims that the new model is “the best in the world” and takes a significant step toward developing artificial intelligence that can perform better than humans in most economically valuable work tasks.

GPT-5 differs from previous versions by combining fast responses with advanced problem-solving capabilities. While previous AI chatbots could primarily provide smart answers to questions, GPT-5 can perform complex tasks for users – such as creating software applications, navigating calendars, or compiling research reports, writes TechCrunch.

— Having something like GPT-5 would be pretty much unimaginable at any previous time in history, said OpenAI CEO Sam Altman during a press conference.

Better than competitors

According to OpenAI, GPT-5 performs somewhat better than competing AI models from companies like Anthropic, Google DeepMind, and Elon Musk’s xAI on several important tests. In programming, the model achieves 74.9 percent on real coding tasks, which marginally beats Anthropic’s latest model Claude Opus 4.1, which reached 74.5 percent.

A particularly important improvement is that GPT-5 “hallucinates” – that is, makes up incorrect information – significantly less than previous models. When tested on health-related questions, the model gives incorrect answers only 1.6 percent of the time, compared to over 12 percent for OpenAI’s previous models.

This is particularly relevant since millions of people use AI chatbots to get health advice, even though they are no substitute for professional doctors.

New features and pricing models

The company has also simplified the user experience. Instead of users having to choose the right settings, GPT-5 has an automatic router that determines how it should best respond – either quickly or by “thinking through” the answer more thoroughly.

ChatGPT also gets four new personalities that users can choose between: Cynic, Robot, Listener, and Nerd. These customize how the model responds without users needing to specify it in each request.

For developers, GPT-5 is launched in three sizes via OpenAI’s programming interface, with the base model priced at €1.15 per million input tokens and €9.20 per million generated tokens.

The launch comes after an intense week for OpenAI, which also released an open AI model that developers can download for free. ChatGPT has grown to become one of the world’s most popular consumer products with over 700 million users every week – nearly 10 percent of the world’s population.
