Saturday, July 26, 2025

Polaris of Enlightenment

Buying someone’s real-time location is shockingly cheap

You need to stop handing out your cell number. Seriously.

Published 5 July 2025
– By Naomi Brockwell
11 minute read

Most people have no idea how exposed they are.

Your location is one of the most sensitive pieces of personal information, and yet it’s astonishingly easy to access. For just a few dollars, someone can track your real-time location without ever needing to hack your phone.

This isn’t science fiction or a rare edge case. It’s a thriving industry.

Telecom providers themselves have a long and disturbing history of selling customer location data to data brokers, who then resell it with little oversight.

In 2018, The New York Times exposed how major U.S. carriers, including AT&T, Verizon, T-Mobile, and Sprint, were selling access to phone location data. This data was ultimately accessed by bounty hunters and law enforcement, without user consent or a warrant.

A 2019 investigation by Vice showed that you could buy the real-time location of nearly any phone in the U.S. for about $300.

Other vendors advertise this service for as little as $5 on underground forums and encrypted messaging channels. No need to compromise someone’s device, just give them a phone number.

The big takeaway from this article is that if someone has your number, they can get your location. We’re going to go over how to shut this tracking method down.

Whether you’re an activist, journalist, or just someone who values your right to privacy, this newsletter series is designed to give you the tools to disappear from unwanted tracking, one layer at a time.

How cell numbers leak location

Your cell number is a real-time tracking beacon. Every time your phone is powered on, it talks to nearby cell towers. This happens even if you’re not making a call.

Your phone’s location is continuously updated in a database called the Home Location Register (HLR), which lets your carrier know which tower to route calls and texts through. If someone has access to your number, they can locate you, sometimes within meters, in real time. Here are some ways they can do it:

1. Access to telecom infrastructure

Selling data / corrupting employees:

Telecom providers are notorious for selling customers’ location data directly from their HLR. Alternatively, unauthorized individuals or entities can illegally access this data by bribing or corrupting telecom employees who have direct access to the HLR.

The data retrieved from the HLR database reveals only which specific cell tower your phone is currently registered to, and typically identifies your approximate location within tens or hundreds of meters, depending on tower density in the area.

To pinpoint your exact location with greater precision, down to just a few meters, requires additional specialized methods, such as carrier-based triangulation. Triangulation involves actively measuring your phone’s signal strength or timing from multiple cell towers simultaneously. Such detailed, real-time triangulation is typically restricted to telecom companies and authorized law enforcement agencies. However, these advanced methods can also be misused if telecom personnel or authorized entities are compromised through bribery or corruption.
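Triangulation is ordinary geometry: given three tower positions and a range estimate to each, the phone's position falls out of the intersecting circles. Here's a minimal sketch in Python, with made-up tower coordinates and noise-free ranges (real measurements are far messier):

```python
import math

def trilaterate(towers, distances):
    """Estimate a 2-D position from three tower positions and range
    estimates by linearizing the circle equations."""
    (x0, y0), (x1, y1), (x2, y2) = towers
    d0, d1, d2 = distances
    # Subtracting circle 0 from circles 1 and 2 gives two linear equations:
    #   2(xi - x0)x + 2(yi - y0)y = d0^2 - di^2 + xi^2 + yi^2 - x0^2 - y0^2
    a11, a12 = 2 * (x1 - x0), 2 * (y1 - y0)
    a21, a22 = 2 * (x2 - x0), 2 * (y2 - y0)
    b1 = d0**2 - d1**2 + x1**2 + y1**2 - x0**2 - y0**2
    b2 = d0**2 - d2**2 + x2**2 + y2**2 - x0**2 - y0**2
    det = a11 * a22 - a12 * a21
    x = (b1 * a22 - b2 * a12) / det
    y = (a11 * b2 - a21 * b1) / det
    return x, y

# Hypothetical tower coordinates (meters) and ranges to a phone at (300, 400)
towers = [(0, 0), (1000, 0), (0, 1000)]
phone = (300, 400)
dists = [math.dist(phone, t) for t in towers]
print(trilaterate(towers, dists))  # ≈ (300.0, 400.0)
```

With noisy real-world measurements, carriers solve an over-determined version of the same system using more towers and least squares, which is one reason accuracy improves with tower density.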

Exploiting the SS7 protocol (telecom network vulnerabilities):

Attackers can also exploit vulnerabilities such as those in SS7, a global telecom signaling protocol, to illicitly request your current cell tower location from the HLR database. SS7 itself doesn’t store any location data — it provides the means to query your carrier’s HLR and retrieve your current tower association.
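Conceptually, the leak is just a lookup keyed on your phone number. The toy model below is not real SS7/MAP signaling — the record fields and identifiers are invented for illustration — but it shows why knowing a number is enough once an attacker can reach the HLR:

```python
from dataclasses import dataclass

# Toy model of an HLR lookup: the HLR maps a subscriber's number to
# whichever tower (cell) the phone last registered on. Real SS7/MAP
# queries are far more complex; these identifiers are made up.

@dataclass
class HlrRecord:
    imsi: str          # subscriber identity tied to the SIM
    serving_cell: str  # identity of the tower the phone last registered on

HLR = {
    "+15551234567": HlrRecord(imsi="310150123456789",
                              serving_cell="310-150-4021-7733"),
}

def locate(msisdn: str) -> str:
    """All the 'attacker' supplies is a phone number."""
    rec = HLR.get(msisdn)
    return rec.serving_cell if rec else "unknown"

print(locate("+15551234567"))  # → 310-150-4021-7733
```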

2. IMSI catchers (“Stingrays”): Your phone directly reveals its location

IMSI catchers (often called “Stingrays”) are specialized surveillance devices acting as fake cell towers. Your phone constantly searches for the strongest available cell signal, automatically connecting to these fake towers if their signals appear stronger than legitimate ones.

In this method, instead of querying telecom databases, your phone directly reveals its own location to whoever is operating the fake cell tower, as soon as the phone connects. Operators of IMSI catchers measure signal strength between your phone and their device, enabling precise location tracking, often accurate within a few meters.
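The signal-strength ranging that IMSI catchers rely on can be sketched with the standard log-distance path-loss model. The reference power and path-loss exponent below are illustrative assumptions; real equipment calibrates these per frequency band and environment:

```python
def distance_from_rssi(rssi_dbm, rssi_at_1m_dbm=-40.0, path_loss_exp=2.7):
    """Rough range estimate from received signal strength using the
    log-distance path-loss model:
        RSSI(d) = RSSI(1m) - 10 * n * log10(d)
    Solving for d gives the expression below. The reference power
    (-40 dBm at 1 m) and exponent (2.7) are illustrative guesses."""
    return 10 ** ((rssi_at_1m_dbm - rssi_dbm) / (10 * path_loss_exp))

for rssi in (-40, -67, -94):
    print(f"{rssi} dBm -> ~{distance_from_rssi(rssi):.0f} m")
```

A single such estimate only gives a radius; combining readings as the operator moves (or from multiple antennas) narrows it to the few-meter precision described above.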

While IMSI catchers were initially developed and primarily used by law enforcement and intelligence agencies, the legality of their use (even by authorities) is subject to ongoing debate. Unauthorized versions of IMSI catchers have also become increasingly available on black and gray markets.

The solution? Move to VoIP

Cell numbers use your phone’s baseband processor to communicate directly with cell towers over the cellular network, continuously updating your physical location in telecom databases.

VoIP numbers (Voice over Internet Protocol), on the other hand, carry calls and texts over the internet using data connections. A VoIP number isn’t registered in a carrier’s HLR, so it’s immune to tower-based location tracking.

Instead, the call or message is routed through internet infrastructure and only connects to the cellular network at carrier-level switching stations, removing the direct tower-based tracking of your physical location.

So the takeaway is that you want to stop using cell numbers and start using VoIP numbers instead, so that anyone who knows your number can’t use it to track your location.

But there’s a catch: VoIP is heavily regulated. In most countries, quality VoIP options are scarce, and short code SMS support is unreliable. In the US, though, there are good tools.

Action items:

1. Get a VoIP provider

Two good apps for generating VoIP numbers in the U.S. are:

  • MySudo: Great for compartmentalizing identity. Up to 9 identities/numbers per account.
  • Cloaked.com: Great for burner/throwaway numbers.

We are not sponsored by or affiliated with any of the companies mentioned here; they’re just tools I use and like. If you have services that you like and recommend, please let others know in the comments!

Setting up MySudo

Step 1: Install the app

  • You will need a phone with the Google Play Store or the Apple App Store.
  • Search for MySudo, download and install it, or visit the store directly via their webpage.

Step 2: Purchase a plan

  • $15/month gets you up to 9 Sudo profiles, each with its own number. Or you can start with just 1 number for $2/month. You will purchase this plan inside the app store on your phone.

Step 3: Set up your first Sudo profile

When prompted, create your first Sudo profile. Think of this as a separate, compartmentalized identity within MySudo, distinct from your main user account.

Each Sudo profile can include:

  • A dedicated phone number
  • Optional extras like an email alias, username handle, virtual credit card, etc.

For now, we’re focusing only on phone numbers:

  • Choose a purpose for this profile (such as Shopping, Medical, Work). This purpose will appear as a heading in your list of Sudos.
  • Create a name for your Sudo profile (I usually match this to the chosen purpose).

Step 4: Add a phone number to your Sudo

  • Tap the Sudo icon in the top-left corner.
  • Select the Sudo profile you just created.
  • Tap “Add a Phone Number.”
  • Select your preferred country, then enter a city name or area code.
  • Pick a number from the available options, then tap “Choose Number.”

You’re now set up and ready to use your VoIP number!

Step 5: Compartmentalize

You don’t need to assign all 9 numbers right away. But here are helpful categories you might consider:

  • Friends and family
  • Work
  • Government
  • Medical
  • Banking
  • Purchases
  • Anonymous purchases
  • High-risk anonymous use
  • Catch-all / disposable

Incoming calls go through the MySudo app, not your default dialer. Same with SMS. The person on the other end doesn’t know it’s VoIP.

Short codes don’t always work

Short codes (such as verification codes sent by banks or apps) use a special messaging protocol that’s different from regular SMS texts. Many VoIP providers don’t consistently support short codes, because this capability depends entirely on the underlying upstream provider (the entity that originally provisioned these numbers), not on the VoIP reseller you purchased from.

If you encounter problems receiving short codes, here are ways around the issue:

  • Use the “Call Me” option:
    Many services offer an alternative verification method: a phone call delivering the verification code verbally. VoIP numbers handle these incoming verification calls without any issue.
  • Try another VoIP provider (temporary):
    If a service blocks your primary VoIP number and insists on a real cellular number, you can use a non‑VoIP SIM verification service like SMSPool.net. These provide actual cell‑based phone numbers over the internet, but note: they are intended for temporary or burner use only. Don’t rely on rented numbers for important or long-term accounts; always use stable, long-term numbers for critical purposes.
  • Register using a real cell number and port it to VoIP:
    For critical accounts, another option is to use a prepaid SIM card temporarily to register your account, then immediately port that number to a VoIP provider (such as MySudo or Google Voice). Many services only check whether a number is cellular or VoIP during initial account registration, and don’t recheck later.
  • Maintain a separate SIM just for critical 2FA:
    If you find that after porting, you still can’t reliably receive certain verification codes (particularly short codes), you might need to maintain a separate, dedicated SIM and cellular number exclusively for receiving critical two-factor authentication (2FA) codes. Do not share this dedicated SIM number with everyone, and do not use it for regular communications.

Important caveat for high-risk users:

Any SIM cards placed into the same phone are linked together by the telecom carrier, which is important information for high-risk threat models. When you insert a SIM card into your device, the SIM itself will independently send special messages called “proactive SIM messages” to your carrier. These proactive messages:

  • Completely bypass your phone’s operating system (OS), making them invisible and undetectable from user-level software.
  • Contain device-specific identifiers such as the IMEI or IMEISV of your phone and also usually include the IMEI of previous devices in which the SIM was inserted.

If your threat model is particularly high-risk and requires total compartmentalization between identities or numbers, always use separate physical devices for each compartmentalized identity. Most people don’t need to take such extreme precautions, as this generally falls outside their threat model.

Cloaked.com for burner numbers

  • Offers unlimited, disposable phone numbers.
  • Great for one-off verifications, restaurants, or merchants.
  • Doesn’t require installing an app; you can use it in the browser and never link any forwarding number.
  • Be aware that if any of the VoIP numbers you generated inside Cloaked hasn’t received any calls or messages for 60 days, it enters a watch period. After an additional 60 days without receiving calls or messages (120 days total of inactivity), you lose the number, and it returns to the available pool for someone else to use. Only use Cloaked for numbers you expect to actively receive calls or messages on, or for temporary use where losing the number isn’t an issue.
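The inactivity timeline works out like this — a small sketch of the 60 + 60 day window described above:

```python
from datetime import date, timedelta

def cloaked_expiry(last_activity: date):
    """Model the inactivity policy described above: 60 days of silence
    starts a watch period; 60 more and the number is released back
    into the available pool (120 days total)."""
    watch_starts = last_activity + timedelta(days=60)
    released = watch_starts + timedelta(days=60)
    return watch_starts, released

watch, gone = cloaked_expiry(date(2025, 7, 1))
print(watch, gone)  # 2025-08-30 2025-10-29
```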

What to do with your current cell number

Your cell number is already everywhere: breached databases, government forms, medical records, and countless other places. You can’t “un-breach” it, and you don’t want to lose that number because it’s probably an important number that people know they can contact you on. But you can stop it from being used to track you.

Solution: Port your existing cell number to a VoIP Provider

Best choice: Google Voice (recommended due to strong security protections)

  • You can choose to just pay a one-time $20 fee, which turns the number into a receiving-only number. You’ll get to receive calls and texts forever on this number with no ongoing fees.
  • Or you can choose to pay an ongoing monthly fee, which will allow you to continue to make outgoing calls and send outgoing messages from the number.

The one-time fee option will be sufficient for most people, because the aim is to gradually make this existing number obsolete and move people over to your new VoIP numbers.

Google Voice is considered a strong option because the threat of SIM swapping (where an attacker fraudulently takes control of your phone number) is very real and dangerous. Unlike basically every other telecom provider, Google lets you secure your account with a hardware security key, making it significantly harder for attackers to port your number away from your control.

Google obviously is not a privacy-respecting company, but remember, your existing cell number isn’t at all private anyway. The idea is to eventually stop using this number completely, while still retaining control of it.

How to port your existing cell number to Google Voice

  1. Check porting eligibility
    Visit the Google Voice porting tool and enter your number to verify it’s eligible.
  2. Start the port-in process
    • Navigate to Settings → Phones tab → Change / Port.
    • Select “I want to use my mobile number” and follow the on-screen prompts
  3. Pay the one-time fee
    A $20 fee is required to port your number into Google Voice
  4. Complete the porting process
    • Enter your carrier account details and submit the request. Porting generally completes within 24–48 hours, though it can take longer in some cases.
  5. Post-port setup
    • Porting your number to Google Voice cancels your old cellular service. You’ll need a new SIM or plan for regular mobile connectivity, but ideally you’ll use this new SIM only for data, and use your VoIP numbers, not the associated cell number, for communication.
    • Configure call forwarding, voicemail transcription, and text forwarding to email from the Google Voice Settings page.

Now, even if someone tries to look you up via your old number, they can’t get your real-time location. It’s no longer tied to a SIM that logs your location in an HLR.

Summary: Take it one step at a time

Switching to VoIP numbers is a big change, so take it step by step:

  1. Download your VoIP apps of choice (like MySudo) and set up your new numbers.
  2. Gradually migrate your contacts to your new VoIP numbers.
  3. Use burner numbers (via Cloaked or similar services) for reservations, merchants, or anyone who doesn’t genuinely need your permanent number.

Keep your existing SIM active for now, until you’re comfortable and confident using the new VoIP system.

When ready, finalize your migration:

  1. Port your original cell number to Google Voice.
  2. Get a new SIM card with a fresh number, but don’t use this new number for calls, texts, or identification.
  3. Use the new SIM solely for data connectivity.

This completes your migration, significantly enhancing your privacy and reducing your exposure to location tracking.

GrapheneOS users

You can’t currently purchase your MySudo subscription directly on a GrapheneOS device. Instead, you’ll first need to buy your MySudo plan through the Google Play Store or Apple App Store using another device.

Once you’ve purchased your plan, you can migrate your account to your GrapheneOS phone:

  1. On your GrapheneOS device, download and install MySudo from your preferred app store (I personally like the Aurora store as a front-end for the Google Play Store).
  2. Open MySudo on your GrapheneOS device and navigate to:
    Settings → Backup & Import/Export → Import from Another Device
  3. Follow the on-screen prompts to securely migrate your entire account over to your GrapheneOS phone.

You can retain your original device as a secure backup for messages and account data.

To ensure reliable, real-time notifications for calls and messages, make sure sandboxed Google Play is enabled on the GrapheneOS profile where you’re using MySudo.

What you’ve achieved

You now have:

  • Up to 9 persistent, compartmentalized VoIP numbers via MySudo.
  • Disposable, on-demand burner numbers via Cloaked.
  • Your original cell number safely ported to Google Voice and secured with a hardware security key.
  • A clear plan for transitioning away from your original cell number.

You’ve replaced a vulnerable, easily trackable cell identifier with compartmentalized VoIP numbers. Your real-time location is no longer constantly broadcast through cell towers via a number identified as belonging to you, your digital identities are better compartmentalized, and you’re significantly harder to track or exploit.

This marks the beginning of a safer digital future. What’s next? More layers, better privacy tools, and greater freedom. Remember, privacy isn’t a destination, it’s a lifestyle. You’re now firmly on that path.

 

Yours in Privacy,
Naomi

Naomi Brockwell is a privacy advocate and professional speaker, MC, interviewer, producer, and podcaster, specialising in blockchain, cryptocurrency and economics. She runs the NBTV channel on Rumble.

TNT is truly independent!

We don’t have a billionaire owner, and our unique reader-funded model keeps us free from political or corporate influence. This means we can fearlessly report the facts and shine a light on the misdeeds of those in power.

Consider a donation to keep our independent journalism running…

Proton launches privacy-focused AI assistant to compete with ChatGPT

The future of AI

Published today 12:24
– By Editorial Staff
2 minute read

The AI assistant Lumo neither stores nor trains on users’ conversations and can be used freely without login.

Proton challenges ChatGPT with its new AI assistant Lumo, which promises to never store or train on users’ conversations. The service launches with end-to-end encryption and stricter privacy protections than competing AI services.

The Swiss company Proton, known for its secure email services and VPN solutions, is now expanding into artificial intelligence with the launch of AI assistant Lumo. Unlike established competitors such as OpenAI’s ChatGPT, Google’s Gemini and Anthropic’s Claude, Proton markets its service with promises to never log, store or train its models on users’ questions or conversations.

Lumo can, just like other AI assistants, help users with everyday tasks such as rephrasing emails, summarizing documents and reviewing code. The major difference lies in privacy protection – all chats are end-to-end encrypted and not stored on Proton’s servers.

Privacy-focused alternative in the AI jungle

Proton’s strategy differs markedly from industry standards. ChatGPT stores conversations for 30 days for security reasons, even when chat history is turned off. Gemini may retain user queries for up to 72 hours, while Claude saves chats for up to a month, or longer if they are flagged for review.

An additional advantage for Proton is the company’s Swiss base, which means stricter privacy laws compared to American competitors who may be forced to hand over user data to authorities.

The company has not confirmed which models are used, but Lumo likely builds on smaller, community-developed systems rather than the massive, privately trained models that power services like ChatGPT. This may mean that responses become less detailed or nuanced.

Three service tiers

Lumo is available via the web as well as through apps for iOS and Android. The service is offered in three tiers: two free options and a paid version.

Guest users can ask a limited number of questions per week without an account, but chat history is not saved. Users with free Proton accounts automatically get access to Lumo Free, which includes basic encrypted chat history and support for smaller file uploads.

The paid version Lumo Plus costs approximately $12.99 per month ($9.99 with annual billing) and offers unlimited chats, longer chat history and support for larger file uploads. The price undercuts competitors – ChatGPT Plus, Gemini Advanced and Claude Pro all cost around $20 monthly.

The question that remains to be answered is how well Lumo will compete with models trained on significantly larger datasets. The most advanced AI assistants are powered by enormous amounts of user data, which helps them learn patterns and understand nuances for continuous improvement over time. Proton’s more limited, privacy-centered strategy may affect performance.

Your doctor’s visit isn’t private

Published today 8:14
– By Naomi Brockwell
6 minute read

A member of our NBTV members’ chat recently shared something with us after a visit to her doctor.

She’d just gotten back from an appointment and felt really shaken up. Not because of a diagnosis, she was shaken because she realized just how little control she had over her personal information.

It started right at check-in, before she’d even seen the doctor.
Weight. Height. Blood pressure. Lifestyle habits. Do you drink alcohol? Are you depressed? Are you sexually active?
All the usual intake questions.

It all felt deeply personal, but this kind of data collection is normal now.
Yet she couldn’t help but wonder: shouldn’t they ask why she’s there first? How can they know what information is actually relevant without knowing the reason for the visit? Why collect everything upfront, without context?

She answered every question anyway. Because pushing back makes people uncomfortable.

Finally, she was through with the medical assistant’s questions and taken to the actual doctor. That’s when she confided something personal, something she felt was important for the doctor to know, but made a simple request:

“Please don’t record that in my file”.

The doctor responded:

“Well, this is something I need to know”.

She replied:

“Yes, that’s why I told you. But I don’t want it written down. That file gets shared with who knows how many people”.

The doctor paused, then said:

“I’m going to write it in anyway”.

And just like that, her sensitive information, something she explicitly asked to keep off the record, became part of a permanent digital file.

That quiet moment said everything. Not just about one doctor, but about a system that no longer treats medical information as something you control. Because once something is entered into your electronic health record, it’s out of your hands.

You can’t delete it.

You can’t restrict who sees it.

She Said “Don’t Write That Down.” The Doctor Did Anyway.

Financially incentivized to collect your data

The digital device that the medical assistant and doctor write your information into is called an Electronic Health Record (EHR). EHRs aren’t just a digital version of your paper file. They’re part of a government-mandated system. Through legislation and financial incentives from the HHS, clinics and hospitals were required to digitize patient data.

On top of that, medical providers are required to prove what’s called “Meaningful Use” of these EHR systems. Unless they can prove meaningful use, the medical provider won’t get their Medicare and Medicaid rebates. So when you’re asked about your blood pressure, your weight, and your alcohol use, it’s part of a quota. There’s a financial incentive to collect your data, even if it’s not directly related to your care. These financial incentives reward over-collection and over-documentation. There are no incentives for respecting your boundaries.

You’re not just talking to your doctor. You’re talking to the system

Most people have no idea how medical records actually work in the U.S. They assume that what they tell a doctor stays between the two of them.

That’s not how it works.

In the United States, HIPAA states that your personally identifiable medical data can be shared, without needing to get your permission first, for a wide range of “healthcare operations” purposes.

Sounds innocuous enough. But the definition of “healthcare operations” is almost 400 words long. It’s essentially a list of about 65 non-clinical business activities that have nothing to do with your medical treatment whatsoever.

That includes not just hospitals, pharmacy systems, and insurance companies, but billing contractors, analytics firms, and all kinds of third-party vendors. According to a 2010 Department of Health and Human Services (HHS) regulation, there are more than 2.2 million entities (covered entities and business associates) with which your personally identifiable, sensitive medical information can be shared, if those who hold it choose to share it. This number doesn’t even include government entities with access to your data, because they aren’t considered covered entities or business associates.

Your data doesn’t stay in the clinic. It gets passed upstream, without your knowledge and without needing your consent. No one needs to notify you when your data is shared. And you’re not allowed to opt out. You can’t even get a list of everyone it’s been shared with. It’s just… out there.

The doctor may think they’re just “adding it to your chart”. But what they’re actually doing is feeding a giant, invisible machine that exists far beyond that exam room.

We have an entire video diving into the details if you’re interested: You Have No Medical Privacy

Data breaches

Legal sharing isn’t the only risk of this accumulated data. What about data breaches? This part is almost worse.

Healthcare systems are one of the top targets for ransomware attacks. That’s because the data they hold is extremely valuable. Full names, birth dates, Social Security numbers, medical histories, and billing information, all in one place.

It’s hard to find a major health system that hasn’t been breached. In fact, a 2023 report found that over 90% of healthcare organizations surveyed had experienced a data breach in the past three years.

That means if you’ve been to the doctor in the last few years, there’s a very real chance that some part of your medical file is already floating around, whether on the dark web, in a leaked ransomware dump, or being sold to data brokers.

The consequences aren’t just theoretical. In one high-profile case of such a healthcare breach, people took their own lives after private details from their medical files were leaked online.

So when your doctor says, “This is just for your chart,” understand what that really means. You’re not just trusting your doctor. You’re trusting a system that has a track record of failing to protect you.

What happens when trust breaks

Once you start becoming aware of how your data is being collected and shared, you see it everywhere. And in high-stakes moments, like a medical visit, pushing back is hard. You’re at your most vulnerable. And the power imbalance becomes really obvious.

So what do patients do when they feel that their trust has been violated? They start holding back. They say less. They censor themselves.

This is exactly the opposite of what should happen in a healthcare setting. Your relationship with your doctor is supposed to be built on trust. But when you tell your doctor something in confidence, and they say, “I’m going to log it anyway,” that trust is gone.

The problem here isn’t just one doctor. From their perspective, they’re doing what’s expected of them. The entire system is designed to prioritize documentation and compliance over patient privacy.

Privacy is about consent, not secrecy

But privacy matters. And not because you have something to hide. You might want your doctor to have full access to everything. That’s fine. But the point is, you should be the one making that call.

Right now, that choice is being stripped away by systems and policies that normalize forced disclosure.

We’re being told our preferences don’t matter. That our data isn’t worth protecting. And we’re being conditioned to stay quiet about it.

That has to change.

So what can you do?

First and foremost, if you’re in a high-stakes medical situation, focus on getting the care you need. Don’t let privacy concerns keep you from getting help.

But when you do have space to step back and ask questions, do it. That’s where change begins.

  • Ask what data is necessary and why.
  • Say no when something feels intrusive.
  • Let your provider know that you care about how your data is handled.
  • Support policy efforts that restore informed consent in healthcare.
  • Share your story, because this isn’t just happening to one person.

The more people push back, the harder it becomes for the system to ignore us.

You should be able to go to the doctor and share what’s relevant, without wondering who’s going to have access to that information later.

The exam room should feel safe. Right now, it doesn’t.

Healthcare is in urgent need of a privacy overhaul. Let’s make that happen.

 

Yours In Privacy,
Naomi

 


Now you’re forced to pay for Facebook or be tracked by Meta

Mass surveillance

Published 22 July 2025
– By Editorial Staff
2 minute read

Social media giant Meta is now implementing its criticized “pay or be tracked” model for Swedish users. Starting Thursday, Facebook users in Sweden and some other EU countries are forced to choose between paying €7 per month for an ad-free experience or accepting extensive data collection. Meanwhile, the company faces daily fines from the EU if the model isn’t changed.

Swedish Facebook users have been greeted since Thursday morning with a new choice when logging into the platform. A message informs them that “you must make a choice to use Facebook” and explains that users “have a legal right to choose whether you want to consent to us processing your personal data to show you ads.”

Screenshot from Facebook.

The choice is between two alternatives: either pay €7 monthly for an ad-free Facebook account where personal data isn’t processed for advertising, or consent to Meta collecting and using personal data for targeted ads.

As a third alternative, “less personalized ads” is offered, which means Meta uses somewhat less personal data for advertising purposes.

Screenshot from Facebook.

Background in EU legislation

The introduction of the payment model comes after the European Commission in March launched investigations of Meta along with Apple and Google for suspected violations of the DMA (Digital Markets Act). For Meta’s part, the investigation specifically concerns the new payment model.

In April, Meta was ordered to pay a €200 million fine under the DMA because the payment model was not considered to meet legal requirements. Meta has appealed the decision.

According to reports from Reuters at the end of June, the social media giant now risks daily penalties if the company doesn’t make necessary changes to its payment model to comply with EU regulations.

The new model represents Meta’s attempt to adapt to stricter European data legislation while the company tries to maintain its advertising revenue through the alternative payment route.

Your data has been stolen – now what?

Why aliases matter, and why deleting yourself from people search sites isn’t enough.

Published 19 July 2025
– By Naomi Brockwell
5 minute read

If you’ve ever used a major tech platform (and let’s be honest—everyone has), your data has been stolen.

That’s not alarmism, that’s just the truth.

If you want to check whether your email or phone number has been involved in any of these breaches, go to HaveIBeenPwned.com. It’s a free tool that scans known data leaks and tells you where and when your information may have been exposed.

But breaches are just the beginning.

What’s often more insidious is how companies you trusted with your information—like your electric company or phone provider—turn around and sell that data. Yes, even your home address. And once it’s sold, there’s no getting it back.

You probably also give your data to companies that promise insights—like ancestry reports, health forecasts, or personality surveys. But behind the feel-good marketing, these platforms are often just data-harvesting operations. Sometimes they’re selling your information outright. Other times, a breach or bankruptcy sends your most sensitive data to the auction block—sold to pharma companies, insurers, or even foreign governments.

Deletion won’t save you

One thing people often try is deleting themselves from people search sites, or opting out of data broker lists. But it’s like playing whack-a-mole. Even if you get your info removed from one site, your bank, phone company, and utility providers are still selling it—so it just pops up again somewhere else.

And here’s the real problem: you can’t rewind the clock. Once your data hits the dark web, it’s out there for good. You can’t recall it. You can’t erase the copies. And if you keep using the same email, phone number, and payment info everywhere, your profile rebuilds itself instantly.

The system is designed to remember you.

What you can do

1) Use aliases

The real solution is to use aliases—unique emails, phone numbers, and payment methods—to make sure breached data can’t be easily correlated. Every alias breaks the link between you and your data trail, making it harder for data brokers to rebuild your profile.

  • Email: Use tools like SimpleLogin or DuckDuckGo Email Protection (powered by SimpleLogin) to auto-generate a unique email address for every account. You’ll still receive everything in one inbox.
  • Phone numbers: Try MySudo or Cloaked to create multiple VoIP numbers—one for work, one for deliveries, one for banking, etc.
  • Payments: Use Privacy.com (US-only) or Revolut (international) to generate burner credit cards and keep your real financial details hidden.

Each alias adds friction for trackers, data brokers, and anyone trying to stitch together your digital life.
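To make the correlation point concrete, here is a toy sketch of how two unrelated breach dumps can be stitched together when they share one identifier. All names, emails, and records below are hypothetical, and the "join" is a deliberately minimal stand-in for what data brokers do at scale:

```python
# Toy illustration: two hypothetical breach dumps, joined on a shared identifier.
breach_a = [{"email": "jane@example.com", "address": "12 Oak St"}]
breach_b = [{"email": "jane@example.com", "dob": "1990-01-01"}]


def correlate(a, b, key="email"):
    """Merge records from two leaks that share the same identifier value."""
    index = {rec[key]: rec for rec in a}
    merged = []
    for rec in b:
        if rec[key] in index:
            merged.append({**index[rec[key]], **rec})
    return merged


# One reused email lets the two leaks be combined into a fuller profile...
profile = correlate(breach_a, breach_b)
# profile → [{"email": "jane@example.com", "address": "12 Oak St", "dob": "1990-01-01"}]

# ...while a unique alias per service breaks the join entirely.
alias_a = [{"email": "shop.x7f2@alias.example", "address": "12 Oak St"}]
alias_b = [{"email": "bank.q9k1@alias.example", "dob": "1990-01-01"}]
nothing = correlate(alias_a, alias_b)
# nothing → []
```

Real broker pipelines are far more sophisticated (fuzzy matching on names, addresses, device IDs), but the principle is the same: every shared identifier is a join key, and every alias removes one.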

2) Clean up old accounts

Your current email and phone number are likely tied to:

  • Old accounts
  • Shopping sites
  • Loyalty programs
  • Health portals
  • Social media
  • Subscription services

You not only need to stop handing over the same identifiers; you should also go back and replace them anywhere they’ve already been used. Go through your accounts one by one. Update them with new aliases where possible. Delete your home address when it’s not essential. The goal is to scrub your personal info from as many places as possible, so the next breach doesn’t keep exposing the same data.

3) Create new accounts (when needed)

Some services won’t let you fully erase your trail. In those cases, the cleanest option may be to start fresh—with a new account and new aliases—and then delete the old one.

4) Monitor for future leaks

Stay ahead of future breaches by regularly checking what’s already out there.

  • Have I Been Pwned: Enter your email or phone number to see if they’ve appeared in known data leaks. It’s a quick way to know what’s been exposed.
  • IntelTechniques Search Tools: A powerful suite of OSINT tools that shows what others can find out about you online—from addresses to usernames to social accounts.
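Have I Been Pwned also exposes its password-breach data programmatically. A sketch of how such a check can work, without ever sending your full password or hash: the Pwned Passwords "range" endpoint uses k-anonymity, so the client sends only the first 5 hex characters of the password’s SHA-1 hash and matches the returned suffixes locally. (Note: checking *emails* against breaches via the HIBP API requires an API key; the password range endpoint is the free, keyless one sketched here.)

```python
import hashlib
import urllib.request


def parse_range_response(body: str, suffix: str) -> int:
    """Parse the API's 'HASH_SUFFIX:COUNT' lines; return how many times
    the matching hash suffix appeared in known breaches (0 if absent)."""
    for line in body.splitlines():
        candidate, _, count = line.strip().partition(":")
        if candidate == suffix:
            return int(count)
    return 0


def pwned_count(password: str) -> int:
    """Check a password against the Pwned Passwords range API.

    Only the first 5 hex chars of the SHA-1 hash leave the machine
    (k-anonymity); the full hash is never sent to the service.
    """
    sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    url = f"https://api.pwnedpasswords.com/range/{prefix}"
    with urllib.request.urlopen(url) as resp:
        body = resp.read().decode("utf-8")
    return parse_range_response(body, suffix)
```

A nonzero count means that password has appeared in breach corpora and should be retired everywhere it was reused.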

You gave away your DNA. Now it’s for sale

Millions of people gave 23andMe their DNA—now the company is in Chapter 11 bankruptcy, and that data could be sold to pharma companies, insurers, or even foreign governments. With the business on shaky ground, the idea of your genetic code hitting the open market is chilling. You can’t change your DNA—once it’s out, it’s out forever.

If you’re a 23andMe user, you can still log into your dashboard and:

  • Go to Account → Settings → Delete Data
  • Revoke your research consent
  • Request sample destruction

But there’s no guarantee it’ll be honored. And deletion doesn’t undo exposure.

So how do we avoid this in the future? Most companies quietly include clauses in their Terms of Service allowing them to sell your data in the event of bankruptcy or acquisition. It’s common—but that doesn’t mean it’s harmless. Just because it’s buried in fine print doesn’t mean it’s acceptable.

Before handing over sensitive data, ask yourself: Would I be okay with this information being sold to anyone with enough cash?
If not, it’s worth reconsidering whether the service is worth it.

The 23andMe collapse isn’t an isolated incident—it’s a symptom of a deeper problem. We keep trusting companies with intimate, irreversible data. And time after time, that data ends up somewhere we never agreed to.

Takeaways

Some breaches are just email addresses. Others are everything—your identity, your relationships, even your biology.

And when a company that promised to protect your most personal information collapses, that data doesn’t disappear. It becomes an asset. It’s auctioned. It’s repackaged. It becomes someone else’s opportunity.

That’s the world we’re living in. But you still have options.

You can choose to make your data harder to capture. Harder to link. Harder to weaponize. You can stop recycling identifiers that have already been compromised. You can stop giving out pieces of yourself you can’t get back.

This isn’t about disappearing.

It’s about refusing to be predictable.

Privacy is a discipline—and a form of resistance.

And no matter how much you’ve already given away, you can still choose not to hand over the rest.


Yours in Privacy,
Naomi

Naomi Brockwell is a privacy advocate and professional speaker, MC, interviewer, producer, and podcaster specialising in blockchain, cryptocurrency, and economics. She runs the NBTV channel on Rumble.

Our independent journalism needs your support!
Consider a donation.

You can donate any amount of your choosing, one-time payment or even monthly.
We appreciate all of your donations to keep us alive and running.

Don’t miss another article!

Sign up for our newsletter today!

Get uncensored news – free from industry interests and political correctness – from the Polaris of Enlightenment, every week.