
Without consent

How parents unknowingly build surveillance files on their children.

Published May 3, 2025 – By Naomi Brockwell

Your child’s first digital footprint isn’t made by them—it’s made by you

What does the future look like for your child?

Before they can even talk, many kids already have a bigger digital footprint than their parents did at 25.

Every ultrasound shared on Facebook.
Every birthday party uploaded to Instagram.
Every proud tweet about a funny thing they said.

Each post seems harmless—until you zoom out and realize you're building a permanent, searchable, biometric dossier on your child, curated by you.

This isn’t fearmongering. It’s the reality of a world where data is forever.
And it’s not just your friends and family who are watching.

Your kid is being profiled before they hit puberty

Here’s the uncomfortable truth:

When you upload baby photos, you’re training facial recognition databases on their face—at every age and stage.

When you post about their interests, health conditions, or behavior, you’re populating detailed profiles that can predict who they might become.

These profiles don’t just sit idle.
They’re analyzed, bought, and sold.

By the time your child applies for a job or stands up for something they believe in, they may already be carrying a hidden score assigned by an algorithm—built on data you posted.

When their childhood data comes back to haunt them

Imagine your child years from now, applying for a travel visa, a job, or just trying to board a flight.

A background check pulls information from facial recognition databases and AI-generated behavior profiles, flagging them for additional scrutiny based on “historic online associations.”

They’re pulled aside. Interrogated. Denied entry. Or worse, flagged permanently.

Imagine a future law that flags people based on past “digital risk indicators”—and your child’s online record becomes a barrier to accessing housing, education, or financial services.

Insurance companies can use their profile to label them a risky customer.

Recruiters might quietly filter them out based on years-old digital behavior.

Not because they did something wrong—but because of something you once shared.

Data doesn’t disappear.
Governments change. Laws evolve.
But surveillance infrastructure rarely gets rolled back.

And once your child’s data is out there, it’s out there forever.
Feeding systems you’ll never see.
Controlled by entities you’ll never meet.

For purposes you’ll never fully understand.

The rise of biometric surveillance—and why it targets kids first

Take Discord’s new AI selfie-based age verification. To prove they’re 13+, children are encouraged to submit selfies—feeding sensitive biometric data into AI systems.

You can change your password. You can’t change your face.

And yet, we’re normalizing the idea that kids should hand over their most immutable identifiers just to participate online.

Some schools already collect facial scans for attendance. Some toys use voice assistants that record everything your child says.

Some apps marketed as “parental control” tools grant third-party employees backend access to your child’s texts, locations—even live audio.

Ask yourself: Do you trust every single person at that company with your child’s digital life?

“I know you love me, and would never do anything to harm me...”

In the short film Without Consent, by Deutsche Telekom, a future version of a young girl named Ella speaks directly to her parents. She pleads with them to protect her digital privacy before it’s too late.

She imagines a future where:

  • Her identity is stolen.
  • Her voice is cloned to scam her mom into sending money.
  • Her old family photo is turned into a meme, making her a target of school-wide bullying.
  • Her photos appear on exploitation sites—without her knowledge or consent.

It’s haunting because it’s plausible.

This is the world we’ve built.
And your child’s data trail—your posts—is the foundation.

The most powerful privacy lesson you can teach? How you live online.

Children learn how to navigate the digital world by watching you.

What are you teaching them if you trade their privacy for likes?

The best gift you can give them isn’t a new device—it’s the mindset and tools to protect themselves in a world that profits from their exposure.

Even “kid-safe” tech often betrays that trust.

Baby monitors have leaked footage.

Tracking apps have glitched and exposed locations of random children (yes, really).

Schools collect and store sensitive information with barely any safeguards—and breaches happen all the time.

How to protect your child’s digital future

Stop oversharing
Avoid posting photos, birthdays, locations, or anecdotes about your child online—especially on platforms that monetize engagement.

Ditch spyware apps
Instead of surveillance, foster open dialogue. If monitoring is necessary, choose open-source, self-hosted tools where you control the data—not some faceless company.

Teach consent early
Help your child understand that their body, thoughts, and information are theirs to control. Make digital consent a family value.

Opt out of biometric collection
Say no to tools that demand selfies, facial scans, or fingerprints. Fight back against the normalization of biometric surveillance for kids.

Use aliases and VoIP numbers
When creating accounts for your child, use email aliases and VoIP numbers to avoid linking their real identity across platforms.

Push schools and apps for better policies
Ask your child’s school: What data do they collect? Who has access? Is it encrypted?
Push back on apps that demand unnecessary permissions. Ask hard questions.

This isn’t paranoia—it’s parenting in the digital age

This is about protecting your child’s right to grow up without being boxed in by their digital past.

About giving them the freedom to explore ideas, try on identities, and make mistakes—without it becoming a permanent record.

Privacy is protection.
It’s dignity.
It’s autonomy.

And it’s your job to help your child keep it.
Let’s give the next generation a chance to write their own story.

 

Yours in privacy,
Naomi

Naomi Brockwell is a privacy advocate and professional speaker, MC, interviewer, producer, and podcaster, specialising in blockchain, cryptocurrency, and economics. She runs the NBTV channel on Rumble.


Email was never built for privacy

Mass surveillance

How Proton makes email privacy simple.

Published November 8, 2025, 8:16 am – By Naomi Brockwell

Email was never built for privacy. It’s closer to a digital postcard than a sealed letter: it bounces through and sits on servers you don’t control, and mainstream providers like Gmail can read and analyze everything inside.

Email isn’t going anywhere; it’s baked into how the digital world communicates. Luckily, there are ways to make your emails more private. One tool you can use is PGP, which stands for “Pretty Good Privacy”.

PGP is one of the oldest and most powerful tools for email privacy. It takes your message and locks it with the recipient’s public key, so only they can unlock it with their private key. That means even if someone intercepts the email, whether it’s a hacker, your ISP, or a government agency, they see only scrambled text.
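If you like seeing ideas in code, here is a minimal sketch of that lock-and-unlock flow using the python-gnupg wrapper around GnuPG (both need to be installed; the address, passphrase, and message are made up for the demo, and this is just an illustration of the concept, not how Proton implements it):

import gnupg

gpg = gnupg.GPG(gnupghome="/tmp/pgp-demo")  # throwaway keyring for the demo
key = gpg.gen_key(gpg.gen_key_input(
    name_email="recipient@example.com", passphrase="demo-passphrase"))

# Anyone can lock a message with the recipient's PUBLIC key...
ciphertext = str(gpg.encrypt("Meet me at noon.", key.fingerprint))
print(ciphertext)  # scrambled text: -----BEGIN PGP MESSAGE----- ...

# ...but only the holder of the matching PRIVATE key can unlock it.
print(gpg.decrypt(ciphertext, passphrase="demo-passphrase").data)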

Unfortunately, PGP is notoriously complicated to use on your own. Normally, you’d have to install command-line tools, generate keys manually, and run cryptic commands just to send an encrypted email.

But Proton Mail makes all of that easy, and builds PGP right into your inbox.

How Proton makes PGP simple

Proton is a great, privacy-focused email provider (and no, they’re not sponsoring this newsletter; they’re simply an email provider that I like to use).

If you email someone within the Proton ecosystem (i.e., send an email from one Proton user to another), your email is automatically end-to-end encrypted using PGP.

But what if you email someone outside of the Proton ecosystem?

Here’s where it would usually get tricky.

First, you’d need to install a PGP client, which is a program that lets you generate and manage your encryption keys.

Then you’d run command-line prompts to choose the key type, size, and expiration, associate the key with the email address you want to use, and finally export your public key. It’s complicated.
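To make that concrete, here is roughly what that manual workflow looks like, again sketched with the python-gnupg wrapper (the key parameters, address, and passphrase are illustrative placeholders, not recommendations):

import gnupg

gpg = gnupg.GPG(gnupghome="/tmp/pgp-demo")

# Choose the key type, size, and expiration, and tie the key to an email address
params = gpg.gen_key_input(
    key_type="RSA",
    key_length=3072,
    name_email="you@example.com",
    expire_date="2y",
    passphrase="a strong passphrase",
)
key = gpg.gen_key(params)  # key generation can take a little while

# Export the public key in ASCII-armored (.asc) form so others can encrypt to you
with open("my_public_key.asc", "w") as f:
    f.write(gpg.export_keys(key.fingerprint))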

But if you use Proton, they make using PGP super easy.

Let’s go through how to use it.

Automatic search for public PGP key

First of all, when you type an email address into the “To” field in Proton Mail, it automatically searches for a public PGP key associated with that address. Proton checks its own network, your contact list, and Web Key Directory (WKD) on the associated email domain.

WKD is a lightweight web standard that lets someone publish their public key on their own domain in a way that email apps can easily find. For example, if Proton finds a key for an address published at its domain, it will automatically encrypt your message with it.
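For the curious, here is a simplified sketch of how a "direct" WKD lookup works under the hood, based on my reading of the WKD spec (real clients, Proton included, handle more cases, such as an "advanced" variant hosted on an openpgpkey subdomain):

import hashlib
import urllib.parse

# z-base-32 alphabet used by WKD to encode the hashed local part of the address
ZB32 = "ybndrfg8ejkmcpqxot1uwisza345h769"

def zbase32(data: bytes) -> str:
    bits = "".join(f"{b:08b}" for b in data)
    bits += "0" * (-len(bits) % 5)  # pad to a multiple of 5 bits
    return "".join(ZB32[int(bits[i:i+5], 2)] for i in range(0, len(bits), 5))

def wkd_url(email: str) -> str:
    local, domain = email.split("@")
    digest = hashlib.sha1(local.lower().encode()).digest()
    return (f"https://{domain.lower()}/.well-known/openpgpkey/hu/"
            f"{zbase32(digest)}?l={urllib.parse.quote(local)}")

print(wkd_url("user@example.com"))
# A mail app fetches that URL; if a key comes back, it can encrypt to that address.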

If they find a key, you’ll see a green lock next to the recipient in the ‘To’ field, indicating the message will be encrypted.

You don’t need to copy, paste, or import anything. It just works.

Great, your email has been automatically encrypted using PGP, and only the recipient of the email will be able to use their private key to decrypt it.

Manually uploading someone’s PGP key

What if Proton doesn’t automatically find someone’s PGP key? You can hunt down the key manually and import it. Some people will have their key available on their website, either in plain text, or as a .asc file. Proton allows you to save this PGP key in your contacts.

To add one manually:

  • Type the contact’s email address in the "to" field.
  • Right-click on that address and select "view contact details".
  • Click the settings wheel to go to email settings, and select "show advanced PGP settings".
  • Under "public keys", select "upload" and upload their public key in .asc format.

Once the key is uploaded, the “encrypt emails” toggle will automatically switch on, and all future emails to that contact will automatically be protected with PGP. You can turn that off at any time, and also remove or replace the public key.

How do others secure emails to you using PGP?

Super! So you’ve sent an encrypted email to someone using their PGP key. But what if they want to send you an email back? Will that reply automatically be end-to-end encrypted (E2EE) using PGP? Not necessarily.

In order for someone to send you an end-to-end encrypted email, they need your public PGP key.

Export your public key from Proton

Proton automatically generates a public-private key pair for each address that you have configured inside Proton Mail, and manages encryption inside its own network.

If you want people outside Proton to be able to encrypt messages to you, the first step is to export your public key from your Proton account so you can share it with them.

To do this:

  • Go to Settings
  • Click “All settings”
  • Select “encryption and keys”
  • Under “email encryption keys” you’ll have a dropdown menu of all your email addresses associated with your Proton account. Select the address that you want to export the public key for.
  • Under the “action” column, click “export public key”

The key will download as an .asc file, and you’ll be asked where you want to save it.

Normally a PGP key is raw binary data that only your computer can read. The .asc file (“ASCII armor”) wraps that key in readable characters, so it ends up in a format that looks something like this (a shortened placeholder, not a real key):
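-----BEGIN PGP PUBLIC KEY BLOCK-----

mQINBF... (many more lines of seemingly random letters, digits, + and / characters)
=Abc1
-----END PGP PUBLIC KEY BLOCK-----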

Sharing your public key

Now that you’ve downloaded the public key, how do you share it with people so that they can contact you privately? There are several ways.

For @proton.me and @protonmail.com addresses, Proton publishes your public key in its WKD automatically. You don’t have to do anything.

For custom domains configured in Proton Mail, Proton doesn’t host WKD for you. You can publish WKD yourself on your own domain by serving it at a special path on your website. Or you can delegate WKD to a managed service. Or if you don’t want to use WKD at all, you can upload your key to a public keyserver like keys.openpgp.org, which provides another way for mail apps to discover it.

We’re not going to cover those setups in this article. Instead here are simpler ways to share your public key:

1) You can send people your .asc file directly if you want them to be able to encrypt emails to you (be sure to let them know which email address is associated with this key), or you can host this .asc file on your website for people to download.

2) You can open the .asc file in a text editor, copy the key text, and send people that text or publish it on your website. This is what I have done.

This way if anyone wants to send me an email more privately, they can do so.

But Proton makes it even easier to share your PGP key: you can opt to automatically attach your public key to every email.

To turn this on:

  1. Go to Settings → Encryption & keys → External PGP settings
  2. Enable
    • Sign external messages
    • Attach public key

Once this is on, every email you send will automatically include your public key as a small .asc attachment.

This means anyone using a PGP-capable mail client (like Thunderbird, Mailvelope, etc.) can import it immediately, with no manual steps required.
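If you’re curious what that import step looks like under the hood, here is a hedged sketch with python-gnupg (the file name and address are made up; mail clients like Thunderbird do the equivalent for you in a couple of clicks):

import gnupg

gpg = gnupg.GPG(gnupghome="/tmp/pgp-demo")

# Import the .asc public key that arrived attached to the email
with open("publickey.asc") as f:
    result = gpg.import_keys(f.read())
print(result.fingerprints)  # fingerprint(s) of the newly imported key

# Replies can now be encrypted to that address
reply = gpg.encrypt("Thanks, got your message.", "sender@example.com", always_trust=True)
print(str(reply)[:40], "...")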

Password-protected emails

Proton also lets you send password-protected emails, so even if the other person doesn’t use PGP you can still keep the contents private. This isn’t PGP: Proton encrypts the message and attachments in your browser, and the recipient gets a link to a secure viewing page. They enter a password you share separately to open it. Their provider (like Gmail) only sees a notification email with a link, not the message itself. You can add a password hint, and the message expires after a set time (28 days by default).

The bottom line

Email privacy doesn’t have to be painful. Proton hides the complexity, whether through the password option or by automating much of the PGP process for you: it automatically looks up recipients’ keys, encrypts your messages, and makes your key easy for others to use when they reply.

As Phil Zimmermann, the creator of PGP, explained in Why I Wrote PGP:

“PGP empowers people to take their privacy into their own hands. There has been a growing social need for it. That’s why I wrote it.”

We’re honored to have Mr. Zimmermann on our board of advisors at Ludlow Institute.

Pioneers like him fought hard so we could protect our privacy. It’s on us to use the tools they gave us.

 

Yours in privacy,
Naomi

Naomi Brockwell is a privacy advocate and professional speaker, MC, interviewer, producer, and podcaster, specialising in blockchain, cryptocurrency, and economics. She runs the NBTV channel on Rumble.

Swedish police secretly using Palantir’s surveillance system for years

Mass surveillance

Published November 4, 2025 – By Editorial staff
Palantir Technologies headquarters in Silicon Valley.

The Swedish Police Authority has for at least five years been using an AI-based analysis tool from the notorious American security company Palantir.

The program, which has been specially adapted for Swedish conditions, can within seconds compile comprehensive profiles of individuals by combining data from various registers.

Behind the system is the American tech company Palantir, which is internationally controversial and has been accused of involvement in surveillance activities. This summer, a UN report identified the company as complicit in genocide in Gaza.

The Swedish version of Palantir's Gotham platform is called Acus and uses artificial intelligence to compile, analyze and visualize large amounts of information. According to an investigation by the left-wing newspaper Dagens ETC, investigators using the system can quickly obtain detailed personal profiles that combine data from surveillance and criminal registers with information from Bank-id (Sweden's national digital identification system), mobile operators and social media.

A former analyst employed by the police, who chooses to remain anonymous, describes to the newspaper how the system was surrounded by great secrecy:

— There was very much hush-hush around that program.

Rejection of document requests

When the newspaper requested information about the system and how it is used, it was met with rejection: the Swedish Police Authority cited confidentiality and stated that it can neither "confirm nor deny relationships with Palantir", referring to "danger to national security".

This is not the first time Palantir's tools have been used in Swedish law enforcement. In the high-profile Operation Trojan Shield, the FBI, with support from Palantir's technology, managed to infiltrate and intercept the encrypted messaging app Anom.

The operation led to the arrest of a large number of people connected to serious crime, both in Sweden and internationally. The FBI called the operation "a shining example of innovative law enforcement".

But the method has also received criticism. Attorney Johan Grahn, who has represented defendants in several Anom-related cases, is critical of the approach.

— In these cases, it has been indiscriminate mass surveillance, he states.

Mapping dissidents

Palantir has long sparked debate due to its assignments and methods. The company works with both American agencies and foreign security services.

In the United States, the surveillance company's systems are used to map undocumented immigrants. In the United Kingdom, British police have been criticized for using the company's technology to build registers of citizens' sex lives, political views, religious affiliation, ethnicity and union involvement – information that according to observers violates fundamental privacy principles.

This summer, a UN report also identified Palantir as co-responsible for acts of genocide in Gaza, after the company's analysis tools were allegedly used in attacks where Palestinian civilians were killed.

How extensive the Swedish police's use of the system is, and what legal frameworks govern the handling of Swedish citizens' personal data in the platform, remains unclear as long as the Swedish Police Authority chooses to keep the information classified.

OpenAI shifts from Microsoft to Amazon in new partnership

Published November 4, 2025 – By Editorial staff
OpenAI logo. The AI company has signed a seven-year cloud agreement with Amazon Web Services worth $38 billion.

AI company OpenAI has entered into a comprehensive agreement with Amazon Web Services worth $38 billion. The deal marks a clear step away from Microsoft’s previous position as OpenAI’s exclusive cloud provider.

According to the agreement announced on Monday, OpenAI gains immediate access to hundreds of thousands of graphics cards from Nvidia in American data centers.

"Scaling frontier AI requires massive, reliable compute. Our partnership with AWS strengthens the broad compute ecosystem that will power this next era and bring advanced AI to everyone", says OpenAI CEO Sam Altman to CNBC.

Amazon's stock closed four percent higher on Monday and reached a record high closing value. Over the past two trading days, the e-commerce giant has risen 14 percent, the best two-day period since November 2022.

Seven-year agreement

Until this year, OpenAI had an exclusive cloud agreement with Microsoft, which first backed the company in 2019 and has invested a total of $13 billion. In January, Microsoft announced that it would no longer be the exclusive cloud provider but would instead hold a right of first refusal on new requests.

Last week, Microsoft's preferential status expired according to renegotiated commercial terms, which freed OpenAI to collaborate more broadly with other major cloud providers. However, OpenAI will continue to spend large sums with Microsoft, which was confirmed last week when the company announced they will purchase additional Azure services for $250 billion.

For Amazon, the agreement is significant both in size and scope, but also somewhat sensitive since the cloud giant has close ties with OpenAI's rival Anthropic. Amazon has invested billions of dollars in Anthropic and is currently building an $11 billion data center in Indiana that is designed exclusively for Anthropic.

The agreement between OpenAI and Amazon is valid for seven years, but at present, no plans beyond 2026 have been finalized.

IT expert warns: ID requirements online bring us closer to totalitarian surveillance

Mass surveillance

Published November 3, 2025 – By Editorial staff
Swedish Liberal Party politician Nina Larsson wants to introduce age verification – but IT experts warn of serious consequences

IT security specialist Karl Emil Nikka advises Sweden against following the UK's example of mandatory age verification on pornographic websites. The risk of data breaches and increased surveillance is too great, he argues.

Swedish Gender Equality Minister Nina Larsson wants Sweden to introduce technical barriers requiring age verification on pornographic websites to protect children from explicit sexual content.

The proposal is based on the British model where websites must verify users' age or identity, for example through authentication with ID cards or credit cards.

But Karl Emil Nikka, an IT security specialist, is strongly critical of the proposal. He points to serious flaws in the British solution, not least the risk of data breaches.

As an example, he mentions the leak from the messaging platform Discord, where photos of 70,000 users ended up in the wrong hands after a cyberattack in connection with the law change. Additionally, the barriers are easy to circumvent using VPN services, which caused the use of such services to skyrocket when the British law came into effect.

Risks surveillance

Nikka also warns that requirements for online identification bring Sweden closer to a type of surveillance that otherwise only exists in totalitarian states.

— It's a small problem as long as we live in a democracy, but it's damn dangerous to believe we always will, he says.

Instead, parents should be encouraged to use the controls already built into phones and other devices, where one can easily choose which sites to block.

— From a security perspective, it's the only reasonable solution, Nikka states.

Foreign sites attract

An additional risk with technical barriers is that young users turn to lesser-known foreign sites that don't care about legal requirements, Nikka argues. Jannike Tillå, head of communications and social benefit at the Swedish Internet Foundation, confirms this picture.

— According to experts in various countries, it seems that people have turned to other lesser-known websites abroad, she says.

However, Tillå believes that technical solutions can have a place, provided they are more anonymous than the British ones and combined with other measures.

— It can help raise thresholds and reduce exposure.

Conversations crucial

At the same time, she emphasizes the importance of complementing any technical solutions with investments in digital literacy and, above all, conversations between parents and children.

— That's where real protection begins. We know that many parents find it difficult to have the porn conversation, but you should do it early, says Jannike Tillå.

She stresses that the question of privacy and freedom online must not be set against child protection.

— We must find that balance and manage both things, she concludes.

