
Polaris of Enlightenment

The most dangerous thing in your browser

The dark side of browser extensions.

Published 26 April 2025
– By Naomi Brockwell
6 minute read
You’re browsing the web, trying to make life a little easier. Maybe you install an extension to block annoying popups, write better emails, or even just save a few bucks with coupon codes.

Seems harmless, right?

Extensions are far more powerful, and far more dangerous, than most people realize.

They might be spying on you, logging your browsing history, injecting malicious code, even stealing your passwords and cookies – all without you even realizing it.

Let’s talk about the dark side of browser extensions. Because once you see what they’re capable of, you might think twice before installing another one.

Real-world attacks: From spyware to crypto theft

This isn’t a “worst-case scenario”. It’s already happening.

  • North Korean hackers have used malicious browser extensions to spy on inboxes and exfiltrate sensitive emails.
  • The DataSpii scandal exposed the private browsing data of over 4 million users, harvested and sold by innocent-looking productivity tools.
  • Mega.nz, a privacy-respecting file storage service, had its Chrome extension hacked. Malicious code was pushed to users, silently stealing passwords and crypto wallet keys. It took them four hours to catch it—more than enough time for real damage.
  • Cyberhaven, a cybersecurity company, was breached in late 2024. Their extension was hijacked and used to scrape cookies, session tokens, and authentication credentials—compromising over 400,000 users.

How is this even allowed to happen?

  1. Extensions can silently update themselves. The code running on your device can change at any time—without your knowledge or approval.
  2. Permissions are ridiculously broad. Even if a malicious extension has the same permissions as a good one, it can abuse them in ways the browser can’t distinguish. Once you grant access, it’s basically an honor system.
  3. Extensions can’t monitor each other. If you think that installing a malware-blocking extension is going to protect you, think again. Your defense extensions have no way of knowing what your other extensions are up to. Malicious ones can lurk undetected, even alongside security tools.

A shadow market for extensions

Extensions aren’t just targets for hackers—they’re targets for buyers. Once an extension gets popular, developers often start getting flooded with offers to sell. And because extensions can silently update, a change in ownership can mean a complete change in behavior—without you ever knowing.

Got an extension with 2 million Facebook users? Buy it, slip in some malicious code, and suddenly you’re siphoning data from 2 million people.

There are entire marketplaces for buying and selling browser extensions—and a thriving underground market too.

Take The Great Suspender, for example. It started as a widely trusted tool that saved memory by suspending unused tabs. Then the developer quietly sold it. The new owner injected spyware, turning it into a surveillance tool. Millions of users were compromised before it was finally flagged and removed.

The danger is in the permissions

One of the biggest challenges? Malicious extensions often ask for the same permissions as good ones. That makes it worth understanding exactly what each permission allows, and how vulnerable it could leave you in the wrong hands.

We spoke to Matt Frisbie, author of Building Browser Extensions, who explained the capabilities of some of these permissions:
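For orientation: an extension declares its powers up front in its manifest file. Below is a hypothetical, entirely invented extension requesting every permission discussed in this section, in Chrome's Manifest V3 format. At install time, the user sees only a short human-readable summary of this list.

```json
{
  "name": "Handy Coupon Finder",
  "manifest_version": 3,
  "permissions": ["history", "cookies", "tabs", "tabCapture", "webRequest", "geolocation"],
  "host_permissions": ["<all_urls>"],
  "content_scripts": [
    { "matches": ["<all_urls>"], "js": ["content.js"] }
  ]
}
```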

Browsing history

Matt Frisbie:

“The browser will happily dump out your history as an array.”

The browsing history permission grants full access to every site you visit—URLs, timestamps, and frequency. This can help build out a detailed profile on you.
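To see how little effort that profile takes, here is a minimal sketch. The entries below are invented stand-ins for what `chrome.history.search()` hands back to an extension holding the history permission: one object per visited URL, with visit counts and timestamps.

```javascript
// Invented stand-in for a chrome.history.search() result array.
const historyDump = [
  { url: "https://mybank.example/login", visitCount: 42, lastVisitTime: 1714000000000 },
  { url: "https://clinic.example/appointments", visitCount: 7, lastVisitTime: 1714100000000 },
  { url: "https://news.example/politics", visitCount: 120, lastVisitTime: 1714200000000 },
];

// A few lines turn that dump into a behavioral profile:
// which hosts you frequent, ranked by visit count.
function profile(entries) {
  return entries
    .map((e) => ({ host: new URL(e.url).hostname, visits: e.visitCount }))
    .sort((a, b) => b.visits - a.visits);
}

console.log(profile(historyDump));
```

Banking, medical, and political reading habits fall out of the data with no analysis beyond sorting.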

Cookies

The cookie permission exposes your browser’s cookies—including authentication tokens. That means a malicious extension can impersonate you and access your accounts without needing a password or 2FA.

Matt Frisbie:

“If someone steals your cookies, they can pretend to be you in all sorts of nasty ways.”

This is exactly how Linus Tech Tips had their YouTube account hijacked.
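A sketch of what that theft looks like, using an invented stand-in for the array `chrome.cookies.getAll({})` returns to an extension with the cookies permission: every cookie in the browser, across all sites.

```javascript
// Invented stand-in for a chrome.cookies.getAll({}) dump.
const cookieJar = [
  { domain: ".video.example", name: "SESSION_TOKEN", value: "eyJhbGciOi...", httpOnly: true },
  { domain: ".shop.example", name: "cart_id", value: "abc123", httpOnly: false },
  { domain: ".mail.example", name: "auth", value: "d4f9...", httpOnly: true },
];

// A malicious extension doesn't need your password: session cookies alone
// let an attacker replay your logged-in state. Note that httpOnly, which
// shields cookies from a page's own JavaScript, does NOT shield them from
// an extension holding the cookies permission.
function harvestSessions(cookies) {
  return cookies.filter((c) => /session|auth|token/i.test(c.name));
}

console.log(harvestSessions(cookieJar).map((c) => c.domain));
```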

Screen capture

Allows extensions to take screenshots of what you’re viewing. Some capture methods trigger a visible prompt, but tab capture does not: it silently records the visible browser tab, even on sensitive pages like banking or crypto dashboards.

Matt Frisbie:

“It just takes a screengrab and sends it off, and you will never know what’s happening.”

Web requests

This lets the extension monitor all your browser’s traffic, including data sent to and from websites. Even when that data travels over HTTPS, the extension sees it all in the clear: it can read form data, credit card details, everything.

Matt Frisbie:

“It’s basically a man-in-the-middle… I can see what you’re sending to stripe.com—even if their security is immaculate.”
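To make the "in the clear" point concrete: in a real extension, the string below would arrive via the `details.requestBody` of a `chrome.webRequest.onBeforeRequest` listener. That listener runs inside the browser, before TLS encrypts anything, so HTTPS offers no protection against it. The payload here is an invented example of a checkout form submission.

```javascript
// Invented example of an intercepted form submission body.
const interceptedBody = "name=Jane+Doe&card=4242424242424242&cvc=123";

// Decoding it takes one standard call:
function readForm(body) {
  return Object.fromEntries(new URLSearchParams(body));
}

const fields = readForm(interceptedBody);
console.log(fields.card); // the full card number, never encrypted from the extension's view
```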

Web navigation

Provides a live feed of your browsing behavior—what pages you visit, how you get there, and when.

Keystroke logging

Records everything you type—searches, passwords, messages—without needing any special permissions. All it takes is a content script, which runs invisibly on websites.

Matt Frisbie:

“It’s incredibly dangerous and very easy to do.”
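Here is how easy. This is a minimal sketch of a content-script keylogger: in a real extension the handler would be registered with `document.addEventListener("keydown", ...)` on every page the content script matches. Since there is no page here, synthetic events are fed in so the mechanism is visible.

```javascript
const captured = [];

// In a content script: document.addEventListener("keydown", onKeydown);
function onKeydown(event) {
  captured.push(event.key); // every search, password, and message, key by key
  // a real attacker would periodically batch-send `captured` to a remote server
}

// Simulate a user typing a password:
for (const key of ["h", "u", "n", "t", "e", "r", "2"]) {
  onKeydown({ key });
}

console.log(captured.join("")); // "hunter2"
```

Note that nothing in this requires a named permission prompt: matching a content script to a site is enough.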

Input capture

Watches for changes in form fields, allowing extensions to steal autofilled passwords or credit card numbers—even if you don’t type anything.

Matt Frisbie:

“Anytime an input changes—login box, search bar, credit card entry—this extension can capture what’s changed.”
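A sketch of that input-capture idea. In a page this would hang off the `input`/`change` events or a polling loop over `document.querySelectorAll("input")`; here the field is a plain object so the diffing logic can run standalone. All values are invented.

```javascript
// A stand-in for a form field on a page.
const field = { name: "cc-number", value: "" };
const stolen = [];

// Record any change to the field's value, whether typed or autofilled.
function checkField(f, lastValue) {
  if (f.value !== lastValue) stolen.push({ field: f.name, value: f.value });
  return f.value;
}

// Simulate the browser autofilling the field: no keystrokes involved.
let last = checkField(field, "");
field.value = "4242 4242 4242 4242";
last = checkField(field, last);

console.log(stolen);
```

This is why autofill does not protect you from a hostile extension: the value lands in the field either way.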

Geolocation

Extensions can’t access your location in the background. But they can render a user interface—like a popup window—and collect your location when you interact with it. If you’ve granted the extension geolocation permission, it can capture your location every time you open that popup.

Even sneakier? Extensions can piggyback off websites that already have location access. If you’ve allowed a site like maps.google.com or hulu.com to use your location, an extension running on that site can silently grab it—no popup required.

Matt Frisbie:

“If the user goes to maps.google.com and they’ve previously said maps.google.com can read my location… then the extension can piggyback on that and grab their location. No pop-ups generated.”
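A sketch of the piggybacking Frisbie describes. In a real content script this would be a direct call to `navigator.geolocation.getCurrentPosition()`; because the script runs in the page's context, it inherits whatever grant the user already gave the site. The stub below, with invented coordinates, stands in for a page that already has location permission, so the flow can run outside a browser.

```javascript
// Stub for the navigator object of a page the user already granted location access.
const navigatorStub = {
  geolocation: {
    // invented coordinates; a granted page returns the real ones, with no prompt
    getCurrentPosition: (ok) => ok({ coords: { latitude: 40.7, longitude: -74.0 } }),
  },
};

let grabbed = null;
navigatorStub.geolocation.getCurrentPosition((pos) => {
  grabbed = pos.coords; // silently collected; the user never sees a popup
});

console.log(grabbed);
```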

Other piggybacking

If you’ve granted a site permission—like location, notifications, or potentially even camera and microphone—an extension running on that same site can sometimes piggyback off that access and silently collect the same data.

Matt Frisbie:

“It is actually possible to piggyback off the page’s permissions. … It really shouldn’t work that way.”

So… how do you protect yourself?

Here are some smart rules to follow:

  • Understand permissions
    Know what you’re granting access to, and what that permission might be capable of.
  • Be careful granting any permissions
    Whether it’s a browser setting, a site request, or an extension prompt, even a single permission can open the door to surveillance.
  • Use extensions sparingly
    The more extensions you install, the larger your attack surface—and the more unique your browser fingerprint becomes.
  • Use a privacy-first browser instead
    Browsers like Brave build privacy protections—like ad and tracker blocking—directly into the browser itself, so you don’t need extensions just to stay private.
  • Follow the principle of least privilege
    Only allow an extension to run when you click it, instead of “on all websites.”
  • Use code review tools
    Sites like Extension Total and Secure Annex can help you vet extensions before you install them.

Takeaway

We all want our browser to be faster, cleaner, and more functional. Extensions can help—but they can also turn into powerful surveillance tools. Even a single line of malicious code, slipped in through an update or new owner, can put your most sensitive information at risk.

So before you install that next extension, ask yourself:
Do I really trust this extension not to be hacked, sold, or misused—and is the extra risk worth it?

Stay sharp. Stay private. Stay safe out there.

 

Yours in privacy,
Naomi

Naomi Brockwell is a privacy advocate and professional speaker, MC, interviewer, producer, and podcaster specialising in blockchain, cryptocurrency, and economics. She runs the NBTV channel on YouTube.

TNT is truly independent!

We don’t have a billionaire owner, and our unique reader-funded model keeps us free from political or corporate influence. This means we can fearlessly report the facts and shine a light on the misdeeds of those in power.

Consider a donation to keep our independent journalism running…

Spotify fills playlists with fake music – while CEO invests millions in military AI

The future of AI

Published yesterday 15:55
– By Editorial Staff
Spotify CEO Daniel Ek accused of diverting artist royalties to military AI development.
3 minute read

Swedish streaming giant Spotify promotes anonymous pseudo-musicians and computer-generated music to avoid paying royalties to real artists, according to a new book by music journalist Liz Pelly.

Meanwhile, criticism grows against Spotify CEO Daniel Ek, who recently invested over €600 million in a company developing AI technology for future warfare.

In the book Mood Machine: The Rise of Spotify and the Costs of the Perfect Playlist, Liz Pelly reveals that Spotify has long been running a secret internal program called Perfect Fit Content (PFC). The program creates cheap, generic background music – often called “muzak” – through a network of production companies with ties to Spotify. This music is then placed in Spotify’s popular playlists, often without crediting any real artists.

The program was tested as early as 2010 and is described by Pelly as Spotify’s most profitable strategy since 2017.

“But it also raises worrying questions for all of us who listen to music. It puts forth an image of a future in which – as streaming services push music further into the background, and normalize anonymous, low-cost playlist filler – the relationship between listener and artist might be severed completely”, Pelly writes.

By 2023, the PFC program controlled hundreds of playlists. More than 150 of them – with names like Deep Focus, Cocktail Jazz, and Morning Stretch – consisted entirely of music produced within PFC.

“Only soulless AI music will remain”

A jazz musician told Pelly that Spotify asked him to create an ambient track for a few hundred dollars as a one-time payment. However, he couldn’t retain the rights to the music. When the track later received millions of plays, he realized he had likely been deceived.

Social media criticism has been harsh. One user writes: “In a few years, only soulless AI music will remain. It’s an easy way to avoid paying royalties to anyone.”

“I deleted Spotify and cancelled my subscription”, comments another.

Spotify has previously faced criticism for similar practices. The Guardian reported in February that the company’s Discovery Mode system allows artists to gain more visibility – but only if they agree to receive 30 percent less payment.

Spotify’s CEO invests in AI for warfare

Meanwhile, CEO Daniel Ek has faced severe criticism for investing over €600 million through his investment firm Prima Materia in the German AI company Helsing. The company develops software for drones, fighter aircraft, submarines, and other military systems.

– The world is being tested in more ways than ever before. That has sped up the timeline. There’s an enormous realisation that it’s really now AI, mass and autonomy that is driving the new battlefield, Ek commented in an interview with Financial Times.

With this investment, Ek has also become chairman of Helsing. The company is working on a project called Centaur, where artificial intelligence will be used to control fighter aircraft.

The criticism was swift. Australian producer Bluescreen explained in an interview with music site Resident Advisor why he chose to leave Spotify – a decision several other music creators have also made.

– War is hell. There’s nothing ethical about it, no matter how you spin it. I also left because it became apparent very quickly that Spotify’s CEO, as all billionaires, only got rich off the exploitation of others.

Competitor chooses different path

Spotify has previously been questioned for its proximity to political power. The company donated $150,000 to Donald Trump’s inauguration fund in 2017 and hosted an exclusive brunch the day before the ceremony.

While Spotify is heavily investing in AI-generated music and voice-controlled DJs, competitor SoundCloud has chosen a different path.

– We do not develop AI tools or allow third parties to scrape or use SoundCloud content from our platform for AI training purposes, explains communications director Marni Greenberg.

– In fact, we implemented technical safeguards, including a ‘no AI’ tag on our site to explicitly prohibit unauthorised use.

FUTO – the obvious choice for privacy-friendly voice and text input on mobile devices

Advertising partnership with Teuton Systems

Ditch Google's input apps and keep what you type and say on your phone.

Published yesterday 12:16
3 minute read

In our series about open, surveillance-free apps, we take a closer look at FUTO Voice Input and FUTO Keyboard – two apps that together challenge the established alternatives for voice input and keyboards on mobile devices. Most smartphone users are accustomed to dictating text with Google’s voice input or typing on standard keyboards like Gboard or SwiftKey.

However, few consider that these popular tools often collect what you privately say and type, sending it to tech giants. The FUTO team emphasizes that their solution eliminates this problem entirely – everything runs locally on the device, with no data ever leaving the phone and no connection required.

Here’s what the FUTO apps offer:

  • Privacy focus: FUTO apps run completely offline – no data is sent to the cloud.
  • Full functionality: Swipe typing, text suggestions, autocorrection, and voice-to-text with punctuation – everything works without an internet connection.
  • High precision: Offline dictation using an advanced AI model (OpenAI Whisper) delivers fast, accurate transcription.
  • Multilingual support: Support for many languages and continuous improvements via the open-source community.

FUTO Keyboard

On the keyboard front, FUTO Keyboard impresses by delivering modern convenience without compromising privacy. Unlike conventional keyboards that constantly transmit user data, FUTO requires neither network access nor cloud services – yet it offers features on par with the best.

You can swipe words with your finger across the screen, get relevant text suggestions and automatic spell correction, and customize the theme to your liking – all while the app consistently refuses to send a single keystroke to any external server (all data stays with you). FUTO Keyboard also integrates FUTO Voice Input through a built-in microphone button, allowing ‘speech to text’ to be activated from the same interface.

FUTO Voice Input

For voice input, there’s FUTO Voice Input, which lets you dictate text directly into apps like messages or notes – entirely without an internet connection. All processing happens locally using a compact language model, meaning no audio needs to be sent anywhere to become text. According to users who have compared it with Google’s cloud-based solution, FUTO keeps pace with it and can even surpass it in both speed and grammatical accuracy.

An enthusiastic tester reported that FUTO provided a completely new experience – no delays or strange autocorrections that he previously suffered from with Gboard. This means you can safely speak freely and see the text appear almost immediately, without worrying about unauthorized “listening” on the other end.

Ongoing development and alternatives

Despite FUTO Keyboard being young, it’s already surprisingly capable. The interface feels polished and user-friendly, and the amount of features makes it almost comparable to established alternatives. Currently, text input works excellently in English, while support for smaller languages like Swedish is still being refined. However, development pace is high and the team behind FUTO has announced improvements specifically to autocorrection and expanded language support in upcoming updates. Moreover, global collaboration is encouraged: since the source code is open, engaged developers and users can contribute improvements and new language data to the project.

Among free alternatives, there’s Sayboard, an open source keyboard using Vosk for speech recognition. For pure keyboards, there’s AnySoftKeyboard and FlorisBoard, which are excellent from a privacy perspective but lack some of the advanced features that FUTO offers in one package (especially built-in voice input).

An essential part of the Matrix Phone ecosystem

FUTO Voice Input and Keyboard demonstrate that you can combine the best of both worlds: the convenience of smart text and voice functions, and the security of keeping your data private. For users of Teuton Systems’ Matrix Phone (GrapheneOS phone), these apps come pre-installed as part of the privacy-secure ecosystem. But they’re available to everyone – via Google Play or F-Droid – and constitute a highly recommended switch for anyone who values their privacy in everyday life.

As a tech writer recently put it: you no longer need to choose between functionality and security – with FUTO you get both without compromises.

Swedish regional healthcare app run by chatbot makes serious errors

Published 30 June 2025
– By Editorial Staff
In one documented case, the app classified an elderly man's symptoms as mild – he died the following day.
2 minute read

An AI-based healthcare app used by the Gävleborg Regional Healthcare Authority in Sweden is now under scrutiny following serious assessment errors. In one notable case, an elderly man’s condition was classified as mild – he died the following day.

Healthcare staff are raising alarms about deficiencies deemed to threaten patient safety, and the app is internally described as a “disaster”.

Min vård Gävleborg (My Healthcare Gävleborg) is used when residents seek digital healthcare or call 1177 (Sweden’s national healthcare advice line). A chatbot asks questions to make an initial medical assessment and then refers the patient to an appropriate level of care. However, according to several doctors in the region, the system is not functioning safely enough.

In one documented case, the app classified an elderly man’s symptoms as mild. He died the following day. An incident report shows that the prioritization was incorrect, although it couldn’t be established that this directly caused the death.

In another case, an inmate at the Gävle Correctional Facility sought care for breathing difficulties – but was referred to a chat with a doctor in Ljusdal, instead of being sent to the emergency room.

– She should obviously have been sent to the emergency room, says Elisabeth Månsson Rydén, a doctor in Ljusdal and board member of the Swedish Association of General Medicine in Gävleborg, speaking to the tax-funded SVT.

“Completely insane”

Criticism from healthcare staff is extensive. Several doctors warn that the app underestimates serious symptoms, which could have life-threatening consequences. Meanwhile, there are examples of the opposite – where patients are given too high priority – which risks unnecessarily burdening healthcare services and causing delays for severely ill patients.

– Doctors have expressed in our meetings that Min vård Gävleborg is a disaster. This is completely insane, says Månsson Rydén.

Despite the death incident, Region Gävleborg has chosen not to report the event to either the Health and Social Care Inspectorate (IVO) or the Swedish Medical Products Agency.

– We looked at the case and decided it didn’t need to be reported, says Chief Medical Officer Agneta Larsson.

Other regions have reacted

The app was developed by Platform24, a Swedish company whose digital systems are used in several regions. In Västra Götaland Region, the app was paused after a report showed that three out of ten patients were assessed incorrectly. In Region Östergötland, similar deficiencies have led to a report to the Swedish Medical Products Agency. An investigation is ongoing.

Despite this, Agneta Larsson defends the version used in Gävleborg:

– We have reviewed our own system, and we cannot see these errors.

Platform24 has declined to be interviewed, but in a written response to Swedish Television, the company’s Medical Director Stina Perdahl defends the app’s basic principles.

“For patient safety reasons, the assessment is deliberately designed to be a bit more cautious initially”, it is claimed.

Your TV is spying on you

Your TV is taking snapshots of everything you watch.

Published 28 June 2025
– By Naomi Brockwell
6 minute read

You sit down to relax, put on your favorite show, and settle in for a night of binge-watching. But while you’re watching your TV… your TV is watching you.

Smart TVs take constant snapshots of everything you watch. Sometimes hundreds of snapshots a second.

Welcome to the future of “entertainment”.

What’s actually happening behind the screens?

Smart TVs are just modern TVs. It’s almost impossible to buy a non-smart TV anymore. And they’re basically just oversized internet-connected computers. They come preloaded with apps like Amazon Prime Video, YouTube, and Hulu.

They also come preloaded with surveillance.

A recent study from UC Davis researchers tested TVs from Samsung and LG, two of the biggest players in the market, and came across something known as ACR: Automatic Content Recognition.

What is ACR and why should you care?

ACR is a surveillance technology built into the operating systems of smart TVs. This system takes continuous snapshots of whatever is playing to identify exactly what is on the screen.

LG’s privacy policy states they take a snapshot every 10 milliseconds. That’s 100 per second.
Samsung does it every 500 milliseconds.

From these snapshots, the TV generates a content fingerprint and sends it to the manufacturer. That fingerprint is then matched against a massive database to figure out exactly what you’re watching.

Let that sink in. Your television is taking snapshots of everything you’re watching.

And it doesn’t just apply to shows you’re watching on the TV. Even if you plug in your laptop and use the TV as a dumb monitor, it’s still taking snapshots.

  • Zoom calls
  • Emails
  • Banking apps
  • Personal photos

Audio snapshots, video snapshots, or sometimes both are being collected of all of it.

Currently, the way ACR works, the snapshots themselves are not necessarily sent off-device, but your TV is still collecting them. And we all know that AI is getting better and better. It’s now possible for AI to identify everything in a video or photo: faces, emotions, background details.

As the technology continues to improve, we should presume that TVs will move from fingerprint-based ACR to automatic AI-driven content recognition.

As Toby Lewis from Darktrace told The Guardian:

“Facial recognition, speech-to-text, content analysis—these can all be used together to build an in-depth picture of an individual user”.

This is where we’re headed.

This data doesn’t exist in a vacuum

TV manufacturers don’t just sit on this data. They monetize it.

Viewing habits are combined with data from your other devices: phones, tablets, smart fridges, wearables. Then it’s sold to third parties. Advertisers. Data brokers. Political campaigns.

One study found that almost every TV they tested contacted Netflix servers, even when no Netflix account was configured.

So who’s getting your data?

We don’t know. That’s the point.

How your data gets weaponized

Let’s say your TV learns that:

  • You watch sports every Sunday
  • You binge true crime on weekdays
  • You play YouTube fashion hauls in the afternoons

These habits are then tied to a profile of your IP address, email, and household.

Now imagine that profile combined with:

  • Your Amazon purchase history
  • Your travel patterns
  • Your social media behavior
  • Your voting record

That’s the real goal: total psychological profiling. Knowing not just what you do, but what you’re likely to do. What you’ll buy, how you’ll vote, who you’ll trust.

In other words, your smart TV isn’t just spying.

It’s helping others manipulate you.

Why didn’t I hear about this when I set up my TV?

Because they don’t want you to know.

When TV manufacturers first started doing this, they never informed users. The practice slipped quietly by.

A 2017 FTC lawsuit revealed that Vizio was collecting viewing data from 11 million TVs and selling it without ever getting user consent.

These days, companies technically include “disclosures” in their Terms of Service. But they’re buried under vague names like:

  • “Viewing Information Services”
  • “Live Plus”
  • “Personalized Experiences”

Have you ever actually read those menus? Didn’t think so.

These aren’t written to inform you. They’re written to shield corporations from lawsuits.

If users actually understood what was happening, many would opt out entirely. But the system is designed to confuse, hiding the truth that surveillance devices have entered our living rooms and bedrooms without our realizing it.

Researchers are being silenced

Not only are these systems intentionally opaque and confusing, companies design them to discourage scrutiny.

And when researchers try to investigate these systems, they hit two major roadblocks:

  1. Technical – Jailbreaking modern smart TVs is nearly impossible. Their systems are locked down, and the code is proprietary.
  2. Legal – Researchers who attempt to reverse-engineer modern-day tech risk being sued under the Computer Fraud and Abuse Act (CFAA), a vague and outdated law that doesn’t distinguish between malicious actors and researchers trying to inform the public.

As a result, most of what we know about these TVs comes from inference. Guessing what’s happening by watching network traffic, since direct access is often blocked.

That means most of this surveillance happens in the dark. Unchallenged, unverified, and largely unnoticed.

We need stronger protections for privacy researchers, clearer disclosures for users, and real pressure on companies to stop hiding behind complexity.

Because if we can’t see what the tech is doing, we can’t choose to opt out.

What you can do

Here are the most effective steps you can take to protect your privacy:

1. Don’t connect your TV to the internet.
If you keep the Wi-Fi off, the TV can’t send data to manufacturers or advertisers. Use a laptop or trusted device for streaming instead. If the TV stays offline forever, the data it collects never leaves the device.

2. Turn off ACR settings.
Dig through the menus and disable everything related to viewing info, advertising, and personalization. Look for settings like “Live Plus” or “Viewing Information Services.” Be thorough. These options are often buried.

3. Use dumb displays.
It’s almost impossible to buy a non-smart TV today. The market is flooded with “smart” everything. But a few dumb projectors still exist. And some monitors are safer too, though they aren’t yet available in TV sizes.

4. Be vocal.
Ask hard questions when buying devices. Demand that manufacturers disclose how they use your data. Let them know that privacy matters to you.

5. Push for CFAA reform.
The CFAA is being weaponized to silence researchers who try to expose surveillance. If we want to understand how our tech works, researchers must be protected, not punished. We need to fight back against these chilling effects and support organizations doing this work.

The Ludlow Institute is now funding researchers who reverse-engineer surveillance tech. If you’re a researcher, or want to support one, get in touch.

This is just one piece of the puzzle

Smart TVs are among the most aggressive tracking devices in your home. But they’re not alone. Nearly every “smart” device has the same capabilities to build a profile on you: phones, thermostats, lightbulbs, doorbells, fridges.

This surveillance has been normalized. But it’s not normal.

We shouldn’t have let faceless corporations and governments into our bedrooms and living rooms. But now that they’re here, we have to push back.

That starts with awareness. Then it’s up to us to make better choices and help others do the same.

Let’s take back our homes.
Let’s stop normalizing surveillance.

Because privacy isn’t extreme.
It’s common sense.

 

Yours in Privacy,
Naomi


Our independent journalism needs your support!
Consider a donation.

You can donate any amount of your choosing, one-time payment or even monthly.
We appreciate all of your donations to keep us alive and running.

Don’t miss another article!

Sign up for our newsletter today!

Get uncensored news – free from industry interests and political correctness – from the Polaris of Enlightenment, every week.