Connected TVs – “A Trojan Horse” in the home

Published 3 November 2024
– By Editorial Staff

The companies behind the streaming industry, including manufacturers of connected TVs, streaming sticks and streaming service providers, have become a “privacy nightmare”, according to a report by the US-based Center for Digital Democracy (CDD).

The report, How TV Watches Us: Commercial Surveillance in the Streaming Era, provides a detailed overview of streaming services and streaming hardware. It describes how companies such as Samsung and LG have developed increasingly sophisticated technologies to collect user data, which is then used to produce targeted advertising.

The organization points out that today’s connected TVs (CTVs) and streaming sticks are “surveillance systems” that have “long undermined privacy and consumer protection”. Jeffrey Chester, co-author of the report and CDD’s executive director, describes it as a “privacy nightmare”.

– Not only does CTV operate in ways that are unfair to consumers, it is also putting them and their families at risk as it gathers and uses sensitive data about health, children, race, and political interests, Chester said in a statement reported by Ars Technica.

Misleading privacy policies

Beyond the rising subscription costs of streaming and the increasing prevalence of ads in the services, the growth of streaming comes at another “steep price”, according to the report. Among other things, the prevalence of “misleading” privacy policies is high, with minimal information available about, for example, companies’ data collection and tracking practices.

“Buying a smart TV set in today’s connected television marketplace is akin to bringing a digital Trojan Horse into one’s home”, the report says.

In connection with the report, CDD has sent letters to various US authorities, calling for stronger regulation of how data can be collected and used.

Your TV is spying on you

Your TV is taking snapshots of everything you watch.

Published 28 June 2025, 8:14
– By Naomi Brockwell

You sit down to relax, put on your favorite show, and settle in for a night of binge-watching. But while you’re watching your TV… your TV is watching you.

Smart TVs take constant snapshots of everything you watch. Sometimes as many as a hundred snapshots a second.

Welcome to the future of “entertainment”.

What’s actually happening behind the screens?

Smart TVs are just modern TVs. It’s almost impossible to buy a non-smart TV anymore. And they’re basically just oversized internet-connected computers. They come preloaded with apps like Amazon Prime Video, YouTube, and Hulu.

They also come preloaded with surveillance.

A recent study from UC Davis researchers tested TVs from Samsung and LG, two of the biggest players in the market, and came across something known as ACR: Automatic Content Recognition.

What is ACR and why should you care?

ACR is a surveillance technology built into the operating systems of smart TVs. This system takes continuous snapshots of whatever is playing to identify exactly what is on the screen.

LG’s privacy policy states they take a snapshot every 10 milliseconds. That’s 100 per second.
Samsung does it every 500 milliseconds.

From these snapshots, the TV generates a content fingerprint and sends it to the manufacturer. That fingerprint is then matched against a massive database to figure out exactly what you’re watching.
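
To make this concrete, here is a minimal sketch of how fingerprint-based recognition can work, assuming a simple perceptual “average hash” over individual frames. The hash, the reference database, and the matching threshold are all illustrative: real ACR systems use proprietary audio and video fingerprints matched on the manufacturer’s servers.

```python
# Minimal sketch of fingerprint-based content recognition (illustrative only).
# Assumes Pillow (pip install Pillow); real ACR fingerprints are proprietary.
from PIL import Image

def frame_fingerprint(frame: Image.Image) -> int:
    """Collapse a frame into a 64-bit perceptual 'average hash'."""
    small = frame.convert("L").resize((8, 8))   # grayscale 8x8 thumbnail
    pixels = list(small.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for px in pixels:                           # one bit per pixel:
        bits = (bits << 1) | int(px > mean)     # brighter or darker than average
    return bits

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two fingerprints."""
    return bin(a ^ b).count("1")

def identify(frame: Image.Image, reference_db: dict[int, str], max_dist: int = 10):
    """Match a frame's fingerprint against a database of known content."""
    fp = frame_fingerprint(frame)
    best = min(reference_db, key=lambda ref: hamming(fp, ref), default=None)
    if best is not None and hamming(fp, best) <= max_dist:
        return reference_db[best]               # e.g. the show and episode
    return None                                 # no confident match
```

At Samsung’s stated rate of one snapshot every 500 milliseconds, a four-hour evening in front of the TV produces nearly 29,000 such fingerprints.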

Let that sink in. Your television is taking snapshots of everything you’re watching.

And it doesn’t just apply to shows you’re watching on the TV. Even if you plug in your laptop and use the TV as a dumb monitor, it’s still taking snapshots.

  • Zoom calls
  • Emails
  • Banking apps
  • Personal photos

Snapshots of all of it are being collected, as audio, video, or sometimes both.

Currently, the way ACR works, the snapshots themselves are not necessarily sent off-device, but your TV is still collecting them. And we all know that AI is getting better and better. It’s now possible for AI to identify everything in a video or photo: faces, emotions, background details.

As the technology continues to improve, we should presume that TVs will move from fingerprint-based ACR to automatic AI-driven content recognition.

As Toby Lewis from Darktrace told The Guardian:

“Facial recognition, speech-to-text, content analysis—these can all be used together to build an in-depth picture of an individual user”.

This is where we’re headed.

This data doesn’t exist in a vacuum

TV manufacturers don’t just sit on this data. They monetize it.

Viewing habits are combined with data from your other devices: phones, tablets, smart fridges, wearables. Then it’s sold to third parties. Advertisers. Data brokers. Political campaigns.

One study found that almost every TV they tested contacted Netflix servers, even when no Netflix account was configured.

So who’s getting your data?

We don’t know. That’s the point.

How your data gets weaponized

Let’s say your TV learns that:

  • You watch sports every Sunday
  • You binge true crime on weekdays
  • You play YouTube fashion hauls in the afternoons

These habits are then tied to a profile of your IP address, email, and household.

Now imagine that profile combined with:

  • Your Amazon purchase history
  • Your travel patterns
  • Your social media behavior
  • Your voting record

That’s the real goal: total psychological profiling. Knowing not just what you do, but what you’re likely to do. What you’ll buy, how you’ll vote, who you’ll trust.

In other words, your smart TV isn’t just spying.

It’s helping others manipulate you.
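
The joining step itself is trivial once two data sets share an identifier. A purely illustrative sketch, with made-up data:

```python
# Purely illustrative: joining two data sets on a shared identifier.
# All names and records here are made up.
viewing_habits = {
    "jane@example.com": ["sports (Sundays)", "true crime (weekdays)", "fashion hauls"],
}
purchase_history = {
    "jane@example.com": ["running shoes", "baby monitor", "DNA ancestry kit"],
}

# The join is one line per shared key; the harm comes from what the
# merged record implies about health, family, and politics.
profiles = {
    email: {"watches": viewing_habits[email], "buys": purchase_history[email]}
    for email in viewing_habits.keys() & purchase_history.keys()
}

print(profiles["jane@example.com"])
```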

Why didn’t I hear about this when I set up my TV?

Because they don’t want you to know.

When TV manufacturers first started doing this, they never informed users. The practice slipped quietly by.

A 2017 FTC lawsuit revealed that Vizio was collecting viewing data from 11 million TVs and selling it without ever getting user consent.

These days, companies technically include “disclosures” in their Terms of Service. But they’re buried under vague names like:

  • “Viewing Information Services”
  • “Live Plus”
  • “Personalized Experiences”

Have you ever actually read those menus? Didn’t think so.

These aren’t written to inform you. They’re written to shield corporations from lawsuits.

If users actually understood what was happening, many would opt out entirely. But the system is designed to confuse, hiding the truth that surveillance devices have entered our living rooms and bedrooms without us realizing it.

Researchers are being silenced

Not only are these systems intentionally opaque and confusing; companies also design them to discourage scrutiny.

And when researchers try to investigate these systems, they hit two major roadblocks:

  1. Technical – Jailbreaking modern smart TVs is nearly impossible. Their systems are locked down, and the code is proprietary.
  2. Legal – Researchers who attempt to reverse-engineer modern-day tech risk being sued under the Computer Fraud and Abuse Act (CFAA), a vague and outdated law that doesn’t distinguish between malicious actors and researchers trying to inform the public.

As a result, most of what we know about these TVs comes from inference: guessing what’s happening by watching network traffic, since direct access is often blocked.
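
For example, a researcher (or a curious owner) can passively log every domain a TV looks up without touching the TV itself. A minimal sketch, assuming the scapy packet library and a placeholder address for the TV:

```python
# Sketch of the traffic-watching approach: log every DNS lookup a smart TV
# makes. Requires scapy (pip install scapy) and a vantage point that can see
# the TV's packets, e.g. the router or a mirrored switch port. Run as root.
from scapy.all import DNSQR, sniff

TV_IP = "192.168.1.50"  # placeholder: your TV's address on the LAN

def log_dns(pkt):
    # Each DNS question reveals an endpoint the TV is about to contact:
    # ad servers, analytics hosts, streaming CDNs, and so on.
    if pkt.haslayer(DNSQR):
        print(pkt[DNSQR].qname.decode(errors="replace"))

sniff(filter=f"udp port 53 and src host {TV_IP}", prn=log_dns, store=False)
```

Findings like the Netflix-contact study mentioned above come from exactly this kind of observation: even when the traffic itself is encrypted, the destinations are visible.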

That means most of this surveillance happens in the dark. Unchallenged, unverified, and largely unnoticed.

We need stronger protections for privacy researchers, clearer disclosures for users, and real pressure on companies to stop hiding behind complexity.

Because if we can’t see what the tech is doing, we can’t choose to opt out.

What you can do

Here are the most effective steps you can take to protect your privacy:

1. Don’t connect your TV to the internet.
If you keep the Wi-Fi off, the TV can’t send data to manufacturers or advertisers. Use a laptop or trusted device for streaming instead. If the TV stays offline forever, the data it collects never leaves the device.

2. Turn off ACR settings.
Dig through the menus and disable everything related to viewing info, advertising, and personalization. Look for settings like “Live Plus” or “Viewing Information Services.” Be thorough. These options are often buried.

3. Use dumb displays.
It’s almost impossible to buy a non-smart TV today. The market is flooded with “smart” everything. But a few dumb projectors still exist, and some computer monitors are a safer choice too, though they don’t yet come in TV sizes.

4. Be vocal.
Ask hard questions when buying devices. Demand that manufacturers disclose how they use your data. Let them know that privacy matters to you.

5. Push for CFAA reform.
The CFAA is being weaponized to silence researchers who try to expose surveillance. If we want to understand how our tech works, researchers must be protected, not punished. We need to fight back against these chilling effects and support organizations doing this work.

The Ludlow Institute is now funding researchers who reverse-engineer surveillance tech. If you’re a researcher, or want to support one, get in touch.

This is just one piece of the puzzle

Smart TVs are among the most aggressive tracking devices in your home. But they’re not alone. Nearly every “smart” device has the same capabilities to build a profile on you: phones, thermostats, lightbulbs, doorbells, fridges.

This surveillance has been normalized. But it’s not normal.

We shouldn’t have let faceless corporations and governments into our bedrooms and living rooms. But now that they’re here, we have to push back.

That starts with awareness. Then it’s up to us to make better choices and help others do the same.

Let’s take back our homes.
Let’s stop normalizing surveillance.

Because privacy isn’t extreme.
It’s common sense.

 

Yours in Privacy,
Naomi

Naomi Brockwell is a privacy advocate and professional speaker, MC, interviewer, producer, and podcaster, specialising in blockchain, cryptocurrency, and economics. She runs the NBTV channel on YouTube.

Tech giants’ executives become US military officers – gain power over future warfare

Published 26 June 2025
– By Editorial Staff

Four senior executives from tech giants Meta, Palantir, and OpenAI have recently been sworn into the US Army Reserve with the rank of lieutenant colonel – an officer rank that normally requires over 20 years of active military service.

The group is part of a new initiative called Detachment 201, aimed at transforming the American military by integrating advanced technologies such as drones, robotics, augmented reality (AR), and AI support.

The new recruits are:

  • Shyam Sankar, Chief Technology Officer (CTO) of Palantir
  • Andrew Bosworth, Chief Technology Officer of Meta
  • Kevin Weil, Chief Product Officer (CPO) of OpenAI
  • Bob McGrew, former Research Director at OpenAI

According to the technology platform Take Back Our Tech (TBOT), which monitors these developments, these are not symbolic appointments.

“These aren’t random picks. They’re intentional and bring representation and collaboration from the highest level of these companies”, writes founder Hakeem Anwar.

Meta and Palantir on the battlefield

Although the newly appointed officers must formally undergo physical training and weapons instruction, they are expected to participate primarily in digital defense. Their mission is to help the army adapt to a new form of warfare where technology takes center stage.

“The battlefield is truly transforming and so is the government”, notes Anwar.

According to Anwar, the recruitment of Palantir’s CTO could mean the military will start using the company’s Gotham platform as standard. Gotham is a digital interface that collects intelligence and monitors targets through satellite imagery and video feeds.

Meta’s CTO is expected to contribute to integrating data from platforms like Facebook, Instagram, and WhatsApp, which according to TBOT could be connected to military surveillance systems. These platforms are used by billions of people worldwide and contain vast amounts of movement, communication, and behavioral data.

“The activities, movements, and communications from these apps could be integrated into this surveillance network”, writes Anwar, adding:

“It’s no wonder why countries opposed to the US like China have been banning Meta products”.

Leaked project reveals AI initiative for entire government apparatus

Regarding OpenAI’s role, Anwar suggests that Kevin Weil and Bob McGrew might design an AI interface for the army, where soldiers would have access to AI chatbots to support strategy and field tactics.

Just as Detachment 201 was made public, a separate AI initiative within the US government leaked. The website ai.gov, still under development, reveals a plan to equip the entire federal administration with AI tools – from code assistants to AI chatbots for internal use.

TBOT notes that the initiative relies on AI models from OpenAI, Google, and Anthropic. The project is led by the General Services Administration, under former Tesla engineer Thomas Shedd, who has also been involved in the Department of Government Efficiency (DOGE).

“The irony? The website itself was leaked during development, demonstrating that AI isn’t foolproof and can’t replace human expertise”, comments Anwar.

According to the tech site’s founder, several federal employees are critical of the initiative, concerned about insufficient safeguards.

“Without proper safeguards, diving head first into AI could create new security vulnerabilities, disrupt operations, and further erode privacy”, he writes.

Deepfakes are getting scary good

Why your next “urgent” call or ad might be an AI scam.

Published 22 June 2025
– By Naomi Brockwell

This week I watched a compilation of video clips that looked absolutely real. Police officers, bank managers, disaster relief workers, product endorsements… but every single one was generated by AI. None of the people, voices, or backdrops ever existed.

It’s fascinating… and chilling. Because the potential for misuse is growing fast, and most people aren’t ready.

This same technology already works in real time. Someone can join a Zoom call, flip a switch, and suddenly look and sound like your boss, your spouse, or your favorite celebrity. That opens the door to a new generation of scams, and people everywhere are falling for them.

The old scripts, supercharged by AI

“Ma’am, I’m seeing multiple threats on your computer. I need remote access right now to secure your files”.
Tech-support scams used to rely on a shaky phone line and a thick accent. Now an AI voice clone mimics a calm AppleCare rep, shares a fake malware alert, and convinces you to install remote-control software. One click later, they’re digging through your files and draining your bank account.

“We’ve detected suspicious activity on your account. Please verify your login”.
Phishing emails are old news. But now people are getting FaceTime calls that look like their bank manager. The cloned face reads off the last four digits of your card, then asks you to confirm the rest. That’s all they need.

“Miracle hearing aids are only $14.99 today. Click to order”.
Fake doctors in lab coats (generated by AI) are popping up in ads, selling junk gadgets. The product either never arrives, or the site skims your card info.

“We just need your Medicare number to update your benefits for this year.”
Seniors are being targeted with robocalls that splice in their grandchild’s real voice. Once the scammer gets your Medicare ID, they start billing for fake procedures that mess up your records.

“Congratulations, you’ve won $1,000,000! Just pay the small claiming fee today”.
Add a fake newscaster to an old lottery scam, and suddenly it feels real. Victims rush to “claim their prize” and wire the fee… straight to a fraudster.

“We’re raising funds for a sick parishioner—can you grab some Apple gift cards?”
Community members are seeing AI-generated videos of their own pastor asking for help. Once the card numbers are sent, they’re gone.

“Can you believe these concert tickets are so cheap?”
AI-generated influencers post about crazy ticket deals. Victims buy, receive a QR code, and show up at the venue, only to find the code has already been used.

“Help our disaster-relief effort.”
Hours after a real hurricane or earthquake, fake charity appeals start circulating. The links look urgent and heartfelt, and route donations to crypto wallets that vanish.

Why we fall for it and what to watch out for

High pressure
Every scammer plays the same four notes: fear, urgency, greed, and empathy. They hit you with a problem that feels like an emergency, offer a reward too good to miss, or ask for help in a moment of vulnerability. These scams only work if you rush. That’s their weak spot. If something feels high-pressure, pause. Don’t make decisions in panic. You can always ask someone you trust for advice.

Borrowed credibility
Deepfakes hijack your instincts. When someone looks and sounds like your boss, your parent, or a celebrity, your brain wants to trust them. But just because you recognize the face doesn’t mean it’s real. Don’t assume a familiar voice or face is proof of identity. Synthetic media can be convincing enough to fool even close friends.

Trusted platforms become delivery trucks
We tend to relax when something comes through a trusted source — like a Zoom call, a blue-check account, or an ad on a mainstream site. But scammers exploit that trust. Just because something shows up on a legitimate platform doesn’t mean it’s safe. The platform’s credibility rubs off on the fake.

Deepfakes aren’t just a technology problem, they’re a human one. For most of history, our eyes and ears were reliable lie detectors. Now, that shortcut is broken. And until our instincts catch up, skepticism is your best defense.

How to stay one step ahead

  1. Slow the game down.
    Scammers rely on speed. Hang up, close the tab, take a breath. If it’s real, it’ll still be there in five minutes.
  2. Verify on a second channel.
    If your “bank” or “boss” calls, reach out using a number or app you already trust. Don’t rely on the contact info they provide.
  3. Lock down big moves.
    Use two-factor authentication, passphrases, or code words for any important accounts or transactions (a minimal sketch of the code-word idea follows this list).
  4. Educate your circle.
    Most deepfake losses happen because someone else panicked. Talk to your family, especially seniors. Share this newsletter. Report fake ads. Keep each other sharp.
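
A code word can be as low-tech as a phrase agreed in person. Here is a minimal sketch of a digital variant, a time-based one-time code (TOTP, RFC 6238) derived from a secret that two people share in advance; the secret below is an illustrative placeholder.

```python
# Minimal TOTP (RFC 6238) sketch: both parties share a secret in person, and
# on a suspicious call each can ask the other for the current 6-digit code.
# Standard library only; the secret below is an illustrative placeholder.
import base64
import hmac
import struct
import time

SHARED_SECRET = base64.b32decode("JBSWY3DPEHPK3PXP")  # agreed offline beforehand

def totp(secret: bytes, period: int = 30, digits: int = 6) -> str:
    """Derive the current one-time code from the shared secret."""
    counter = int(time.time()) // period            # 30-second time window
    mac = hmac.new(secret, struct.pack(">Q", counter), "sha1").digest()
    offset = mac[-1] & 0x0F                         # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10**digits).zfill(digits)

# A caller who looks and sounds right but can't read back the current code
# does not hold the secret, no matter how good the deepfake is.
print(totp(SHARED_SECRET))
```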

Many of these scams fall apart the moment you stop and think. The goal is always the same: get you to act fast. But you don’t have to play along.

Stay calm. Stay sharp. Stay skeptical.

 

Yours in privacy,
Naomi

Naomi Brockwell is a privacy advocate and professional speaker, MC, interviewer, producer, and podcaster, specialising in blockchain, cryptocurrency, and economics. She runs the NBTV channel on YouTube.

Tech company bankrupt – “advanced AI” was 700 Indians

Published 14 June 2025
– By Editorial Staff

An AI company that marketed itself as a technological pioneer – and attracted investments from Microsoft, among others – has gone bankrupt. In the aftermath, it has been revealed that the technology was largely based on human labor, despite promises of advanced artificial intelligence.

Builder.ai, a British startup formerly known as Engineer.ai, claimed that its AI assistant Natasha could build apps as easily as ordering pizza. But as early as 2019, the Wall Street Journal revealed that much of the coding was actually done manually by about 700 programmers in India.

Despite the allegations, Builder.ai secured over $450 million in funding from investors such as Microsoft, Qatar Investment Authority, IFC, and SoftBank’s DeepCore. At its peak, the company was valued at $1.5 billion.

In May 2025, founder and CEO Sachin Dev Duggal stepped down from his position, and when the new management took over, it emerged that the revelations made in 2019 were only the tip of the iceberg. For example, the company had reported revenues of $220 million in 2024, while the actual figures were $55 million. Furthermore, the company is suspected of inflating the figures through circular transactions and fake sales via “third-party resellers”, reports the Financial Times.

Following the new revelations, lenders froze the company’s account, forcing Builder.ai into bankruptcy. The company is now accused of so-called AI washing, which means that a company exaggerates or falsely claims that its products or services are powered by advanced artificial intelligence in order to attract investors and customers.

The company’s heavy promotion of “Natasha” as a revolutionary AI solution turned out to be a facade – behind the deceptive marketing ploy lay traditional, human-driven work and financial irregularities.
