Deepfakes are getting scary good

Why your next “urgent” call or ad might be an AI scam.

Published 22 June 2025
– By Naomi Brockwell
4 minute read

This week I watched a compilation of video clips that looked absolutely real. Police officers, bank managers, disaster relief workers, product endorsements… but every single one was generated by AI. None of the people, voices, or backdrops ever existed.

It’s fascinating… and chilling. Because the potential for misuse is growing fast, and most people aren’t ready.

This same technology already works in real time. Someone can join a Zoom call, flip a switch, and suddenly look and sound like your boss, your spouse, or your favorite celebrity. That opens the door to a new generation of scams, and people everywhere are falling for them.

The old scripts, supercharged by AI

“Ma’am, I’m seeing multiple threats on your computer. I need remote access right now to secure your files”.
Tech-support scams used to rely on a shaky phone line and a thick accent. Now an AI voice clone mimics a calm AppleCare rep, shares a fake malware alert, and convinces you to install remote-control software. One click later, they’re digging through your files and draining your bank account.

“We’ve detected suspicious activity on your account. Please verify your login”.
Phishing emails are old news. But now people are getting FaceTime calls that look like their bank manager. The cloned face reads off the last four digits of your card, then asks you to confirm the rest. That’s all they need.

“Miracle hearing aids are only $14.99 today. Click to order”.
Fake doctors in lab coats (generated by AI) are popping up in ads, selling junk gadgets. The product either never arrives, or the site skims your card info.

“We just need your Medicare number to update your benefits for this year.”
Seniors are being targeted with robocalls that splice in their grandchild’s real voice. Once the scammer gets your Medicare ID, they start billing for fake procedures that mess up your records.

“Congratulations, you’ve won $1,000,000! Just pay the small claiming fee today”.
Add a fake newscaster to an old lottery scam, and suddenly it feels real. Victims rush to “claim their prize” and wire the fee… straight to a fraudster.

“We’re raising funds for a sick parishioner—can you grab some Apple gift cards?”
Community members are seeing AI-generated videos of their own pastor asking for help. Once the card numbers are sent, they’re gone.

“Can you believe these concert tickets are so cheap?”
AI-generated influencers post about crazy ticket deals. Victims buy, receive a QR code, and show up at the venue, only to find the code has already been used.

“Help our disaster-relief effort.”
Hours after a real hurricane or earthquake, fake charity appeals start circulating. The links look urgent and heartfelt, and route donations to crypto wallets that vanish.

Why we fall for it and what to watch out for

High pressure
Every scammer plays the same four notes: fear, urgency, greed, and empathy. They hit you with a problem that feels like an emergency, offer a reward too good to miss, or ask for help in a moment of vulnerability. These scams only work if you rush. That’s their weak spot. If something feels high-pressure, pause. Don’t make decisions in panic. You can always ask someone you trust for advice.

Borrowed credibility
Deepfakes hijack your instincts. When someone looks and sounds like your boss, your parent, or a celebrity, your brain wants to trust them. But just because you recognize the face doesn’t mean it’s real. Don’t assume a familiar voice or face is proof of identity. Synthetic media can be convincing enough to fool even close friends.

Trusted platforms become delivery trucks
We tend to relax when something comes through a trusted source — like a Zoom call, a blue-check account, or an ad on a mainstream site. But scammers exploit that trust. Just because something shows up on a legitimate platform doesn’t mean it’s safe. The platform’s credibility rubs off on the fake.

Deepfakes aren’t just a technology problem, they’re a human one. For most of history, our eyes and ears were reliable lie detectors. Now, that shortcut is broken. And until our instincts catch up, skepticism is your best defense.

How to stay one step ahead

  1. Slow the game down.
    Scammers rely on speed. Hang up, close the tab, take a breath. If it’s real, it’ll still be there in five minutes.
  2. Verify on a second channel.
    If your “bank” or “boss” calls, reach out using a number or app you already trust. Don’t rely on the contact info they provide.
  3. Lock down big moves.
    Use two-factor authentication, passphrases, or code words for any important accounts or transactions (a short code sketch of how standard two-factor codes work follows this list).
  4. Educate your circle.
    Most deepfake losses happen because someone else panicked. Talk to your family, especially seniors. Share this newsletter. Report fake ads. Keep each other sharp.
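
A quick illustration of point 3: standard two-factor codes (TOTP) are just a shared secret plus the current time, which is why a scammer who only has your password still gets locked out. A minimal sketch in Python, using the third-party pyotp library; the secret is generated on the spot and purely illustrative:

    # pip install pyotp
    import pyotp

    secret = pyotp.random_base32()   # shared once between the service and your authenticator app
    totp = pyotp.TOTP(secret)

    code = totp.now()                # the 6-digit code your authenticator app displays
    print("Current code:", code)

    # The service checks the submitted code; a stolen password alone is not enough.
    print("Valid right now:", totp.verify(code))
    print("Guessed code:", totp.verify("000000"))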

Many of these scams fall apart the moment you stop and think. The goal is always the same: get you to act fast. But you don’t have to play along.

Stay calm. Stay sharp. Stay skeptical.

 

Yours in privacy,
Naomi

Naomi Brockwell is a privacy advocate and professional speaker, MC, interviewer, producer, and podcaster, specialising in blockchain, cryptocurrency, and economics. She runs the NBTV channel on Rumble.

TNT is truly independent!

We don’t have a billionaire owner, and our unique reader-funded model keeps us free from political or corporate influence. This means we can fearlessly report the facts and shine a light on the misdeeds of those in power.

Consider a donation to keep our independent journalism running…

Spilling the Tea: KYC is a liability, not a safety feature

Published 3 August 2025, 8:12
– By Naomi Brockwell
5 minute read

This week, a devastating breach exposed tens of thousands of users of Tea, a dating safety app that asked women to verify their identity with selfies, government IDs, and location data.

Over 72,000 images were found in a publicly accessible Firebase database. No authentication required. 4chan users discovered the open bucket and immediately began downloading and sharing the contents: face scans, driver’s licenses, and private messages. Some users have already used the leaked IP addresses to build and circulate maps that attempt to track and trace the women in those files.
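
To make "no authentication required" concrete: a misconfigured Firebase Storage bucket will answer a listing request from anyone on the internet, with no login or token attached. A minimal sketch of such a check, in Python with the requests library; the bucket name is hypothetical, and probing like this should only ever be done against systems you own:

    import requests

    BUCKET = "example-app.appspot.com"   # hypothetical bucket name, for illustration only
    url = f"https://firebasestorage.googleapis.com/v0/b/{BUCKET}/o"

    resp = requests.get(url)             # note: no credentials of any kind attached
    if resp.status_code == 200:
        items = resp.json().get("items", [])
        print(f"Publicly listable: {len(items)} objects visible to anyone")
    else:
        print(f"Listing rejected with HTTP {resp.status_code}, as a locked-down bucket should respond")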

Tea confirmed the breach, claiming the data came from a legacy system. But that doesn’t change the core issue:
This data never should have been collected in the first place.

What’s marketed as safety often doubles as surveillance

Tea is just one example of a broader trend: platforms claiming to protect you while quietly collecting as much data as possible. “Verification” is marketed as a security feature, something you do for your own good. The app was pitched as a tool to help women vet potential dates, avoid abuse, and stay safe. But in practice, access required handing over deeply personal data. Face scans, government-issued IDs, and real-time location information became the price of entry.

This is how surveillance becomes palatable. The language of “just for verification” hides the reality. Users are given no transparency about where their data is stored, how long it is kept, or who can access it. These aren’t neutral design choices. They are calculated decisions that prioritize corporate protection, not user safety.

We need to talk about KYC

What happened with Tea reflects a much bigger issue. Identification is quietly becoming the default requirement for access to the internet. No ID? No entry. No selfie? No account. This is how KYC culture has expanded, moving far beyond finance into social platforms, community forums, and dating apps.

We’ve been taught to believe that identity verification equals safety. But time and again, that promise falls apart. Centralized databases get breached, IP addresses are logged and weaponized, and photos meant for internal review end up archived on the dark web.

If we want a safer internet, we need to stop equating surveillance with security. The real path to safety is minimizing what gets collected in the first place. That means embracing pseudonyms, decentralizing data, and building systems that do not rely on a single gatekeeper to decide who gets to participate.

“Your data will be deleted”. Yeah, right.

Tea’s privacy policy stated in black and white:

Selfies and government ID images “will be deleted immediately following the completion of the verification process”.

Yet here we are. Over 72,000 images are now circulating online, scraped from an open Firebase bucket. That’s a direct contradiction of what users were told. And it’s not an isolated incident.

This kind of betrayal is becoming disturbingly common. Companies collect high-risk personal data and reassure users with vague promises:

“We only keep it temporarily”.
“We delete it right after verification”.
“It’s stored securely”.

These phrases are repeated often, to make us feel better about handing over our most private information. But there’s rarely any oversight, and almost never any enforcement.

At TSA checkpoints in the U.S., travelers are now being asked to scan their faces. The official line? The images are immediately deleted. But again, how do we know? Who verifies that? The public isn’t given access to the systems handling those scans. There’s no independent audit, no transparency, and we’re asked to trust blindly.

The truth is, we usually don’t know where our data goes. “Just for verification” has become an excuse for massive data collection. And even if a company intends to delete your data, it still exists long enough to be copied, leaked, or stolen.

Temporary storage is still storage.

This breach shows how fragile those assurances really are. Tea said the right things on paper, but in practice, their database was completely unprotected. That’s the reality behind most “privacy policies”: vague assurances, no independent oversight, and no consequences when those promises are broken.

KYC pipelines are a perfect storm of risk. They collect extremely sensitive data. They normalize giving it away. And they operate behind a curtain of unverifiable claims.

It’s time to stop accepting “don’t worry, it’s deleted” as a substitute for actual security. If your platform requires storing sensitive personal data, that data becomes a liability the moment it is collected.

The safest database is the one that never existed.
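
What does minimizing collection look like in practice? One pattern is to hold an ID image only in memory for the duration of the check and persist nothing but the outcome. A rough sketch in Python; the verification step is a placeholder, not any real vendor's API:

    import hashlib

    def run_id_checks(image_bytes: bytes) -> bool:
        # Placeholder: a real system would call an ID-verification service here.
        return len(image_bytes) > 0

    def verify_and_discard(image_bytes: bytes, user_id: str) -> dict:
        # Check an uploaded ID image without ever writing it to disk or a bucket.
        passed = run_id_checks(image_bytes)
        # Persist at most a salted fingerprint (for audits or duplicate detection),
        # never the image itself, so there is nothing for a breach to expose later.
        fingerprint = hashlib.sha256(user_id.encode() + image_bytes).hexdigest()
        return {"user_id": user_id, "verified": passed, "id_fingerprint": fingerprint}

    print(verify_and_discard(b"...uploaded image bytes...", "user-123"))

Even this only helps if the image never touches logs, caches, or backups along the way; the stronger answer is to not ask for the ID at all.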

A delicate cultural moment

This story has touched a nerve. Tea was already controversial, with critics arguing it enabled anonymous accusations and blurred the line between caution and public shaming. Some see the breach as ironic, even deserved.

But that is not the lesson we should take from this.

The breach revealed how easily identity exposure has become normalized, how vulnerable we all are when ID verification is treated as the default, and how quickly sensitive data becomes ammunition once it slips out of the hands of those who collected it.

It’s a reminder that we are all vulnerable in a world that demands ID verification just to participate in daily life.

This isn’t just about one app’s failure. It’s a reflection of the dangerous norms we’ve accepted.

Takeaways

  • KYC is a liability, not a security measure. The more personal data a platform holds, the more dangerous a breach becomes.
  • Normalizing ID collection puts people at risk. The existence of a database is always a risk, no matter how noble the intent.
  • We can support victims of surveillance without endorsing every platform they use. Privacy isn’t conditional on whether we like someone or not.
  • It’s time to build tools that don’t require identity. True safety comes from architectures that protect by design.

Let this be a wake-up call. Not just for the companies building these tools, but for all of us using them. Think twice before handing over your ID or revealing your IP address to a platform you use.

 

Yours in privacy,
Naomi

Naomi Brockwell is a privacy advocate and professional speaker, MC, interviewer, producer, and podcaster, specialising in blockchain, cryptocurrency, and economics. She runs the NBTV channel on Rumble.

Zuckerberg: Skipping AI glasses puts you at a “cognitive disadvantage”

The future of AI

Published 2 August 2025, 13:41
– By Editorial Staff
"The ideal form factor for AI, because you can let an AI see what you see throughout the day, hear what you hear, and talk to you", believes the Meta CEO.
2 minute read

Meta CEO Mark Zuckerberg warns that people without AI glasses will find themselves at a significant mental “disadvantage” in the future. During the company’s quarterly report, he shared his vision of glasses as the primary way to interact with artificial intelligence.

On Thursday, Meta released its quarterly report. In a call directed at investors, CEO Mark Zuckerberg spoke about the company’s investment in smart glasses and warned about the consequences of staying outside this development, reports TechCrunch.

“I continue to think that glasses are basically going to be the ideal form factor for AI, because you can let an AI see what you see throughout the day, hear what you hear, and talk to you”, Zuckerberg said during the investor call.

By adding screens, even more value can be unlocked, he argued, whether it involves holographic fields of vision or smaller displays in everyday AI glasses.

“I think in the future, if you don’t have glasses that have AI – or some way to interact with AI – I think you’re … probably going to be at a pretty significant cognitive disadvantage compared to other people”, he added.

Unexpected success

Meta has focused on “smart” glasses like the Ray-Ban Meta and Oakley Meta models. The glasses allow users to listen to music, take photos and ask questions to Meta AI. The products have become a surprising success – revenue from Ray-Ban Meta glasses more than tripled compared to the previous year.

However, the Reality Labs division has been costly. Meta reported $4.53 billion in operating losses for the second quarter, and since 2020, the unit has lost nearly $70 billion.

Competition is growing. OpenAI acquired Jony Ive’s startup company this spring for $6.5 billion to develop AI devices, while other companies are exploring AI brooches and pendants.

However, Zuckerberg is convinced about the future of glasses and connects them to the Metaverse vision.

“The other thing that’s awesome about glasses is they are going to be the ideal way to blend the physical and digital worlds together”, he concluded.

Meta has previously drawn criticism for contributing to the growing surveillance society and for ignoring health concerns about radiation from wireless technology.

Samsung and Tesla sign billion-dollar deal for AI chip manufacturing

The future of AI

Published 31 July 2025
– By Editorial Staff
Construction of Samsung's large chip factory in Taylor, Texas, USA.
2 minute read

South Korean tech giant Samsung has entered into a comprehensive agreement with Tesla to manufacture next-generation AI chips. The contract, which extends until 2033, is worth $16.5 billion and means Samsung will dedicate its new Texas-based factory to producing Tesla’s AI6 chips.

Samsung receives a significant boost for its semiconductor manufacturing through the new partnership with Tesla. The electric vehicle manufacturer has chosen to place production of its advanced AI6 chips at Samsung’s facility in Texas, in a move that could change competitive dynamics within the semiconductor industry, writes TechCrunch.

“The strategic importance of this is hard to overstate”, wrote Tesla founder Elon Musk on X when the deal was announced.

The agreement represents an important milestone for Samsung, which has previously struggled to attract and retain major customers for its chip manufacturing. According to Musk, Tesla may end up spending significantly more than the original $16.5 billion on Samsung chips.

“Actual output is likely to be several times higher”, he explained in a later post.

Tesla’s chip strategy takes shape

The AI6 chips form the core of Tesla’s ambition to evolve from car manufacturer to an AI and robotics company. The new generation chip is designed as an all-around solution that can be used both for the company’s Full Self-Driving system and for the humanoid robots of the Optimus model that Tesla is developing, as well as for high-performance AI training in data centers.

Tesla is working in parallel with Taiwanese chip manufacturer TSMC for production of AI5 chips, whose design was recently completed. These will initially be manufactured at TSMC’s facility in Taiwan and later also in Arizona. Samsung already produces Tesla’s AI4 chips.

Since 2019, Tesla has developed its own custom chips after leaving Nvidia’s Drive platform. The first self-developed chipset, known as FSD Computer or Hardware 3, was launched the same year and installed in all of the company’s electric vehicles.

Musk promises personal involvement

In an unusual turn, Samsung has agreed to let Tesla assist in maximizing manufacturing efficiency at the Texas factory. Musk has promised to be personally on site to accelerate progress.

“This is a critical point, as I will walk the line personally to accelerate the pace of progress. And the fab is conveniently located not far from my house”, he wrote.

The strategic partnership could give Samsung the stable customer volume the company needs to compete with industry leader TSMC, while Tesla secures access to advanced chip manufacturing for its growing AI ambitions.

Women’s app hacked – thousands of private images leaked

Published 29 July 2025
– By Editorial Staff
1 minute read

An app that helps women identify problematic men became a target for hackers. Over 70,000 images, including selfies and driver’s licenses, were leaked to 4chan.

The dating app Tea, which allows women to warn each other about “red flags” in men, suffered a major data breach last week. According to 404 Media, hackers from the 4chan forum managed to access 72,000 images from the app’s database, of which 13,000 were selfies and driver’s license photos.

The app was created by software developer Sean Cook, inspired by his mother’s “terrifying” dating experiences. Tea has over four million active users and topped Apple’s App Store last week.

Careless data handling

The company stored sensitive user data on Google’s cloud service Firebase, where the information became accessible to unauthorized parties. Several cybersecurity experts have criticized the company’s methods as “careless”.

— A company should never host users’ private data on a publicly accessible server, says Grant Ho, professor at the University of Chicago, to The Verge.

Andrew Guthrie Ferguson, law professor at George Washington University, warns that digital “whisper networks” lose control over sensitive information.

— What changes when it’s digital and recoverable and save-able and searchable is you lose control over it, he says.

Tea has launched an investigation together with external cybersecurity companies.

Our independent journalism needs your support!
Consider a donation.

You can donate any amount of your choosing, one-time payment or even monthly.
We appreciate all of your donations to keep us alive and running.

Don’t miss another article!

Sign up for our newsletter today!

Get uncensored news – free from industry interests and political correctness – from the Polaris of Enlightenment every week.