
AI expected to boost Swedish economy – and leave many jobless

Published 15 April 2024
– By Editorial Staff
A wide variety of practical tasks can now be performed by AI-controlled robotic workers.
2 minute read

Artificial intelligence could soon add up to SEK 550 billion to Sweden’s GDP, according to a report commissioned by Google.

At the same time, a number of professions are expected to be automated and “disappear” as a result of AI developments – including translators, service workers and salespeople. It is also unclear who will reap the benefits.

According to the report, which was commissioned by Google and prepared by the consulting firm Implement, so-called “generative AI” could increase Sweden’s annual GDP by about 9 percent – or between 500 and 550 billion kronor – over the next decade.
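For scale, a back-of-the-envelope check (using an assumed GDP base of roughly SEK 6,000 billion, our figure rather than the report's) shows how the percentage and the kronor range line up:

```python
# Rough sanity check: 9 percent of an assumed annual GDP of about
# SEK 6,000 billion falls inside the reported 500-550 billion range.
gdp_sek_billion = 6_000       # assumed approximate GDP, not from the report
uplift_share = 0.09           # the report's roughly 9 percent estimate
print(gdp_sek_billion * uplift_share)  # 540.0 (billion SEK)
```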

Robot workers in factories and industries are already a reality in many places. The next major step in the automation of society is expected to affect mainly office services, and it is in areas such as information, communication, finance, insurance, business services, education and health care that artificial intelligence is expected to have the greatest “productivity gains” in the future.

Generative AI is a type of AI that can create digital content such as text, images, music or movies by “learning” from large amounts of data and generating new unique content similar to what it has been trained on.
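To make that idea concrete, here is a deliberately tiny sketch of the learn-then-generate principle: a word-level Markov chain that "learns" which word tends to follow which from sample text, then samples new text resembling it. Modern generative AI uses neural networks trained on vastly more data, but the underlying idea of learning statistical patterns and generating similar content is the same:

```python
import random
from collections import defaultdict

# Toy illustration only: learn word-to-word transitions from sample
# text, then generate new text that resembles the training data.
corpus = "the cat sat on the mat and the dog sat on the rug".split()

transitions = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    transitions[current_word].append(next_word)

word = random.choice(corpus)
generated = [word]
for _ in range(8):
    followers = transitions.get(word)
    if not followers:  # no observed continuation for this word
        break
    word = random.choice(followers)
    generated.append(word)

print(" ".join(generated))
```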

Optimize or replace

– Here, generative AI tools are more likely to complement what humans do. Only a small share of jobs will be significantly affected by automation, claims Anna Wikland, head of Google Sweden, according to the tax-funded SVT.

Six percent of jobs are expected to be replaced on a large scale by AI in the future – including call center staff, office support functions, technicians, salespeople, service personnel and translators. Many other professions are expected to be affected in various ways as the technology develops.

At the same time, those who work in construction, cleaning, cooking or caregiving need not be particularly concerned, as AI is considered unlikely to have any significant impact on those fields.

“Need for discussion”

US authorities have previously warned that “AI can increase income inequality if it is used to replace people in low-income jobs while strengthening people in high-income jobs”, and there are concerns that big companies will reap the profits from AI developments – while ordinary people see their jobs disappear forever.

– At the beginning of industrialization, it seemed that all productivity gains ended up almost exclusively in the hands of corporations, but then came a political moment when this was reversed and the gains were redistributed across society in different ways, says Nicklas Lundblad, a member of the Swedish government’s AI commission.

– Everyone has to take a stand. It is not inevitable that things go one way or the other. We need a discussion as a society, he continues.


Tech giants’ executives become US military officers – gain power over future warfare

The future of AI

Published 26 June 2025, 15:45
– By Editorial Staff
Data from platforms such as Facebook, Instagram, and WhatsApp could soon be linked to US military surveillance systems, according to the tech platform Take Back Our Tech (TBOT).
3 minute read

Four senior executives from tech giants Meta, Palantir, and OpenAI have recently been sworn into the US Army Reserve with the rank of lieutenant colonel – an officer rank that normally requires over 20 years of active military service.

The group is part of a new initiative called Detachment 201, aimed at transforming the American military by integrating advanced technologies such as drones, robotics, augmented reality (AR), and AI support.

The new recruits are:

  • Shyam Sankar, Chief Technology Officer (CTO) of Palantir
  • Andrew Bosworth, Chief Technology Officer (CTO) of Meta
  • Kevin Weil, Chief Product Officer (CPO) of OpenAI
  • Bob McGrew, former Research Director at OpenAI

According to the technology platform Take Back Our Tech (TBOT), which monitors these developments, these are not symbolic appointments.

“These aren’t random picks. They’re intentional and bring representation and collaboration from the highest level of these companies”, writes founder Hakeem Anwar.

Meta and Palantir on the battlefield

Although the newly appointed officers must formally undergo physical training and weapons instruction, they are expected to participate primarily in digital defense. Their mission is to help the army adapt to a new form of warfare where technology takes center stage.

“The battlefield is truly transforming and so is the government”, notes Anwar.

According to Anwar, the recruitment of Palantir’s CTO could mean the military will start using the company’s Gotham platform as standard. Gotham is a digital interface that collects intelligence and monitors targets through satellite imagery and video feeds.

Meta’s CTO is expected to contribute to integrating data from platforms like Facebook, Instagram, and WhatsApp, which according to TBOT could be connected to military surveillance systems. These platforms are used by billions of people worldwide and contain vast amounts of movement, communication, and behavioral data.

“The activities, movements, and communications from these apps could be integrated into this surveillance network”, writes Anwar, adding:

“It’s no wonder why countries opposed to the US like China have been banning Meta products”.

Leaked project reveals AI initiative for entire government apparatus

Regarding OpenAI’s role, Anwar suggests that Kevin Weil and Bob McGrew might design an AI interface for the army, where soldiers would have access to AI chatbots to support strategy and field tactics.

Just as Detachment 201 became public, a separate AI initiative within the US government leaked. The website ai.gov, still under development, reveals a plan to equip the entire federal administration with AI tools – from code assistants to AI chatbots for internal use.

TBOT notes that the initiative relies on AI models from OpenAI, Google, and Anthropic. The project is led by the General Services Administration, under former Tesla engineer Thomas Shedd, who has also been involved in DOGE (the Department of Government Efficiency).

“The irony? The website itself was leaked during development, demonstrating that AI isn’t foolproof and can’t replace human expertise”, comments Anwar.

According to the tech site’s founder, several federal employees are critical of the initiative, concerned about insufficient safeguards.

“Without proper safeguards, diving head first into AI could create new security vulnerabilities, disrupt operations, and further erode privacy”, he writes.

Deepfakes are getting scary good

Why your next “urgent” call or ad might be an AI scam.

Published 22 June 2025
– By Naomi Brockwell
4 minute read

This week I watched a compilation of video clips that looked absolutely real. Police officers, bank managers, disaster relief workers, product endorsements… but every single one was generated by AI. None of the people, voices, or backdrops ever existed.

It’s fascinating… and chilling. Because the potential for misuse is growing fast, and most people aren’t ready.

This same technology already works in real time. Someone can join a Zoom call, flip a switch, and suddenly look and sound like your boss, your spouse, or your favorite celebrity. That opens the door to a new generation of scams, and people everywhere are falling for them.

The old scripts, supercharged by AI

“Ma’am, I’m seeing multiple threats on your computer. I need remote access right now to secure your files”.
Tech-support scams used to rely on a shaky phone line and a thick accent. Now an AI voice clone mimics a calm AppleCare rep, shares a fake malware alert, and convinces you to install remote-control software. One click later, they’re digging through your files and draining your bank account.

“We’ve detected suspicious activity on your account. Please verify your login”.
Phishing emails are old news. But now people are getting FaceTime calls that look like their bank manager. The cloned face reads off the last four digits of your card, then asks you to confirm the rest. That’s all they need.

“Miracle hearing aids are only $14.99 today. Click to order”.
Fake doctors in lab coats (generated by AI) are popping up in ads, selling junk gadgets. The product either never arrives, or the site skims your card info.

“We just need your Medicare number to update your benefits for this year.”
Seniors are being targeted with robocalls that splice in their grandchild’s real voice. Once the scammer gets your Medicare ID, they start billing for fake procedures that mess up your records.

“Congratulations, you’ve won $1,000,000! Just pay the small claiming fee today”.
Add a fake newscaster to an old lottery scam, and suddenly it feels real. Victims rush to “claim their prize” and wire the fee… straight to a fraudster.

“We’re raising funds for a sick parishioner—can you grab some Apple gift cards?”
Community members are seeing AI-generated videos of their own pastor asking for help. Once the card numbers are sent, they’re gone.

“Can you believe these concert tickets are so cheap?”
AI-generated influencers post about crazy ticket deals. Victims buy, receive a QR code, and show up at the venue, only to find the code has already been used.

“Help our disaster-relief effort.”
Hours after a real hurricane or earthquake, fake charity appeals start circulating. The links look urgent and heartfelt, and route donations to crypto wallets that vanish.

Why we fall for it and what to watch out for

High pressure
Every scammer plays the same four notes: fear, urgency, greed, and empathy. They hit you with a problem that feels like an emergency, offer a reward too good to miss, or ask for help in a moment of vulnerability. These scams only work if you rush. That’s their weak spot. If something feels high-pressure, pause. Don’t make decisions in panic. You can always ask someone you trust for advice.

Borrowed credibility
Deepfakes hijack your instincts. When someone looks and sounds like your boss, your parent, or a celebrity, your brain wants to trust them. But just because you recognize the face doesn’t mean it’s real. Don’t assume a familiar voice or face is proof of identity. Synthetic media can be convincing enough to fool even close friends.

Trusted platforms become delivery trucks
We tend to relax when something comes through a trusted source — like a Zoom call, a blue-check account, or an ad on a mainstream site. But scammers exploit that trust. Just because something shows up on a legitimate platform doesn’t mean it’s safe. The platform’s credibility rubs off on the fake.

Deepfakes aren’t just a technology problem, they’re a human one. For most of history, our eyes and ears were reliable lie detectors. Now, that shortcut is broken. And until our instincts catch up, skepticism is your best defense.

How to stay one step ahead

  1. Slow the game down.
    Scammers rely on speed. Hang up, close the tab, take a breath. If it’s real, it’ll still be there in five minutes.
  2. Verify on a second channel.
    If your “bank” or “boss” calls, reach out using a number or app you already trust. Don’t rely on the contact info they provide.
  3. Lock down big moves.
    Use two-factor authentication, passphrases, or code words for any important accounts or transactions (see the sketch after this list).
  4. Educate your circle.
    Most deepfake losses happen because someone else panicked. Talk to your family, especially seniors. Share this newsletter. Report fake ads. Keep each other sharp.
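
As a hypothetical illustration of the code-word idea (the phrase and names below are invented, not a tool this newsletter endorses), a pre-shared phrase can be used in a challenge-response so the phrase itself is never spoken aloud where a cloned voice could capture and replay it:

```python
import hmac, hashlib, secrets

# Hypothetical sketch: both parties agree on a secret phrase in person.
# During an "urgent" call, the receiver issues a random challenge; only
# someone who knows the phrase can compute the matching response.
SHARED_PHRASE = b"correct horse battery staple"  # agreed in advance

def answer(challenge: str) -> str:
    # Short, speakable response derived from the challenge and the phrase
    return hmac.new(SHARED_PHRASE, challenge.encode(), hashlib.sha256).hexdigest()[:8]

challenge = secrets.token_hex(4)     # receiver invents a fresh challenge
expected = answer(challenge)         # receiver computes it privately

caller_response = answer(challenge)  # caller computes on their own device
print(hmac.compare_digest(caller_response, expected))  # True for the real caller
```

Even a plain spoken code word stops most of these scams; the point is simply to verify identity with something a deepfake cannot clone.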

Many of these scams fall apart the moment you stop and think. The goal is always the same: get you to act fast. But you don’t have to play along.

Stay calm. Stay sharp. Stay skeptical.

 

Yours in privacy,
Naomi

Naomi Brockwell is a privacy advocate and professional speaker, MC, interviewer, producer, and podcaster, specialising in blockchain, cryptocurrency and economics. She runs the NBTV channel on YouTube.

Tech company bankrupt – “advanced AI” was 700 Indians

Published 14 June 2025
– By Editorial Staff
“AI washing” refers to a company exaggerating or lying about their products or services being powered by advanced artificial intelligence in order to attract investors and customers.
2 minute read

An AI company that marketed itself as a technological pioneer – and attracted investments from Microsoft, among others – has gone bankrupt. In the aftermath, it has been revealed that the technology was largely based on human labor, despite promises of advanced artificial intelligence.

Builder.ai, a British startup formerly known as Engineer.ai, claimed that its AI assistant Natasha could build apps as easily as ordering pizza. But as early as 2019, the Wall Street Journal revealed that much of the coding was actually done manually by around 700 programmers in India.

Despite the allegations, Builder.ai secured over $450 million in funding from investors such as Microsoft, Qatar Investment Authority, IFC, and SoftBank’s DeepCore. At its peak, the company was valued at $1.5 billion.

In May 2025, founder and CEO Sachin Dev Duggal stepped down from his position, and when the new management took over, it emerged that the 2019 revelations were only the tip of the iceberg. For example, the company had reported revenues of $220 million for 2024, while the actual figure was around $55 million. Furthermore, the company is suspected of inflating the numbers through circular transactions and fake sales via “third-party resellers”, reports the Financial Times.

Following the new revelations, lenders froze the company’s account, forcing Builder.ai into bankruptcy. The company is now accused of so-called AI washing, which means that a company exaggerates or falsely claims that its products or services are powered by advanced artificial intelligence in order to attract investors and customers.

The company’s heavy promotion of “Natasha” as a revolutionary AI solution turned out to be a facade – behind the deceptive marketing ploy lay traditional, human-driven work and financial irregularities.

OpenAI now keeps your ChatGPT logs… Even if you delete them

Why trusting companies isn’t enough—and what you can do instead.

Published 14 June 2025
– By Naomi Brockwell
5 minute read

This week, we learned something disturbing: OpenAI is now being forced to retain all ChatGPT logs, even the ones users deliberately delete.

That includes:

  • Manually deleted conversations
  • “Temporary Chat” sessions that were never supposed to persist
  • Confidential business data passed through OpenAI’s API

The reason? A court order.

The New York Times and other media companies are suing OpenAI over alleged copyright infringement. As part of the lawsuit, they speculated that people might be using ChatGPT to bypass paywalls, and deleting their chats to cover their tracks. Based on that speculation alone, a judge issued a sweeping preservation order forcing OpenAI to retain every output log going forward.

Even OpenAI doesn’t know how long they’ll be required to keep this data.

This is bigger than just one court case

Let’s be clear: OpenAI is not a privacy tool. They collect a vast amount of user data, and everything you type is tied to your real-world identity. (They don’t even allow VoIP numbers at signup, only real mobile numbers.) OpenAI is a fantastic tool for productivity, coding, research, and brainstorming. But it is not a place to store your secrets.

That said, credit where it’s due: OpenAI is pushing back. They’ve challenged the court order, arguing it undermines user privacy, violates global norms, and forces them to retain sensitive data users explicitly asked to delete.

And they’re right to fight it.

If a company promises, “We won’t keep this”, and users act on that promise, they should be able to trust it. When that promise is quietly overridden by a legal mandate—and users only find out months later—it destroys the trust we rely on to function in a digital society.

Why this should scare you

This isn’t about sneaky opt-ins or buried fine print. It’s about people making deliberate choices to delete sensitive data—and those deletions being ignored.

That’s the real problem: the nullification of your right to delete.

Private thoughts. Business strategy. Health questions. Intimate disclosures. These are now being held under legal lock, despite clear user intent for them to be erased.

When a platform offers a “Delete” button or advertises “Temporary Chat”, the public expectation is clear: that information will not persist.

But in a system built for compliance, not consent, those expectations don’t matter.

I wish this weren’t the case

I want to live in a world where:

  • You can go to the doctor and trust that your medical records won’t be subpoenaed
  • You can talk to a lawyer without fearing your conversations could become public
  • Companies that want to protect your privacy aren’t forced to become surveillance warehouses

But we don’t live in that world.

We live in a world where:

  • Prosecutors can compel companies to hand over privileged legal communications (just ask Roger Ver’s lawyers)
  • Government entities can override privacy policies, without user consent or notification
  • “Delete” no longer means delete

This isn’t privacy. It’s panopticon compliance.

So what can you do?

You can’t change the court order.
But you can stop feeding the machine.

Here’s how to protect yourself:

1. Be careful what you share

When you are logged in to centralized tools like ChatGPT, Claude, or Perplexity, your activity is stored and linked to a single identity across sessions. That makes your full history a treasure trove of data.

You can still use these tools for light, non-sensitive tasks, but be careful not to share:

  • Sensitive information
  • Legal or business strategies
  • Financial details
  • Anything that could harm you if leaked

These tools are great for brainstorming and productivity, but not for contracts, confessions, or client files.

2. Use privacy-respecting platforms (with caution)

If you want to use AI tools with stronger privacy protections, here are two promising options:
(there are many more, let us know in the comments about your favorites)

Brave’s Leo

  • Uses reverse proxies to strip IP addresses
  • Promises zero logging of queries
  • Supports local model integration so your data never leaves your device
  • Still requires trust in Brave’s infrastructure

Venice.ai

  • No account required
  • Strips IP addresses and doesn’t link sessions together
  • Uses a decentralized GPU marketplace to process your queries
  • Important caveat: Venice is just a frontend—the compute providers running your prompts can see what you input. Venice can’t enforce logging policies on backend providers.
  • Because it’s decentralized, at least no single provider can build a profile of you across sessions

In short: I trust Brave with more data, because privacy is central to their mission. And I trust Venice’s promise not to log data, but am hesitant about trusting faceless GPU providers to adhere to the same no-logging policies. But as a confidence booster, Venice’s decentralized model means even those processing your queries can’t see the full picture, which is a powerful safeguard in itself. So both options above are good for different purposes.

3. Run AI locally for maximum privacy

This is the gold standard.

When you run an AI model locally, your data never leaves your machine. No cloud. No logs.

Tools like Ollama, paired with OpenWebUI, let you easily run powerful open-source models on your own device.

We published a complete guide for getting started—even if you’re not technical.
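
As a minimal sketch of what that local setup looks like in practice (assuming Ollama is installed, its server is running on the default port, and a model such as llama3 has already been pulled), a prompt can be sent to the local REST API without anything leaving your machine:

```python
import requests

# Query a locally running Ollama server over its REST API.
# Assumes: Ollama installed, serving on the default port 11434,
# and a model pulled beforehand (e.g. `ollama pull llama3`).
response = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3",    # any locally pulled model name
        "prompt": "Explain deepfakes in one sentence.",
        "stream": False,      # return a single complete JSON reply
    },
    timeout=120,
)
print(response.json()["response"])  # the prompt never left your device
```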

The real battle: Your right to privacy

This isn’t just about one lawsuit or one company.

It’s about whether privacy means anything in the digital age.

AI tools are rapidly becoming our therapists, doctors, legal advisors, and confidants. They know what we eat, what we’re worried about, what we dream of, and what we fear. That kind of relationship demands confidentiality.

And yet, here we are, watching that expectation collapse under the weight of compliance.

If courts can force companies to preserve deleted chats indefinitely, then deletion becomes a lie. Consent becomes meaningless. And companies become surveillance hubs for whoever yells loudest in court.

The Fourth Amendment was supposed to stop this. It says a warrant is required before private data can be seized. But courts are now sidestepping that by ordering companies to keep everything in advance—just in case.

We should be fighting to reclaim that right. Not normalizing its erosion.

Final Thoughts

We are in a moment of profound transition.

AI is rapidly becoming integrated into our daily lives—not just as a search tool, but as a confidant, advisor, and assistant. That makes the stakes for privacy higher than ever.

If we want a future where privacy survives, we can’t just rely on the courts to protect us. We have to be deliberate about how we engage with technology—and push for tools that respect us by design.

As Erik Voorhees put it: “The only way to respect user privacy is to not keep their data in the first place”.

The good news? That kind of privacy is still possible.
You have options. You can use AI on your terms.

Just remember:

Privacy isn’t about hiding. It’s about control.
About choosing what you share—and with whom.

And right now, the smartest choice might be to share a whole lot less.

 

Yours in privacy,
Naomi

Naomi Brockwell is a privacy advocate and professional speaker, MC, interviewer, producer, and podcaster, specialising in blockchain, cryptocurrency and economics. She runs the NBTV channel on YouTube.
