
Warning: Unencrypted texts and calls are vulnerable

Published 4 January 2025
– By Editorial Staff

US security authorities are urging the public to use encrypted messaging apps to protect their digital communications.

The recommendation follows reports of widespread breaches at major telecom companies, where actors are suspected of exploiting security flaws to access sensitive information.

According to Jeff Greene of the US Cybersecurity and Infrastructure Security Agency (CISA), encryption is crucial.

– Encryption is your friend, whether it’s on text messaging or if you have the capacity to use encrypted voice communication. Even if the adversary is able to intercept the data, if it is encrypted, it will make it impossible [to use].

Several experts also warn about the risks of legacy messaging services such as SMS, as well as some solutions based on RCS (Rich Communication Services), a newer text-messaging standard in which end-to-end encryption is not always active. Apps such as Signal and WhatsApp are promoted as more secure alternatives, but they require both parties to use them.

– It is well documented that SMS messages are not encrypted, and any unencrypted form of communication can be surveilled by law enforcement or anyone with the right tools, knowledge and software, said Jake Moore, a cybersecurity expert at ESET, a global company specializing in IT security.
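
To make the point concrete, here is a minimal sketch of public-key, end-to-end encryption in Python using the PyNaCl library (the library choice and the message are assumptions for illustration; Signal and WhatsApp build far more elaborate protocols on top of similar primitives). An interceptor who captures the ciphertext learns nothing usable without the recipient’s private key.

    # Minimal end-to-end encryption sketch using PyNaCl (pip install pynacl).
    # Illustrative only: real messengers add key exchange, key rotation and
    # forward secrecy on top of primitives like these.
    from nacl.public import PrivateKey, Box

    # Each party generates a keypair; only public keys are ever exchanged.
    alice_sk = PrivateKey.generate()
    bob_sk = PrivateKey.generate()

    # Alice encrypts for Bob with her private key and Bob's public key.
    sending_box = Box(alice_sk, bob_sk.public_key)
    ciphertext = sending_box.encrypt(b"Meet at noon.")

    # Anyone intercepting the message sees only random-looking bytes.
    print(ciphertext.hex()[:32], "...")

    # Bob decrypts with his private key and Alice's public key.
    receiving_box = Box(bob_sk, alice_sk.public_key)
    print(receiving_box.decrypt(ciphertext))  # b'Meet at noon.'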


Apple sued over iPhone eavesdropping – users may get payouts

Published 2 June 2025, 23:06
– By Editorial Staff
Apple has denied any wrongdoing – but finally agreed to pay $95 million in a settlement.

Apple’s voice assistant Siri was activated without commands and recorded sensitive conversations – recordings that were also allegedly shared with other companies.

Now users in the US can claim compensation – even if the amounts are relatively small.

Technology giant Apple came under fire after it was discovered that its voice assistant, Siri, recorded private conversations without users’ knowledge. The company agreed to pay $95 million in a settlement reached in December last year, following a class action lawsuit alleging privacy violations.

The lawsuit was filed in 2021 by California resident Fumiko Lopez along with other Apple users. They stated that Siri-enabled devices recorded conversations without users first intentionally activating the voice assistant by saying “Hey Siri” or pressing the side button.

According to the allegations, the recordings were not only used to improve Siri, but were also shared with third-party contractors and other actors – without users’ consent. It is also alleged that the information was used for targeted advertising, in violation of both US privacy laws and Apple’s own privacy policy.

Apple has consistently denied the allegations, maintaining that its actions were neither “wrong nor illegal”. However, paying such a large sum to avoid further litigation has raised questions about what may have been hidden from the public.

Users can claim compensation

Individuals who owned a Siri-enabled Apple product – such as an iPhone, iPad, Apple Watch, MacBook, iMac, HomePod, iPod touch or Apple TV – between September 17, 2014 and December 31, 2024, and who live in the United States or a U.S. territory, may now be entitled to compensation.

However, to qualify, one must certify that Siri was inadvertently activated during a conversation that was intended to be private or confidential.

The reimbursement applies to up to five devices, with a cap of $20 per device – totaling up to $100 per person. The exact amount per user will be determined once all claims have been processed.

Applications must be submitted by July 2, 2025, and those eligible may have already received an email or physical letter with an identification code and a confirmation code. Those who haven’t received anything but still believe they qualify can instead apply for reimbursement via the settlement’s website, provided they supply the model and serial number of their devices.

How to protect yourself from future interception

Users who want to strengthen their privacy can limit Siri’s access themselves in the settings:

  • Turn off Improve Siri: Go to Settings > Privacy & Security > Analytics & Improvements and disable Improve Siri & Dictation.
  • Delete Siri history: Go to Settings > Siri > Siri & Dictation History and select Delete Siri & Dictation History.
  • Turn off Siri completely: Go to Settings > Siri > Listen for “Hey Siri”, turn it off, then go to Settings > General > Keyboard and disable Enable Dictation.

Apple describes more privacy settings on its website, such as how to restrict Siri’s access to location data or third-party apps. But in the wake of the scandal, critics say that you shouldn’t blindly trust companies’ promises of data protection – and that the only way to truly protect your privacy is to take matters into your own hands.

KYC is the crime

The Coinbase hack shows how state-mandated surveillance is putting lives at risk.

Published 1 June 2025, 7:36
– By Naomi Brockwell

Last week, Coinbase got hacked.

Hackers demanded a $20 million ransom after breaching a third-party system. They didn’t get passwords or crypto keys. But what they did get will put lives at risk:

  • Names
  • Home addresses
  • Phone numbers
  • Partial Social Security numbers
  • Identity documents
  • Bank info

That’s everything someone needs to impersonate you, blackmail you, or show up at your front door.

This isn’t hypothetical. There’s a growing wave of kidnappings and extortion targeting people with crypto exposure. Criminals are using leaked identity data to find victims and hold them hostage.

Let’s be clear: KYC doesn’t just put your data at risk. It puts people at risk.

Naturally, people are furious at any company that leaks their information.

But here’s the bigger issue:
No system is unhackable.
Every major institution, from the IRS to the State Department, has suffered breaches.
Protecting sensitive data at scale is nearly impossible.

And Coinbase didn’t want to collect this data.
Many companies don’t. It’s a massive liability.
They’re forced to, by law.

A new, dangerous normal

KYC, Know Your Customer, has become just another box to check.

Open a bank account? Upload your ID.
Use a crypto exchange? Add your selfie and utility bill.
Sign up for a payment app? Same thing.

But it wasn’t always this way.

Until the 1970s, you could walk into a bank with cash and open an account. Your financial life was private by default.

That changed with the Bank Secrecy Act of 1970, which required banks to start collecting and reporting customer activity to the government. Still, KYC wasn’t yet formalized. Each bank decided how well they needed to know someone. If you’d been a customer since childhood, or had a family member vouch for you, that was often enough.

Then came the Patriot Act, which turned KYC into law. It required every financial institution to collect, verify, and store identity documents from every customer, not just for large or suspicious transactions, but for basic access to the financial system.

From that point on, privacy wasn’t the default. It was erased.

The real-world cost

Today, everyone is surveilled all the time.
We’ve built an identity dragnet, and people are being hurt because of it.

Criminals use leaked KYC data to find and target people, and it’s not just millionaires. It’s regular people, and sometimes their parents, partners, or even children.

It’s happened in London, Buenos Aires, Dubai, Lagos, Los Angeles, all over the world.
Some are robbed. Some are held for ransom.
Some don’t survive.

These aren’t edge cases. They’re the direct result of forcing companies to collect and store sensitive personal data.

When we force companies to hoard identity data, we guarantee it will eventually fall into the wrong hands.

“There are two types of companies: those that have been hacked, and those that don’t yet know they’ve been hacked” – former Cisco CEO John Chambers

What KYC actually does

KYC turns every financial institution into a surveillance node.
It turns your personal information into a liability.

It doesn’t just increase risk – it creates it.

KYC is part of a global surveillance infrastructure. It feeds into databases governments share and query without your knowledge. It creates chokepoints where access to basic services depends on surrendering your privacy. And it deputizes companies to collect and hold sensitive data they never wanted.

If you’re trying to rob a vault, you go where the gold is.
If you’re trying to target people, you go where the data lives.

KYC creates those vaults: legally mandated, poorly secured, and irresistible to attackers.

Does it even work?

We’re told KYC is necessary to stop terrorism and money laundering.

But the top reasons banks file “suspicious activity reports” are banal, like someone withdrawing “too much” of their own money.

We’re told to accept this surveillance because it might stop a bad actor someday.

In practice, it does more to expose innocent people than to catch criminals.

KYC doesn’t prevent crime.
It creates the conditions for it.

A better path exists

We don’t have to live like this.

Better tools already exist, tools that allow verification without surveillance:

  • Zero-Knowledge Proofs (ZKPs): Prove something (like your age or citizenship) without revealing documents – see the sketch after this list
  • Decentralized Identity (DID): You control what gets shared, and with whom
  • Homomorphic Encryption: Allows platforms to verify encrypted data without ever seeing it
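
To make the first idea concrete, here is a toy Schnorr-style proof of knowledge in Python – a minimal sketch whose parameters (the demo prime, base, and locally generated challenge) are illustrative assumptions, not production choices. The prover convinces a verifier that she knows a secret x behind a public value y, without ever revealing x.

    # Toy Schnorr proof of knowledge: prove you know x where y = g^x mod p,
    # without revealing x. Illustrative only – real systems use vetted
    # parameters and non-interactive transforms.
    import secrets

    p = 2**127 - 1   # demo modulus (a Mersenne prime; NOT production-grade)
    q = p - 1        # exponents are reduced modulo the group order
    g = 3            # demo base

    x = secrets.randbelow(q)   # prover's secret (e.g. an identity key)
    y = pow(g, x, p)           # public value registered with the verifier

    # Prover commits to a random nonce.
    r = secrets.randbelow(q)
    t = pow(g, r, p)

    # Verifier issues a random challenge.
    c = secrets.randbelow(q)

    # Prover responds; s reveals nothing about x without knowing r.
    s = (r + c * x) % q

    # Verifier accepts iff g^s == t * y^c (mod p).
    assert pow(g, s, p) == (t * pow(y, c, p)) % p
    print("verified knowledge of x without learning x")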

But maybe it’s time to question something deeper.
Why is centralized, government-mandated identity collection the foundation of participation in financial life?

This surveillance regime didn’t always exist. It was built.

And just because it’s now common doesn’t mean we should accept it.

We didn’t need it before. We don’t need it now.

It’s time to stop normalizing mass surveillance as a condition for basic financial access.

The system isn’t protecting us.
It’s putting us in danger.

It’s time to say what no one else will

KYC isn’t a necessary evil.
It’s the original sin of financial surveillance.

It’s not a flaw in the system.
It is the system.

And the system needs to go.

Takeaways

  • Check https://HaveIBeenPwned.com to see how much of your identity is already exposed – a sketch of its privacy-preserving password lookup follows this list
  • Say no to services that hoard sensitive data
  • Support better alternatives that treat privacy as a baseline, not an afterthought
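
On the first point, Have I Been Pwned’s companion Pwned Passwords service is itself a small example of verification without surveillance: its range API only ever sees the first five characters of a password’s SHA-1 hash (a k-anonymity design), and the actual match happens on your machine. A minimal sketch using Python’s standard library (the User-Agent string is an arbitrary placeholder):

    # Check a password against the Pwned Passwords range API without
    # revealing it: only the first 5 hex characters of its SHA-1 hash
    # are sent, and the suffix match is done locally.
    import hashlib
    import urllib.request

    def pwned_count(password: str) -> int:
        digest = hashlib.sha1(password.encode()).hexdigest().upper()
        prefix, suffix = digest[:5], digest[5:]
        req = urllib.request.Request(
            f"https://api.pwnedpasswords.com/range/{prefix}",
            headers={"User-Agent": "pwned-check-sketch"},  # placeholder name
        )
        with urllib.request.urlopen(req) as resp:
            body = resp.read().decode()
        # Each response line looks like "SUFFIX:COUNT".
        for line in body.splitlines():
            candidate, _, count = line.partition(":")
            if candidate == suffix:
                return int(count)
        return 0

    print(pwned_count("password123"))  # breached passwords return big counts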

Because safety doesn’t come from handing over more information.

It comes from building systems that never need it in the first place.


Yours in privacy,
Naomi

Naomi Brockwell is a privacy advocate and professional speaker, MC, interviewer, producer and podcaster, specialising in blockchain, cryptocurrency and economics. She runs the NBTV channel on YouTube.

Wallenberg and Nvidia to build “cutting-edge” AI center in Sweden

Published 27 May 2025
– By Editorial Staff
Marcus Wallenberg believes that the AI initiative is absolutely crucial for Swedish industry.

US chip manufacturer Nvidia is establishing a new AI center in Sweden in collaboration with the Wallenberg sphere and several major Swedish companies, including AstraZeneca, Ericsson, Saab and SEB.

The aim of the initiative is to strengthen Sweden’s position in artificial intelligence by building the country’s first AI supercomputer for business. The center will be run by a joint venture formed by the parties involved.

– Investing in cutting-edge AI infrastructure is a crucial step toward accelerating the development and adoption of AI across Swedish industry, said Marcus Wallenberg, chairman of Wallenberg Investments, in a statement, adding:

– We believe this initiative will generate valuable spillover effects – by enabling upskilling, fostering new collaborations, and strengthening the broader national AI ecosystem.

The initiative was announced during a visit to Sweden by Nvidia CEO Jensen Huang. During his visit, he was also awarded an honorary doctorate at Linköping University.

– As electricity powered the industrial age and the Internet fueled the digital age, AI is the engine of the next industrial revolution. Through the visionary initiative of Wallenberg Investments and Sweden’s industry leaders, the country is building its first AI infrastructure – laying the foundation for breakthroughs across science, industry, and society, and securing Sweden’s place at the forefront of the AI era, argues Nvidia’s CEO.

AI-driven drug development

The partners have clear ambitions for the initiative. Pharmaceutical giant AstraZeneca will use the system for AI-driven breakthroughs in drug development, with advanced models and data processing.

Ericsson wants to develop AI models to improve performance, efficiency, and customer experiences, as well as enable new business models.

Saab will invest in AI to accelerate the development of advanced defense capabilities in its leading systems, and SEB will integrate AI for increased productivity, new customer offerings, and long-term competitiveness, with a focus on critical infrastructure.

“The ambition with this initiative is to establish a next generation AI compute infrastructure that is both a production facility and at the same time serves as a reference installation, unlocking new possibilities for AI adoption”, the companies said.

One in two young Brits long for a world without the internet

Published 26 May 2025
– By Editorial Staff
In recent years, numerous reports have warned that social media has a strongly negative impact on young people’s mental health.

A new survey from the British Standards Institution mapping young people’s relationship with social media and digital life reveals that nearly half of young Brits wish they had grown up without the internet.

Of the 1,293 respondents aged 16 to 21, 46% said they would prefer to be young in a world where the internet did not exist. Almost 70% said they felt worse and had lower self-esteem after using social media, with 68% saying their time online had been directly detrimental to their mental health.

A quarter of respondents spent four hours or more a day on social media. Of these, 42% admitted to lying to parents or guardians about their online use. The same share said they had lied about their age online at some point, while 40% used so-called “burner” accounts – hidden or alternative profiles. Some 27% had even pretended to be a completely different person, and just as many had shared their location with strangers online.

The survey was conducted in the aftermath of the coronavirus lockdowns – a period which, according to three-quarters of participants, led to a marked increase in screen time.

Experts don’t believe in “digital curfew”

Against this backdrop, the UK’s technology minister, Peter Kyle, has recently opened the door to mandatory digital “curfews” – blocking certain apps, such as TikTok and Instagram, after a set time in the evening. Although critics have dismissed the proposal as repressive, it enjoys some support among young people: half of those surveyed said they would back a digital curfew after 22:00.

However, several experts, including Rani Govender, policy manager for children’s online safety at the NSPCC, say the proposed restrictions are not enough.

– We need to make clear that a digital curfew alone is not going to protect children from the risks they face online. They will be able to see all these risks at other points of the day and they will still have the same impact.

She added that the focus should instead be on making the online environment safer and less addictive for children and young people, and preventing them from visiting obviously harmful sites and apps.

“Rabbit holes of harmful material”

Andy Burrows, CEO of the Molly Rose Foundation, also highlights the need for legislation to protect young people from harmful content:

– It’s clear that young people are aware of the risks online and, what’s more, they want action from tech companies to protect them.

He points out that algorithms often display content that can quickly lead young people into destructive flows and spirals:

– Algorithms can quickly spiral and take young people down rabbit holes of harmful and distressing material through no fault of their own.

Burrows calls for new laws to force a “safe by design” approach that puts the needs of children and society ahead of corporate profits.

