Polaris of Enlightenment

Sweden’s government endorses EU mass surveillance plan

Mass surveillance

Published 6 April 2023
– By Editorial Staff
Gunnar Strömmer, Sweden's Minister of Justice.

The EU “chat control” law will force operators of chat and messaging services to monitor all users and scan their messages for suspected child pornography. Despite heavy criticism of the proposal, which has been likened to how totalitarian states such as China control their citizens, the Swedish government has announced its full support.

– This is an important EU initiative against a very serious crime, said Justice Minister Gunnar Strömmer.

Strömmer argues that it is reasonable to monitor all citizens’ digital communications because “online child sexual abuse must be combated and the abuse of digital grooming services effectively prevented”. He further argues that the proposal contains “significant trade-offs between privacy and the importance of effective law enforcement”.

However, not everyone is quite as enthusiastic. IT security specialist Karl Emil Nikka, for example, has called the proposal “the biggest mass surveillance on this side of the Great Wall of China”.

– Every single European citizen’s every call, every text message, every picture they send will be monitored at all times. This is not reasonable, said Niels Paarup-Petersen, the Center Party’s spokesperson on digitalization.

Another critic is Jan Jonsson, CEO of Mullvad VPN, who believes that this kind of totalitarian bill is a major invasion of people’s privacy and can only be compared to totalitarian China.

“When the slippery slope of what is being mass monitored shifts, when we can only guess who is monitoring our communications and with what agenda, then we will change our behavior accordingly, and the democratic functions of a society are eroded”, he notes.

He also points out that advocates have only talked about the importance of protecting children – not about the effects of this type of totalitarian surveillance on society and how their fundamental rights are affected.

“These kinds of AI systems are very blunt and will filter out family vacation photos from the beach, video calls with online doctors, intimate text messages between partners, and conversations from dating apps. Journalists who protect their anonymous sources, for example, should be worried”.

But the biggest risk, he says, is that people will start to self-censor even when communicating with friends out of fear, as they become aware that what they say or write is being monitored.

Other critics have expressed concern that while the mass surveillance of their own citizens is now being launched on the grounds of fighting pedophiles, the technology may soon be used to map dissidents and critics of various regimes.

Chat Control (officially the Regulation of the European Parliament and of the Council laying down rules to prevent and combat sexual abuse of children) is a legislative proposal at EU level that would require operators providing chat and messaging services to also scan all communications for child pornography and submit the findings to the police.

The proposal is justified by the need to protect children more effectively against sexual abuse, including in digital environments. Critics argue that the proposal is both ineffective and invasive of privacy, as it effectively means that citizens' private emails and messages are searched for sexual abuse material.

Critics also argue that the same technology could be used in the future to register political dissidents or people expressing certain specific views. The proposal has therefore been described as incompatible with human rights, with parallels drawn in particular to China's extensive mass surveillance of the population.


Microsoft stops Israel’s use of technology for mass surveillance of Palestinians

The genocide in Gaza

Published 27 September 2025
– By Editorial Staff
Microsoft's research and development division in Matam Business Park in Haifa, Israel.

The tech giant has shut down the Israeli military’s access to cloud services and AI tools following revelations about a secret spy project that collected millions of phone calls from Palestinian civilians.

Microsoft has shut down the Israeli military’s access to technology that was used to power an extensive surveillance system that collected millions of Palestinian civilian phone calls daily from Gaza and the West Bank, The Guardian can reveal.

Microsoft informed Israeli officials last week that Unit 8200, the military’s elite intelligence agency, had violated the company’s terms of service by storing the enormous amount of surveillance data on its Azure cloud platform, according to sources with insight into the situation.

The decision to cut off Unit 8200’s ability to use parts of the technology is a direct result of an investigation that The Guardian published last month. It revealed how Azure was used to store and process the enormous amount of Palestinian communications in a mass surveillance program.

Secret project after summit meeting

In a joint investigation with the Israeli-Palestinian publication +972 Magazine and the Hebrew-language newspaper Local Call, The Guardian revealed how Microsoft and Unit 8200 had worked together on a plan to move large volumes of sensitive intelligence material to Azure.

The project began after a 2021 meeting between Microsoft CEO Satya Nadella and the unit’s then-commander Yossi Sariel.

In response to the investigation, Microsoft ordered an urgent external review to examine its relationship with Unit 8200. The initial results have now led to the company cutting off the unit’s access to certain of its cloud storage and AI services.

Equipped with Azure’s virtually unlimited storage capacity and computing power, Unit 8200 had built an indiscriminate new system that allowed its intelligence officers to collect, replay, and analyze the content of mobile calls from an entire population.

The project was so extensive that, according to sources from Unit 8200 – which is equivalent to the US National Security Agency – an internal motto emerged that captured its scope and ambition: “One million calls per hour.”

According to several sources, the enormous archive of intercepted calls – amounting to as much as 8,000 terabytes of data – was held in a Microsoft data center in the Netherlands. Within days of The Guardian publishing the investigation, Unit 8200 appears to have quickly moved surveillance data out of the country.

Data moved to Amazon

According to sources with knowledge of the enormous data transfer out of the EU country, it occurred in early August. Intelligence sources said that Unit 8200 planned to transfer data to Amazon Web Services cloud platform. Neither the Israel Defense Forces (IDF) nor Amazon responded to a request for comment.

Microsoft’s extraordinary decision to terminate the spy agency’s access to key technology was taken amid pressure from employees and investors over its work for the Israeli military and the role its technology has played in the nearly two-year-long offensive in Gaza.

A UN commission of inquiry recently concluded that Israel had committed genocide in Gaza, an allegation denied by Israel but supported by many experts in international law.

The Guardian’s joint investigation led to protests at Microsoft’s US headquarters and one of its European data centers, as well as demands from a worker-led campaign group, No Azure for Apartheid, to end all ties to the Israeli military.

Clear message from Microsoft

On Thursday, Microsoft Vice Chairman and President Brad Smith informed staff about the decision. In an email that The Guardian has seen, he said the company had “terminated and deactivated a set of services to a unit within Israel’s Ministry of Defense,” including cloud storage and AI services.

Smith wrote: “We do not provide technology to facilitate mass surveillance of civilians. We have applied this principle in all countries around the world, and we have insisted on it repeatedly for more than two decades.”

The decision brings an abrupt end to a three-year period during which the spy agency operated its surveillance program using Microsoft’s technology.

Unit 8200 used its own extensive surveillance capabilities to intercept and collect the calls. The spy agency then used a customized and segregated area within the Azure platform, enabling data to be retained for longer periods and analyzed with AI-driven techniques.

Used for bombing targets in Gaza

Although the initial focus of the surveillance system was the West Bank, where an estimated 3 million Palestinians live under Israeli military occupation, intelligence sources said the cloud-based storage platform had been used in the Gaza offensive to facilitate the preparation of deadly airstrikes.

The revelations highlighted how Israel has relied on services and infrastructure from major US tech companies to support its bombardment of Gaza, which has killed more than 65,000 Palestinians, mostly civilians, and created a deep humanitarian crisis and famine catastrophe.

According to a document seen by The Guardian, a senior Microsoft executive told Israel’s Ministry of Defense last week:

“While our review is ongoing, we have at this point identified evidence supporting parts of The Guardian’s reporting.”

The executive told Israeli officials that Microsoft “is not in the business of facilitating mass surveillance of civilians” and informed them that it would “deactivate” access to services supporting Unit 8200’s surveillance project and shut down its use of certain AI products.

First time since the war began

The termination is the first known case of a US tech company withdrawing services provided to the Israeli military since the beginning of its war in Gaza.

The decision has not affected Microsoft’s broader commercial relationship with the IDF, which is a long-standing client and will retain access to other services. The termination will raise questions within Israel about the policy of keeping sensitive military data in a third-party cloud operated abroad.

Last month’s revelations about Unit 8200’s use of Microsoft technology followed an earlier investigation by The Guardian and its partners about the broader relationship between the company and the Israeli military.

That story, published in January and based on leaked files, showed how the IDF’s reliance on Azure and its AI systems increased dramatically in the most intensive phase of its Gaza campaign.

Following that report, Microsoft launched its first review of how the IDF uses its services. It said in May that it had “found no evidence to date” that the military had failed to comply with its terms of service, or used Azure and its AI technology “to target or harm people” in Gaza.

But The Guardian’s investigation with +972 and Local Call published in August, which revealed that the cloud-based surveillance project had been used to investigate and identify bombing targets in Gaza, led to the company reassessing its conclusions.

The revelations caused alarm among senior Microsoft executives and raised concerns that some of its Israel-based employees may not have been fully transparent about their knowledge of how Unit 8200 used Azure when questioned as part of the review.

The company said its executives, including Nadella, were not aware that Unit 8200 planned to use, or ultimately used, Azure to store the content of intercepted Palestinian calls.

Microsoft then launched its second and more targeted review, which was overseen by lawyers at the US firm Covington & Burling. In his note to staff, Smith said the investigation did not have access to any customer data but its findings were based on a review of internal Microsoft documents, emails and messages between personnel.

“I want to note our appreciation for The Guardian’s reporting”, Smith wrote, noting that it had illuminated “information we could not access given our customer confidentiality commitments”. He added: “Our review is ongoing.”

OpenAI monitors ChatGPT chats – can report users to police

Mass surveillance

Published 20 September 2025
– By Editorial Staff
What has been perceived as private AI conversations can now end up with police.

OpenAI has quietly begun monitoring users’ ChatGPT conversations and can report content to law enforcement authorities.

The revelation comes after incidents where AI chatbots have been linked to self-harm behavior, delusions, hospitalizations and suicide – what experts call “AI psychosis”.

In a blog post, the company acknowledges that it systematically scans users’ messages. When the system detects users planning to harm others, the conversations are routed to a review team that can suspend accounts and contact police.

“If human reviewers determine that a case involves an imminent threat of serious physical harm to others, we may refer it to law enforcement”, writes OpenAI.

The new policy means in practice that millions of users have their conversations scanned and that what many perceived as private conversations with an AI are now subject to systematic surveillance where content can be forwarded to authorities.

Tech journalist Noor Al-Sibai at Futurism points out that OpenAI’s statement is “short and vague” and that the company does not specify exactly what types of conversations could lead to police reports.

“It remains unclear which exact types of chats could result in user conversations being flagged for human review, much less getting referred to police”, she writes.

Security problems ignored

Ironically, ChatGPT has proven vulnerable to “jailbreaks” where users have been able to trick the system into giving instructions for building neurotoxins or step-by-step guides for suicide. Instead of addressing these fundamental security flaws, OpenAI is now choosing extensive surveillance of users.

The surveillance stands in sharp contrast to the tech company’s stance in the lawsuit brought by the New York Times, where the company “steadfastly rejected” demands to hand over ChatGPT logs, citing user privacy.

“It’s also kind of bizarre that OpenAI even mentions privacy, given that it admitted in the same post that it’s monitoring user chats and potentially sharing them with the fuzz”, Al-Sibai notes.

May be forced to hand over chats

OpenAI CEO Sam Altman has recently acknowledged that ChatGPT does not offer the same confidentiality as conversations with real therapists or lawyers, and due to the lawsuit, the company may be forced to hand over user chats to various courts.

“OpenAI is stuck between a rock and a hard place”, writes Al-Sibai. The company is trying to handle the PR disaster from users who have suffered mental health crises, but since it is “clearly having trouble controlling its own tech”, it falls back on “heavy-handed moderation that flies in the face of its own CEO’s promises”.

The tech company says it is “currently not” reporting self-harm cases to police, but the wording suggests that even this could change. It has also not responded to requests to clarify what criteria trigger surveillance.

Wifi signals can identify people with 95 percent accuracy

Mass surveillance

Published 21 August 2025
– By Editorial Staff

Italian researchers have developed a technique that can track and identify individuals by analyzing how wifi signals reflect off human bodies. The method works even when people change clothes and can be used for surveillance.

Researchers at Sapienza University of Rome have developed a new method for identifying and tracking people using wifi signals. The technique, which the researchers call “WhoFi”, can recognize people with an accuracy rate of up to 95 percent, reports Sweclockers.

The method is based on the fact that wifi signals reflect and refract in different ways when they hit human bodies. By analyzing these reflection patterns using machine learning and artificial neural networks, researchers can create unique “fingerprints” for each individual.
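The re-identification idea can be illustrated with a toy sketch. Note that this is not the WhoFi system: the actual research uses deep neural networks on wifi channel state information, while the code below merely fakes each person's "reflection signature" as a noisy synthetic vector and matches it with a simple nearest-centroid rule.

```python
# Toy illustration of wifi-fingerprint re-identification (NOT the WhoFi model).
# Assumption: each person produces a characteristic pattern in wifi channel
# measurements; here those patterns are simulated as noisy random vectors.
import numpy as np

rng = np.random.default_rng(0)
n_people, dim = 5, 64

# One underlying "reflection signature" per person (unknown to the matcher).
signatures = rng.normal(size=(n_people, dim))

def observe(person, noise=0.3):
    """A noisy wifi measurement of one person, e.g. captured at another site."""
    return signatures[person] + rng.normal(scale=noise, size=dim)

# Enrollment: average a few observations per person into a stored fingerprint.
fingerprints = np.stack([
    np.mean([observe(p) for _ in range(10)], axis=0) for p in range(n_people)
])

def identify(measurement):
    """Nearest-centroid match: return the enrolled identity closest in L2 norm."""
    return int(np.argmin(np.linalg.norm(fingerprints - measurement, axis=1)))

# Re-identify fresh observations and measure accuracy.
trials = [(p, identify(observe(p))) for p in range(n_people) for _ in range(20)]
correct = sum(truth == guess for truth, guess in trials)
print(f"accuracy: {correct / len(trials):.0%}")
```

The sketch shows why the approach survives clothing changes: the match is made on a stored signal-level fingerprint, not on appearance, so any measurement that stays close to the enrolled vector is re-identified.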

Works despite clothing changes

Experiments show that these digital fingerprints are stable enough to identify people even when they change clothes or carry backpacks. The average recognition rate is 88 percent, which researchers say is comparable to other automatic identification methods.

The research results were published in mid-July and describe how the technology could be used in surveillance contexts. According to the researchers, WhoFi can solve the problem of re-identifying people who were first observed via a surveillance camera in one location and then need to be found in footage from cameras in other locations.

Can be used for surveillance

The technology opens up new possibilities in security surveillance, but simultaneously raises questions about privacy and personal security. The fact that wifi networks, which are ubiquitous in today’s society, can be used to track people without their knowledge represents a new dimension of digital surveillance.

The researchers present their discovery as a breakthrough in the field of automatic person identification, but do not address the ethical implications that the technology may have for individuals’ privacy.

Facebook’s insidious surveillance: VPN app spied on users

Mass surveillance

Published 9 August 2025
– By Editorial Staff

In 2013, Facebook acquired the Israeli company Onavo for approximately 120 million dollars. Onavo was marketed as a VPN app that would protect users’ data, reduce mobile usage, and secure online activities. Over 33 million people downloaded the app believing it would strengthen their privacy.

In reality, Onavo gave Facebook complete insight into users’ phones – including which apps were used, how long they were open, and which websites were visited.

According to court documents and regulatory authorities, Facebook used this data to identify trends and map potential competitors. By analyzing user patterns in apps like Houseparty, YouTube, Amazon, and Snapchat, the company could determine which platforms posed a threat to its market dominance.

When Snapchat’s popularity began to explode in 2016, Facebook encountered a problem: encrypted traffic prevented insight into users’ behavior, reports Business Today. To circumvent this, Facebook launched an internal operation called “Project Ghostbusters”.

Facebook engineers developed specially adapted code based on Onavo’s infrastructure. The app installed a so-called root certificate on users’ phones – consent was hidden in legal documentation – which enabled Facebook to create fake certificates that mimicked Snapchat’s servers.

This made it possible to decrypt and analyze Snapchat’s traffic internally. The purpose was to use the information as a basis for strategic decisions, product development, or potential acquisitions.

Snapchat said no – Facebook copied instead

Based on data from Onavo, Facebook offered to buy Snapchat for 3 billion dollars. When Snapchat CEO Evan Spiegel declined, Facebook responded by launching Instagram Stories – a direct copy of Snapchat’s most popular feature. This became a decisive move in the competition between the two platforms.

In 2018, Apple removed Onavo from the App Store, citing that the app violated the company’s data protection rules. Facebook responded by launching a new app: Facebook Research, internally called Project Atlas, which offered similar surveillance functions. This time, the company paid users – some as young as 13 – up to 20 dollars per month to install the app.

When Apple discovered this, the company acted forcefully and revoked Facebook’s enterprise development certificates. This meant that all internal iOS apps were temporarily stopped – one of Apple’s most far-reaching measures ever.

In 2020, the Australian Competition and Consumer Commission (ACCC) sued Facebook, now called Meta, for misleading users with false promises about privacy. In 2023, Meta’s subsidiaries were fined a total of 20 million Australian dollars (approximately €11 million) for misleading behavior.

Why it still matters

Business Insider emphasizes that the Onavo story is not just about a misleading app. It also illustrates how one of the world’s most powerful tech companies built a surveillance system disguised as a privacy tool.

The fact that Facebook used the data to map competitors, copy features, and maintain control over the social media market – and also targeted underage users for data collection – raises additional ethical questions.

“Even a decade later, Onavo remains a case study in how ‘data is power’ and how far companies are willing to go to get it”, the publication concludes.
