Thursday, October 9, 2025

Polaris of Enlightenment

“Many misleading claims about Chat Control 2.0”

Mass surveillance

Ylva Johansson chooses to ignore the fact that a mass surveillance proposal requires mass surveillance, Karl Emil Nikka, IT security expert, writes.

Published 28 September 2023
IT security expert Karl Emil Nikka. EU Commissioner Ylva Johansson.
This is an opinion piece. The author is responsible for the views expressed in the article.

One of the topics discussed in last week’s episode of Medierna i P1 was the European Commission’s controversial mass surveillance proposal Chat Control 2.0 and its consequences for journalists. The episode featured EU Commissioner Ylva Johansson, IT and media lawyer Daniel Westman and Anne Lagercrantz, President of the Swedish Publishers Association.

Westman and Lagercrantz were critical of the mass surveillance proposal, partly because of the consequences for the protection of sources. The Swedish Association of Journalists and the Swedish Newspaper Publishers have previously warned about the consequences of the proposal for the same reasons.

Comically, the pre-recorded interview began with Johansson asking if she could call Martina Pierrou, the interviewing journalist, via Signal or WhatsApp instead.

At the time of the interview, Johansson and Pierrou were able to talk via Signal, but if the mass surveillance proposal goes through, that possibility will disappear. In a response to me on X (Twitter), Signal’s CEO announced that they will leave the EU if they are forced to build backdoors into their app.

This is a very wise decision on Signal’s part as such backdoors undermine the safety and security of children and adults around the world. The rest of the world should not have to suffer because we in Europe are unable to stop EU proposals that violate human rights, the Convention on the Rights of the Child and our own EU Charter.

Below is an analysis of all the statements made by Johansson in the interview. The quotes are printed in full. The time codes link directly to the paragraphs in the section where the claims were made.

Incorrect suggestion of a requirement for a court decision

When asked about what the bill means in practice (18:55), Johansson repeated her recurring lie that a court order would be required to scan communications. She explained the practical implications of the proposal with the following sentence.

“To force the companies to make risk assessments, to take measures to ensure that their services are not used for this terrible crime and ultimately to make it possible, by court order, to also allow the scanning of communications to find these abuses.” – Ylva Johansson (2023-09-23)

Pierrou followed up with a remark that the proposal may require scanning without suspicion of crime against any individual (19:24). Ylva Johansson responded as follows.

“No, scanning will take place when there is a risk that a certain service is being used extensively to spread these criminal offenses. Then a court can decide that scanning is permitted and necessary.” – Ylva Johansson (2023-09-23)

The suggestion that a court decision would be required is incorrect. Johansson made the same claim in our debate in Svenska Dagbladet in April this year (the only debate in the Swedish media that Johansson has participated in). I then offered to correct the claim for her, in order to find out whether she knew that her proposal does not require a court decision: the proposal also accepts decisions from administrative authorities. Johansson knew this. Nevertheless, she repeated the lie in the SVT Aktuellt interview (April 2023), in Ekot’s Saturday interview (June 2023) and now again in Medierna i P1.

Omitted consequence

In the answer to the same question, Johansson omitted the most crucial point, namely that backdoors are a prerequisite for the scanning of end-to-end encrypted conversations to be done at all. Once these backdoors are in place, they can be abused and cause data leaks. Other states, such as the US where most of the affected services are based, can use the backdoors to scan for content they are interested in.

The proposal states that service providers may only use their position to scan for child abuse material and grooming attempts. But even if we disregard the likely purpose creep, such a restriction makes no difference in practice. Today, we have technical protections that make our end-to-end encrypted conversations impossible to intercept. The European Commission wants to replace these technical protections with legal restrictions on what the new backdoors may (and may not) be used for.

This naivety is unprecedented. It is incomprehensible to me how the EU can believe that the US would allow American companies to install backdoors that are limited to the EU’s prescribed use. As a thought experiment, we can consider how the EU would react if the US tried to do the same to our companies.

If we take into account the highly likely purpose creep, the situation gets even worse. We only have to go back to 2008 to demonstrate this. At that time, the FRA debate was in full swing and FRA Director General Ingvar Åkesson wrote a debate article in Svenska Dagbladet with the following memorable words.

“FRA cannot spy on domestic phenomena. /…/ Yet the idea is being cultivated that FRA should listen to all Swedes’ phone calls, read their e-mails and text messages. A disgusting idea. How can so many people believe that a democratically elected parliament would wish its people so ill?” – Ingvar Åkesson (2008-06-29)

15 years later, Åkesson can hopefully understand why we thought that a democratically elected parliament could wish its people so ill. Right now, exactly this “disgusting idea” (the Director General’s own choice of words) is being proposed.

Belief in the existence of non-existent technologies

Pierrou then asked how the solution would actually work, pointing out that “according to an opinion from the European Data Protection Board, the technology required by the proposal does not exist today” (19:55).

Johansson responded with a quote that will go down in history.

“I believe that there is. But my bill is technology-neutral and that means that we set standards for what the technology must be able to do and what high standards of integrity the technology must meet.” – Ylva Johansson (2023-09-23)

Here Johansson again shows that she has based her proposal on incorrect assumptions about how the technology works. After being refuted by the world’s experts, she is now forced to resort to opinion statements such as “I believe that there is”.

Whether technology exists (or can exist) is of course not a matter of opinion. It is, always has been, and always will be technically impossible to scan the content of properly end-to-end encrypted conversations.

To smooth over the embarrassment, Johansson pointed out that the bill is technology-neutral. This may sound good, but in this context it means nothing. Setting requirements for what technology must be able to do is only an embarrassment when it is done without first examining what is practically possible.

If service providers of end-to-end encrypted services are to be able to scan the content of conversations, they must build in backdoors. The backdoors allow them to scan the content before it is encrypted and after it has been decrypted. Without backdoors, scanning is and remains technically impossible.
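The point can be made concrete with a minimal sketch (purely illustrative, not any vendor’s actual implementation): in client-side scanning, the scan runs on the plaintext before encryption ever happens, so the end-to-end encryption that follows protects nothing from it.

```python
import hashlib

# Hypothetical database of content hashes the client is ordered to scan for.
BLOCKED_HASHES = {hashlib.sha256(b"example-flagged-content").hexdigest()}

def client_side_scan(plaintext: bytes) -> bool:
    """Return True if the plaintext matches a flagged hash."""
    return hashlib.sha256(plaintext).hexdigest() in BLOCKED_HASHES

def report_to_authority(plaintext: bytes) -> None:
    # Stand-in for forwarding a flagged message to a reporting centre.
    print("flagged for review")

def send_message(plaintext: bytes, encrypt) -> bytes:
    # The scan sees the full plaintext *before* encryption is applied:
    # this hook is the backdoor, regardless of how strong the cipher is.
    if client_side_scan(plaintext):
        report_to_authority(plaintext)
    return encrypt(plaintext)
```

Everything here is hypothetical (the hash set, the reporting function), but the structure shows why “scanning end-to-end encrypted messages” really means scanning before encryption.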

Opinion on mass surveillance in mass surveillance proposals

Pierrou concluded the interview by asking what Johansson thought of the proposal being described as a mass surveillance proposal (20:19). Johansson answered as follows.

“Yes, that is a completely wrong picture. It is not about anyone monitoring at all.” – Ylva Johansson (2023-09-23)

A reasonable definition of mass surveillance is that the masses are monitored (as opposed to targeted surveillance of selected suspects). As Pierrou highlighted in a previous question, the Chat Control 2.0 scanning does not require any suspicion of crime against individuals. Service providers are to monitor what the masses write and say on their platforms, and report suspicious conversations to the new EU centre to be set up in The Hague.

The proposal is thus, by definition, a mass surveillance proposal.

However, Johansson chose to ignore the fact that a mass surveillance proposal requires mass surveillance. Instead, she tried to dismiss the criticism with the following argument and a pat on her own back (20:34).

“It is obvious that when you are a bit of a pioneer, as I am in this case, you have to expect that you will also be questioned.” – Ylva Johansson (2023-09-23)

Unfortunately, I must crush Commissioner Johansson’s self-image and state that she has never been questioned for being a pioneer. Johansson is not even a pioneer in the field, something she herself should know.

It has barely been 30 years since the Stasi was disbanded.

Karl Emil Nikka
This article is republished from nikkasystems.com under CC BY 4.0.

About the author

Karl Emil Nikka is the founder of Nikka Systems, Security Profile of the Year 2021, an author, and an IT security expert.

TNT is truly independent!

We don’t have a billionaire owner, and our unique reader-funded model keeps us free from political or corporate influence. This means we can fearlessly report the facts and shine a light on the misdeeds of those in power.

Consider a donation to keep our independent journalism running…

Telenor faces lawsuit over human rights abuses in Myanmar

Mass surveillance

Published yesterday 11:00
– By Editorial Staff
Telenor's communications director calls the demand a "PR stunt" and argues that the matter has already been handled by police and the judicial system.

Over a thousand people may have been persecuted, tortured, arrested or killed when Norwegian telecommunications company Telenor handed over sensitive customer data to the military junta in Myanmar. Now victims and relatives are threatening to sue and demanding millions in damages.

On Monday, Telenor’s management received a notice of lawsuit where the compensation claim is motivated by the telecom company illegally sharing sensitive personal data with Myanmar’s military junta.

“We ask for a response on whether the basis for the claim is disputed as soon as possible, but no later than within two weeks”, the letter stated.

Behind the claim is the Dutch organization Centre for Research on Multinational Corporations (Somo), together with several Myanmar civil society organizations.

After the military coup in February 2021, the junta forced telecom operators like Telenor to hand over sensitive information about their customers. The information was then used to identify, track and arrest regime critics and activists.

Politician executed

Among those affected is a prominent politician and Telenor customer, and after the company handed over the data, the man was arrested, sentenced to death and executed in prison.

— We know that the potential group of victims is more than 1,000 people, says Joseph Wilde-Ramsing, director and lead negotiator at Somo to Norwegian business newspaper Dagens Næringsliv.

He emphasizes that some of the victims have been executed, while others remain under arrest.

— We are in contact with their family members and demand financial compensation from Telenor for what they have been subjected to.

Claim worth millions

Lawyer Jan Magne Langseth, partner at Norwegian law firm Simonsen Vogt Wiig, represents Somo in the case. He states that the claim will be extensive.

— We have not yet set an exact figure, but there is little doubt it will amount to several hundred million kroner, he says.

Both individuals and organizations working for the democracy movement in Myanmar are demanding compensation.

— We have the number lists that were handed over to the junta, but we don’t have all the names of the subscribers yet, says Langseth.

The notice states that Telenor systematically handed over personal data to the military junta, well aware that this would lead to human rights violations, including persecution, arbitrary arrests and the elimination of opponents.

“This can be documented with extensive evidence”, the document states.

Telenor: “No good choices”

Telenor’s communications director David Fidjeland dismisses the matter and claims that the issue has already been resolved.

“The tragic developments in Myanmar have been the subject of several investigations within the police and judiciary without leading anywhere. Telenor Myanmar found itself in a terrible and tragic situation and unfortunately had no good choices”, he writes in an email and continues:

“That journalists from Bangkok and Kuala Lumpur to Marienlyst [Telenor’s headquarters in Norway] received this notice long before we ourselves received it unfortunately says something about where Somo has its focus. This unfortunately seems more like a PR stunt in a tragic matter than a serious communication”.

Sold operations in 2022

Telenor received a mobile license in Myanmar in 2014. In a short time, the company became a major mobile operator with over 18 million customers in the country. After the military coup in February 2021, when the previous government was overthrown, Telenor chose to sell its mobile operations in Myanmar to the Lebanese M1 Group – including customer data. The sale was completed in March 2022.

According to local media, M1 Group’s local partner has close ties to the military junta.

Lawyer Langseth addresses the question of whether a refusal to hand over data would have affected local employees.

— The employees at Telenor Myanmar did not need to be involved. It could have been controlled from Norway or other countries in the group. Witnesses have told us that there was internal resistance among several of the key local employees at Telenor Myanmar against handing over data to the junta, he says.

Microsoft stops Israel’s use of technology for mass surveillance of Palestinians

The genocide in Gaza

Published 27 September 2025
– By Editorial Staff
Microsoft's research and development division in Matam Business Park in Haifa, Israel.

The tech giant has shut down the Israeli military’s access to cloud services and AI tools following revelations about a secret spy project that collected millions of phone calls from Palestinian civilians.

Microsoft has shut down the Israeli military’s access to technology that was used to power an extensive surveillance system that collected millions of Palestinian civilian phone calls daily from Gaza and the West Bank, The Guardian can reveal.

Microsoft informed Israeli officials last week that Unit 8200, the military’s elite intelligence agency, had violated the company’s terms of service by storing the enormous amount of surveillance data on its Azure cloud platform, according to sources with insight into the situation.

The decision to cut off Unit 8200’s ability to use parts of the technology is a direct result of an investigation that The Guardian published last month. It revealed how Azure was used to store and process the enormous amount of Palestinian communications in a mass surveillance program.

Secret project after summit meeting

In a joint investigation with the Israeli-Palestinian publication +972 Magazine and the Hebrew-language newspaper Local Call, The Guardian revealed how Microsoft and Unit 8200 had worked together on a plan to move large volumes of sensitive intelligence material to Azure.

The project began after a 2021 meeting between Microsoft CEO Satya Nadella and the unit’s then-commander Yossi Sariel.

In response to the investigation, Microsoft ordered an urgent external review to examine its relationship with Unit 8200. The initial results have now led to the company cutting off the unit’s access to certain of its cloud storage and AI services.

Equipped with Azure’s virtually unlimited storage capacity and computing power, Unit 8200 had built an indiscriminate new system that allowed its intelligence officers to collect, replay, and analyze the content of mobile calls from an entire population.

The project was so extensive that, according to sources from Unit 8200 – which is equivalent to the US National Security Agency – an internal motto emerged that captured its scope and ambition: “One million calls per hour.”

According to several sources, the enormous archive of intercepted calls – amounting to as much as 8,000 terabytes of data – was held in a Microsoft data center in the Netherlands. Within days of The Guardian publishing the investigation, Unit 8200 appears to have quickly moved surveillance data out of the country.

Data moved to Amazon

According to sources with knowledge of the enormous data transfer out of the EU country, it occurred in early August. Intelligence sources said that Unit 8200 planned to transfer data to Amazon Web Services cloud platform. Neither the Israel Defense Forces (IDF) nor Amazon responded to a request for comment.

Microsoft’s extraordinary decision to terminate the spy agency’s access to key technology was taken amid pressure from employees and investors over its work for the Israeli military and the role its technology has played in the nearly two-year-long offensive in Gaza.

A UN commission of inquiry recently concluded that Israel had committed genocide in Gaza, an allegation denied by Israel but supported by many experts in international law.

The Guardian’s joint investigation led to protests at Microsoft’s US headquarters and one of its European data centers, as well as demands from a worker-led campaign group, No Azure for Apartheid, to end all ties to the Israeli military.

Clear message from Microsoft

On Thursday, Microsoft Vice Chairman and President Brad Smith informed staff about the decision. In an email that The Guardian has seen, he said the company had “terminated and deactivated a set of services to a unit within Israel’s Ministry of Defense,” including cloud storage and AI services.

Smith wrote: “We do not provide technology to facilitate mass surveillance of civilians. We have applied this principle in all countries around the world, and we have insisted on it repeatedly for more than two decades.”

The decision brings an abrupt end to a three-year period during which the spy agency operated its surveillance program using Microsoft’s technology.

Unit 8200 used its own extensive surveillance capabilities to intercept and collect the calls. The spy agency then used a customized and segregated area within the Azure platform, enabling data to be retained for longer periods and analyzed with AI-driven techniques.

Used for bombing targets in Gaza

Although the initial focus of the surveillance system was the West Bank, where an estimated 3 million Palestinians live under Israeli military occupation, intelligence sources said the cloud-based storage platform had been used in the Gaza offensive to facilitate the preparation of deadly airstrikes.

The revelations highlighted how Israel has relied on services and infrastructure from major US tech companies to support its bombardment of Gaza, which has killed more than 65,000 Palestinians, mostly civilians, and created a deep humanitarian crisis and famine catastrophe.

According to a document seen by The Guardian, a senior Microsoft executive told Israel’s Ministry of Defense last week:

“While our review is ongoing, we have at this point identified evidence supporting parts of The Guardian’s reporting.”

The executive told Israeli officials that Microsoft “is not in the business of facilitating mass surveillance of civilians” and informed them that it would “deactivate” access to services supporting Unit 8200’s surveillance project and shut down its use of certain AI products.

First time since the war began

The termination is the first known case of a US tech company withdrawing services provided to the Israeli military since the beginning of its war in Gaza.

The decision has not affected Microsoft’s broader commercial relationship with the IDF, which is a long-standing client and will retain access to other services. The termination will raise questions within Israel about the policy of keeping sensitive military data in a third-party cloud operated abroad.

Last month’s revelations about Unit 8200’s use of Microsoft technology followed an earlier investigation by The Guardian and its partners about the broader relationship between the company and the Israeli military.

That story, published in January and based on leaked files, showed how the IDF’s reliance on Azure and its AI systems increased dramatically in the most intensive phase of its Gaza campaign.

Following that report, Microsoft launched its first review of how the IDF uses its services. It said in May that it had “found no evidence to date” that the military had failed to comply with its terms of service, or used Azure and its AI technology “to target or harm people” in Gaza.

But The Guardian’s investigation with +972 and Local Call published in August, which revealed that the cloud-based surveillance project had been used to investigate and identify bombing targets in Gaza, led to the company reassessing its conclusions.

The revelations caused alarm among senior Microsoft executives and raised concerns that some of its Israel-based employees may not have been fully transparent about their knowledge of how Unit 8200 used Azure when questioned as part of the review.

The company said its executives, including Nadella, were not aware that Unit 8200 planned to use, or ultimately used, Azure to store the content of intercepted Palestinian calls.

Microsoft then launched its second and more targeted review, which was overseen by lawyers at the US firm Covington & Burling. In his note to staff, Smith said the investigation did not have access to any customer data but its findings were based on a review of internal Microsoft documents, emails and messages between personnel.

“I want to note our appreciation for The Guardian’s reporting”, Smith wrote, noting that it had illuminated “information we could not access given our customer confidentiality commitments.” He added: “Our review is ongoing.”

OpenAI monitors ChatGPT chats – can report users to police

Mass surveillance

Published 20 September 2025
– By Editorial Staff
What has been perceived as private AI conversations can now end up with police.

OpenAI has quietly begun monitoring users’ ChatGPT conversations and can report content to law enforcement authorities.

The revelation comes after incidents where AI chatbots have been linked to self-harm behavior, delusions, hospitalizations and suicide – what experts call “AI psychosis”.

In a blog post, the company acknowledges that they systematically scan users’ messages. When the system detects users planning to harm others, the conversations are directed to a review team that can suspend accounts and contact police.

“If human reviewers determine that a case involves an imminent threat of serious physical harm to others, we may refer it to law enforcement”, writes OpenAI.

The new policy means in practice that millions of users have their conversations scanned and that what many perceived as private conversations with an AI are now subject to systematic surveillance where content can be forwarded to authorities.

Tech journalist Noor Al-Sibai at Futurism points out that OpenAI’s statement is “short and vague” and that the company does not specify exactly what types of conversations could lead to police reports.

“It remains unclear which exact types of chats could result in user conversations being flagged for human review, much less getting referred to police”, she writes.

Security problems ignored

Ironically, ChatGPT has proven vulnerable to “jailbreaks” where users have been able to trick the system into giving instructions for building neurotoxins or step-by-step guides for suicide. Instead of addressing these fundamental security flaws, OpenAI is now choosing extensive surveillance of users.

The surveillance stands in sharp contrast to the tech company’s actions in the lawsuit against the New York Times, where the company “steadfastly rejected” demands to hand over ChatGPT logs citing user privacy.

“It’s also kind of bizarre that OpenAI even mentions privacy, given that it admitted in the same post that it’s monitoring user chats and potentially sharing them with the fuzz”, Al-Sibai notes.

May be forced to hand over chats

OpenAI CEO Sam Altman has recently acknowledged that ChatGPT does not offer the same confidentiality as conversations with real therapists or lawyers, and due to the lawsuit, the company may be forced to hand over user chats to various courts.

“OpenAI is stuck between a rock and a hard place”, writes Al-Sibai. The company is trying to handle the PR disaster from users who have suffered mental health crises, but since it is “clearly having trouble controlling its own tech”, it falls back on “heavy-handed moderation that flies in the face of its own CEO’s promises”.

The tech company states that it is “currently not” reporting self-harm cases to police, but the wording suggests that even this could change. It has also not responded to requests to clarify what criteria govern the surveillance.

Wifi signals can identify people with 95 percent accuracy

Mass surveillance

Published 21 August 2025
– By Editorial Staff

Italian researchers have developed a technique that can track and identify individuals by analyzing how wifi signals reflect off human bodies. The method works even when people change clothes and can be used for surveillance.

Researchers at La Sapienza University in Rome have developed a new method for identifying and tracking people using wifi signals. The technique, which the researchers call “WhoFi”, can recognize people with an accuracy rate of up to 95 percent, reports Sweclockers.

The method is based on the fact that wifi signals reflect and refract in different ways when they hit human bodies. By analyzing these reflection patterns using machine learning and artificial neural networks, researchers can create unique “fingerprints” for each individual.
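Conceptually, this kind of re-identification reduces to comparing signature vectors: a neural network turns the signal data into an embedding, and matching is a similarity search. The sketch below is a simplified illustration (not the researchers’ actual WhoFi pipeline); the network is stubbed out with hypothetical hand-written vectors, and matching is plain cosine similarity.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical "fingerprints": in a system like WhoFi these would be
# neural-network embeddings of how a body reflects the wifi signal.
gallery = {
    "person_A": [0.9, 0.1, 0.3],
    "person_B": [0.1, 0.8, 0.5],
}

def identify(probe, threshold=0.9):
    """Return the best-matching identity above the threshold, else None."""
    best_id, best_score = None, threshold
    for person, signature in gallery.items():
        score = cosine_similarity(probe, signature)
        if score > best_score:
            best_id, best_score = person, score
    return best_id
```

Because the embedding is derived from body shape rather than from a device or clothing, a match survives a change of outfit, which is exactly what makes the technique attractive for surveillance and troubling for privacy.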

Works despite clothing changes

Experiments show that these digital fingerprints are stable enough to identify people even when they change clothes or carry backpacks. The average recognition rate is 88 percent, which researchers say is comparable to other automatic identification methods.

The research results were published in mid-July and describe how the technology could be used in surveillance contexts. According to the researchers, WhoFi can solve the problem of re-identifying people who were first observed via a surveillance camera in one location and then need to be found in footage from cameras in other locations.

Can be used for surveillance

The technology opens up new possibilities in security surveillance, but simultaneously raises questions about privacy and personal security. The fact that wifi networks, which are ubiquitous in today’s society, can be used to track people without their knowledge represents a new dimension of digital surveillance.

The researchers present their discovery as a breakthrough in the field of automatic person identification, but do not address the ethical implications that the technology may have for individuals’ privacy.

Our independent journalism needs your support!
Consider a donation.

You can donate any amount of your choosing, one-time or monthly.
We appreciate all of your donations to keep us alive and running.

Don’t miss another article!

Sign up for our newsletter today!

Get uncensored news – free from industry interests and political correctness – from the Polaris of Enlightenment, every week.