
“Many misleading claims about Chat Control 2.0”

Mass surveillance

Ylva Johansson chooses to ignore the fact that a mass surveillance proposal requires mass surveillance, writes IT security expert Karl Emil Nikka.

Published 28 September 2023
IT security expert Karl Emil Nikka. EU Commissioner Ylva Johansson.
6 minute read
This is an opinion piece. The author is responsible for the views expressed in the article.

One of the topics discussed in last week’s episode of Medierna i P1 was the European Commission’s controversial mass surveillance proposal Chat Control 2.0 and its consequences for journalists. The episode featured EU Commissioner Ylva Johansson, IT and media lawyer Daniel Westman, and Anne Lagercrantz, President of the Swedish Publishers Association.

Westman and Lagercrantz were critical of the mass surveillance proposal, partly because of the consequences for the protection of sources. The Swedish Association of Journalists and the Swedish Newspaper Publishers have previously warned about the consequences of the proposal for the same reasons.

Ironically, the pre-recorded interview began with Johansson asking whether she could call Martina Pierrou, the interviewing journalist, via Signal or WhatsApp instead.

At the time of the interview, Johansson and Pierrou were able to talk via Signal, but if the mass surveillance proposal goes through, that possibility will disappear. In a response to me on X (Twitter), Signal’s CEO announced that they will leave the EU if they are forced to build backdoors into their app.

This is a very wise decision on Signal’s part as such backdoors undermine the safety and security of children and adults around the world. The rest of the world should not have to suffer because we in Europe are unable to stop EU proposals that violate human rights, the Convention on the Rights of the Child and our own EU Charter.

Below is an analysis of all the statements Johansson made in the interview. The quotes are reproduced in full, and the time codes refer to the points in the broadcast where the claims were made.

Incorrect suggestion of a requirement for a court decision

When asked what the bill means in practice (18:55), Johansson repeated her recurring lie that a court order would be required before communications could be scanned. She explained the practical implications of the proposal with the following sentence.

“To force the companies to make risk assessments, to take measures to ensure that their services are not used for this terrible crime and ultimately to make it possible, by court order, to also allow the scanning of communications to find these abuses.” – Ylva Johansson (2023-09-23)

Pierrou followed up by noting that the proposal may require scanning without any suspicion of crime against an individual (19:24). Johansson responded as follows.

“No, scanning will take place when there is a risk that a certain service is being used extensively to spread these criminal offenses. Then a court can decide that scanning is permitted and necessary.” – Ylva Johansson (2023-09-23)

The suggestion that a court decision would be required is incorrect. Johansson made the same claim in the debate against me in Svenska Dagbladet in April this year (the only debate in Swedish media that she has taken part in). I then offered to correct her claim myself, in order to find out whether she knew that her proposal does not require a court decision. The proposal also accepts decisions from administrative authorities. Johansson knew this. Nevertheless, she repeated the lie in the SVT Aktuellt interview (April 2023), in Ekot’s Saturday interview (June 2023), and now again in Medierna i P1.

Omitted consequence

In her answer to the same question, Johansson omitted the most crucial point: backdoors are a prerequisite for any scanning of end-to-end encrypted conversations at all. Once such backdoors are in place, they can be abused and cause data leaks. Other states, such as the US, where most of the affected services are based, can use the backdoors to scan for content they are interested in.

The proposal states that service providers may only use their position to scan for child abuse material and grooming attempts. Even if we ignore the likely purpose creep, that restriction changes little. Today, we have technical protections that make our end-to-end encrypted conversations impossible to intercept. The European Commission wants to replace these technical protections with legal restrictions on what the new backdoors may (and may not) be used for.

This naivety is unprecedented. It is incomprehensible to me how the EU can believe that the US would allow American companies to install backdoors that are limited to the EU’s prescribed use. As a thought experiment, we can consider how the EU would react if the US tried to do the same to our companies.

If we take into account the highly likely purpose creep, the situation gets even worse. We only have to go back to 2008 to demonstrate this. At that time, the FRA debate was in full swing and FRA Director General Ingvar Åkesson wrote a debate article in Svenska Dagbladet with the following memorable words.

“FRA cannot spy on domestic phenomena. /…/ Yet the idea is being cultivated that FRA should listen to all Swedes’ phone calls, read their e-mails and text messages. A disgusting idea. How can so many people believe that a democratically elected parliament would wish its people so ill?” – Ingvar Åkesson (2008-06-29)

Fifteen years later, Åkesson can hopefully understand why we thought that a democratically elected parliament could wish its people so ill. Right now, exactly this “disgusting idea” (the Director General’s choice of words) is being proposed.

Belief in the existence of non-existent technologies

Pierrou then asked how the solution would actually work, pointing out that “according to an opinion from the European Data Protection Board, the technology required by the proposal does not exist today” (19:55).

Johansson responded with a quote that will go down in history.

“I believe that there is. But my bill is technology-neutral and that means that we set standards for what the technology must be able to do and what high standards of integrity the technology must meet.” – Ylva Johansson (2023-09-23)

Here Johansson again shows that she has based her proposal on incorrect assumptions about how technology works. Having been refuted by the world’s experts, she is now reduced to arguments of opinion such as “I believe it exists”.

Whether technology exists (or can exist) is of course not a matter of opinion. It is, always has been, and always will be technically impossible to scan the content of properly end-to-end encrypted conversations.
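As a minimal illustration of why this is so, here is a sketch in Python using the cryptography package’s Fernet primitive. It is not Signal’s actual protocol; assume the two endpoints have already agreed on a key through an exchange the service provider never sees. Everything the provider relays is then opaque ciphertext.

```python
# Minimal sketch, not Signal's actual protocol: the endpoints share a key
# that the relaying service provider never learns.
from cryptography.fernet import Fernet

endpoint_key = Fernet.generate_key()   # known only to sender and recipient
channel = Fernet(endpoint_key)

# The sender encrypts on their own device.
ciphertext = channel.encrypt(b"Tip from a confidential source")

# This is all a service provider (or anyone else in the middle) ever sees.
print(ciphertext)                      # opaque bytes, nothing to scan

# Only the recipient, who holds endpoint_key, can recover the content.
print(channel.decrypt(ciphertext))
```

Without the key, no amount of “standard setting” changes what the provider can read; that is the sense in which scanning properly end-to-end encrypted content is technically impossible.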

To smooth over the embarrassment, Johansson pointed out that the bill is technology-neutral. That may sound good, but in this context it says nothing. Setting requirements for what technology must be able to do is only embarrassing when it is done without first examining what is practically possible.

If service providers of end-to-end encrypted services are to be able to scan the content of conversations, they must build in backdoors. The backdoors allow them to scan the content before it is encrypted and after it has been decrypted. Without backdoors, scanning is and remains technically impossible.
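A hedged sketch of what such a backdoor amounts to in practice: a client-side hook that inspects the plaintext on the user’s device before encryption. The hash matching below is a simplified stand-in for whatever detection technology would actually be mandated, and the names and watchlist are hypothetical. The point is that once the hook exists, what it looks for and where it reports are policy choices, not technical limits.

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Simplified stand-in for perceptual hashing or classifier-based detection."""
    return hashlib.sha256(data).hexdigest()

# What goes into this set is decided by whoever controls the hook --
# today abuse material, tomorrow whatever purpose creep adds.
watchlist = {fingerprint(b"example of targeted content")}

def send_message(plaintext: bytes, encrypt, report) -> bytes:
    """The 'backdoor': plaintext is inspected before end-to-end encryption."""
    if fingerprint(plaintext) in watchlist:
        report(plaintext)          # forwarded to the provider / EU centre
    return encrypt(plaintext)      # only now is the message encrypted

# Usage sketch with placeholder callbacks (not a real cipher):
send_message(b"hello", encrypt=lambda m: m[::-1], report=print)
```

The restriction that the hook may “only” be used against child abuse material lives entirely in the contents of the watchlist and in who receives the reports; neither the code nor the cryptography can enforce it, which is exactly the purpose-creep risk described above.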

Opinion on mass surveillance in mass surveillance proposals

Pierrou concluded the interview by asking what Johansson thought of the proposal being described as a mass surveillance proposal (20:19). Johansson answered as follows.

“Yes, that is a completely wrong picture. It is not about anyone monitoring at all.” – Ylva Johansson (2023-09-23)

A reasonable definition of mass surveillance is that the masses are monitored (as opposed to targeted surveillance of selected suspects). As Pierrou highlighted in a previous question, Chat Control 2.0 scanning does not require any suspicion of crime against individuals. Service providers are to monitor what the masses write and say on their platforms, and report suspicious conversations to the new EU centre to be set up in The Hague.

The proposal is thus, by definition, a mass surveillance proposal.

However, Johansson chose to ignore the fact that a mass surveillance proposal requires mass surveillance. Instead, she tried to dismiss the criticism with the following argument and a pat on her own back (20:34).

“It is obvious that when you are a bit of a pioneer, as I am in this case, you have to expect that you will also be questioned.” – Ylva Johansson (2023-09-23)

Unfortunately, I must crush Commissioner Johansson’s self-image and state that she has never been questioned for being a pioneer. Johansson is not even a pioneer in the field, something she herself should know.

It has barely been 30 years since the Stasi was disbanded.

 

Karl Emil Nikka

 


This article is republished from nikkasystems.com under CC BY 4.0.

About the author

Karl Emil Nikka is the founder of Nikka Systems, Security Profile of the Year 2021, an author, and an IT security expert.


Amazon acquires AI company that records everything you say

Mass surveillance

Published 27 July 2025
– By Editorial Staff
3 minute read

Tech giant Amazon has acquired the Swedish AI company Bee, which develops wearable devices that continuously record users’ conversations. The deal signals Amazon’s ambitions to expand within AI-driven hardware beyond its voice-controlled home assistants.

The acquisition was confirmed by Bee founder Maria de Lourdes Zollo in a LinkedIn post, while Amazon told tech site TechCrunch that the deal has not yet been completed. Bee employees have been offered positions within Amazon.

AI wristband that listens constantly

Bee, which raised €6.4 million in venture capital last year, offers both a standalone wristband similar to a Fitbit and an Apple Watch app. The product costs €46 (approximately $50), plus a monthly subscription of €17 ($18).

The device records everything it hears – unless the user manually turns it off – with the goal of listening to conversations to create reminders and to-do lists. According to the company’s website, they want “everyone to have access to a personal, ambient intelligence that feels less like a tool and more like a trusted companion.”
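As a rough illustration of the kind of pipeline an always-listening assistant needs (a hypothetical sketch, not Bee’s actual implementation): ambient audio is transcribed, and action items are then pulled out of the transcript. The privacy-relevant point is that a transcript of everything the device hears is the raw input.

```python
# Hypothetical sketch of to-do extraction from an ambient transcript.
import re

def extract_todos(transcript: str) -> list[str]:
    """Naive keyword-based action-item extraction from a transcript."""
    todos = []
    for sentence in re.split(r"[.!?]", transcript):
        sentence = sentence.strip()
        if re.search(r"\b(remind me to|i need to|don't forget to)\b", sentence, re.I):
            todos.append(sentence)
    return todos

print(extract_todos("Nice weather today. Remind me to call the dentist tomorrow!"))
# ['Remind me to call the dentist tomorrow']
```

Even if only the extracted to-dos are kept, the full transcript has to exist somewhere, at least transiently, for the extraction to run.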

Bee has previously expressed plans to create a “cloud phone” that mirrors the user’s phone and gives the device access to accounts and notifications, which would enable reminders about events or sending messages.

Competitors struggle in the market

Other companies like Rabbit and Humane AI have tried to create similar AI-driven wearable devices but so far without major success. However, Bee’s device is significantly more affordable than competitors’ – the Humane AI Pin cost €458 – making it more accessible to curious consumers who don’t want to make a large financial investment.

The acquisition marks Amazon’s interest in wearable AI devices, a different direction from the company’s voice-controlled home assistants like Echo speakers. Meanwhile, ChatGPT creator OpenAI is working on its own AI hardware, while Meta is integrating its AI into smart glasses and Apple is rumored to be working on the same thing.

Privacy concerns remain

Products that continuously record the environment carry significant security and privacy risks. Different companies have varying policies for how voice recordings are processed, stored, and used for AI training.

In its current privacy policy, Bee says users can delete their data at any time and that audio recordings are not saved, stored, or used for AI training. However, the app does store data that the AI learns about the user, which is necessary for the assistant function.

Bee has previously indicated plans to only record voices from people who have verbally given consent. The company is also working on a feature that lets users define boundaries – both based on topic and location – that automatically pause the device’s learning. They also plan to build AI processing directly into the device, which generally involves fewer privacy risks than cloud-based data processing.

However, it’s unclear whether these policies will change when Bee is integrated into Amazon. Amazon has previously had mixed results when it comes to handling user data from customers’ devices.

The company has shared video clips from people’s Ring security cameras with law enforcement without the owners’ consent or a court order. Ring also reached a settlement with the Federal Trade Commission in 2023 after allegations that employees and contractors had broad, unrestricted access to customers’ video recordings.

Now you’re forced to pay for Facebook or be tracked by Meta

Mass surveillance

Published 22 July 2025
– By Editorial Staff
2 minute read

Social media giant Meta is now implementing its criticized “pay or be tracked” model for Swedish users. Starting Thursday, Facebook users in Sweden and some other EU countries are forced to choose between paying €7 per month for an ad-free experience and accepting extensive data collection. Meanwhile, the company faces daily fines from the EU if the model isn’t changed.

Swedish Facebook users have been greeted since Thursday morning with a new choice when logging into the platform. A message informs them that “you must make a choice to use Facebook” and explains that users “have a legal right to choose whether you want to consent to us processing your personal data to show you ads.”

Screenshot from Facebook.

The choice is between two alternatives: either pay €7 monthly for an ad-free Facebook account where personal data isn’t processed for advertising, or consent to Meta collecting and using personal data for targeted ads.

As a third alternative, “less personalized ads” is offered, which means Meta uses somewhat less personal data for advertising purposes.

Screenshot from Facebook.

Background in EU legislation

The introduction of the payment model comes after the European Commission in March launched investigations of Meta along with Apple and Google for suspected violations of the DMA (Digital Markets Act). For Meta’s part, the investigation specifically concerns the new payment model.

In April, Meta was fined €200 million under the DMA because the payment model was not considered to meet the legal requirements. Meta has appealed the decision.

According to reports from Reuters at the end of June, the social media giant now risks daily penalties if the company doesn’t make necessary changes to its payment model to comply with EU regulations.

The new model represents Meta’s attempt to adapt to stricter European data legislation while the company tries to maintain its advertising revenue through the alternative payment route.

RFK Jr wants health trackers on every American within four years

Mass surveillance

Published 26 June 2025
– By Editorial Staff
"We think that wearables are a key to the MAHA agenda", Kennedy claims.
3 minute read

US Secretary of Health and Human Services Robert F. Kennedy Jr. has presented a plan for all Americans to wear body-monitoring technology that tracks their health in real time.

The measure is described as a crucial part of the national initiative MAHA – Make America Healthy Again – which aims to reverse America’s widespread public health crisis using modern technology.

During a hearing before the House Energy and Commerce Committee on Tuesday, Kennedy revealed that the Department of Health and Human Services (HHS) will launch one of its most extensive campaigns ever – to get Americans to wear so-called wearables, body-worn technology that collects health data around the clock.

– We’re about to launch the biggest advertising campaign in HHS history to encourage Americans to use wearables, Kennedy said.

Products mentioned in the initiative include FitBit, Oura Ring, and Apple Watch – popular devices that can measure heart rate, movement, sleep, and in some cases even blood glucose.

– It’s a way people can take control over their own health. They can take responsibility. They can see, as you know, what food is doing to their glucose levels, their heart rates, and a number of other metrics, as they eat it, he explained in a statement also published on the X platform.

“Key to the MAHA agenda”

Kennedy emphasized that he sees the technology as a crucial part of his vision:

– We think that wearables are a key to the MAHA agenda of making America healthy again and my vision is that every American is wearing a wearable in four years.

The Secretary, who belongs to the influential Kennedy family, often emphasizes individual responsibility for health but also links the issue to national security. During his hearing, he described America’s obesity epidemic, which now affects about 40 percent of the population, as a threat to military readiness.

“Reduce global metabolic suffering”

One of the leading advocates for this type of technology is also President Trump’s nominee for Surgeon General, Dr. Casey Means. She is a co-founder of Levels, a company that develops and sells continuous glucose monitors (CGMs), sensors attached directly to the skin that send blood glucose values to an app in real time.

Means claims in a blog post that “these small plastic discs” can “reduce global metabolic suffering” and provide much-needed help to the “93.2 percent of people” suffering from metabolic issues.

The food industry is also affected by MAHA. Kennedy recently revealed that Starbucks will make changes to its menu in line with the agenda – even though the company already avoids several common additives such as artificial colors, flavors, and high-fructose corn syrup.

Earlier this year, Kennedy implemented a ban on artificial colors in U.S. food production, one of his first major interventions as Secretary. Critics have questioned both the methods and the priorities of the MAHA policy, but Kennedy sees it as a first step toward a healthier and more responsible nation.

Concerns about mass surveillance

The use of wearable health technology has raised questions about users’ right to privacy. Most health trackers collect large amounts of sensitive information, including heart rate, sleep patterns, movement, and blood glucose levels, which is stored in apps connected to the companies behind the devices.

Critics argue that there is a lack of clear transparency in how this data is used, shared, or sold, and that state-encouraged collection of health data risks blurring the line between voluntary health monitoring and systematic surveillance.

While Kennedy emphasizes voluntariness, some analysts warn that large-scale campaigns and technology adaptations by major companies may create indirect pressure to participate.

As more institutions, such as employers, schools, and businesses, adopt health tracking, there is a risk that those who opt out will be seen as deviant, be given worse terms, or be excluded from parts of society.

AI surveillance in Swedish workplaces sparks outrage

Mass surveillance

Published 4 June 2025
– By Editorial Staff
In practice, it is possible to analyze not only employees’ productivity, but also their facial expressions, voices, and emotions.
2 minute read

The rapid development of artificial intelligence has not only brought advantages – it has also created new opportunities for mass surveillance, both in society at large and in the workplace.

Even today, unscrupulous employers use AI to monitor and map every second of their employees’ working day in real time – a development that former Social Democratic politician Kari Parman warns against and calls for decisive action to combat.

In an opinion piece in the Stampen-owned newspaper GP, he argues that AI-based surveillance of employees poses a threat to staff privacy and calls on the trade union movement to take action against this development.

Parman paints a bleak picture of how AI is used to monitor employees in Swedish workplaces, where technology analyzes everything from voices and facial expressions to productivity and movement patterns – often without the employees’ knowledge or consent.

“It’s a totalitarian control system – in capitalist packaging”, he writes, continuing:

“There is something deeply disturbing about the idea that algorithms will analyze our voices, our facial expressions, our productivity – second by second – while we work”.

“It’s about power and control”

According to Parman, there is a significant risk that people in digital capitalism will be reduced to mere data points, giving employers disproportionate power over their employees.

He sees AI surveillance as more than just a technical issue and warns that this development undermines the Swedish model, which is based on balance and respect between employers and employees.

“It’s about power. About control. About squeezing every last ounce of ‘efficiency’ out of people as if we were batteries”.

If trade unions fail to act, Parman believes, they risk becoming irrelevant in a working life where algorithms are taking over more and more of the decision-making.

To stop this trend, he lists several concrete demands. He wants to see a ban on AI-based individual surveillance in the workplace and urges unions to introduce conditions in collective agreements to review and approve new technology.

Kari Parman previously represented the Social Democrats in Gnosjö. Photo: Kari Parman/FB

“Reduced to an algorithm’s margin of error”

He also calls for training for safety representatives and members, as well as political regulations from the state.

“No algorithm should have the right to analyze our performance, movements, or feelings”, he declares.

Parman emphasizes that AI surveillance not only threatens privacy but also creates a “psychological iron cage” where employees constantly feel watched, blurring the line between work and private life.

At the end of the article, the Social Democrat calls on the trade union movement to take responsibility and lead the resistance against the misuse of AI in the workplace.

He sees it as a crucial issue for the future of working life and human dignity at work.

“If we don’t stand up now, we will be alone when it is our turn to be reduced to an algorithm’s margin of error”, he concludes.
