
Microsoft stops Israel’s use of technology for mass surveillance of Palestinians

The genocide in Gaza

Published 27 September 2025
– By Editorial Staff
Microsoft's research and development division in Matam Business Park in Haifa, Israel.

The tech giant has shut down the Israeli military’s access to cloud services and AI tools following revelations about a secret spy project that collected millions of phone calls from Palestinian civilians.

Microsoft has shut down the Israeli military’s access to technology that was used to power an extensive surveillance system that collected millions of Palestinian civilian phone calls daily from Gaza and the West Bank, The Guardian can reveal.

Microsoft informed Israeli officials last week that Unit 8200, the military’s elite intelligence agency, had violated the company’s terms of service by storing enormous amounts of surveillance data on its Azure cloud platform, according to sources with knowledge of the situation.

The decision to cut off Unit 8200’s ability to use parts of the technology is a direct result of an investigation that The Guardian published last month, which revealed how Azure was used to store and process vast quantities of Palestinian communications in a mass surveillance program.

Secret project after summit meeting

In a joint investigation with the Israeli-Palestinian publication +972 Magazine and the Hebrew-language newspaper Local Call, The Guardian revealed how Microsoft and Unit 8200 had worked together on a plan to move large volumes of sensitive intelligence material to Azure.

The project began after a 2021 meeting between Microsoft CEO Satya Nadella and the unit’s then-commander Yossi Sariel.

In response to the investigation, Microsoft ordered an urgent external review of its relationship with Unit 8200. Its initial findings have now led the company to cut off the unit’s access to some of its cloud storage and AI services.

Equipped with Azure’s virtually unlimited storage capacity and computing power, Unit 8200 had built an indiscriminate new system that allowed its intelligence officers to collect, replay, and analyze the content of mobile calls from an entire population.

The project was so extensive that, according to sources from Unit 8200 – which is equivalent to the US National Security Agency – an internal motto emerged that captured its scope and ambition: “One million calls per hour.”

According to several sources, the enormous archive of intercepted calls – amounting to as much as 8,000 terabytes of data – was held in a Microsoft data center in the Netherlands. Within days of The Guardian publishing the investigation, Unit 8200 appears to have moved the surveillance data out of the country.

Data moved to Amazon

According to sources with knowledge of the enormous data transfer out of the EU country, it occurred in early August. Intelligence sources said that Unit 8200 planned to transfer the data to Amazon Web Services’ cloud platform. Neither the Israel Defense Forces (IDF) nor Amazon responded to requests for comment.

Microsoft’s extraordinary decision to terminate the spy agency’s access to key technology was taken amid pressure from employees and investors over its work for the Israeli military and the role its technology has played in the nearly two-year-long offensive in Gaza.

A UN commission of inquiry recently concluded that Israel had committed genocide in Gaza, an allegation denied by Israel but supported by many experts in international law.

The Guardian’s joint investigation led to protests at Microsoft’s US headquarters and one of its European data centers, as well as demands from a worker-led campaign group, No Azure for Apartheid, to end all ties to the Israeli military.

Clear message from Microsoft

On Thursday, Microsoft Vice Chairman and President Brad Smith informed staff about the decision. In an email that The Guardian has seen, he said the company had “terminated and deactivated a set of services to a unit within Israel’s Ministry of Defense,” including cloud storage and AI services.

Smith wrote: “We do not provide technology to facilitate mass surveillance of civilians. We have applied this principle in all countries around the world, and we have insisted on it repeatedly for more than two decades.”

The decision brings an abrupt end to a three-year period during which the spy agency operated its surveillance program using Microsoft’s technology.

Unit 8200 used its own extensive surveillance capabilities to intercept and collect the calls. The spy agency then used a customized and segregated area within the Azure platform, enabling data to be retained for longer periods and analyzed with AI-driven techniques.

Used for bombing targets in Gaza

Although the initial focus of the surveillance system was the West Bank, where an estimated 3 million Palestinians live under Israeli military occupation, intelligence sources said the cloud-based storage platform had been used in the Gaza offensive to facilitate the preparation of deadly airstrikes.

The revelations highlighted how Israel has relied on services and infrastructure from major US tech companies to support its bombardment of Gaza, which has killed more than 65,000 Palestinians, mostly civilians, and created a deep humanitarian crisis and famine catastrophe.

According to a document seen by The Guardian, a senior Microsoft executive told Israel’s Ministry of Defense last week:

“While our review is ongoing, we have at this point identified evidence supporting parts of The Guardian’s reporting.”

The executive told Israeli officials that Microsoft “is not in the business of facilitating mass surveillance of civilians” and informed them that it would “deactivate” access to services supporting Unit 8200’s surveillance project and shut down its use of certain AI products.

First time since the war began

The termination is the first known case of a US tech company withdrawing services provided to the Israeli military since the beginning of its war in Gaza.

The decision has not affected Microsoft’s broader commercial relationship with the IDF, which is a long-standing client and will retain access to other services. The termination will raise questions within Israel about the policy of keeping sensitive military data in a third-party cloud operated abroad.

Last month’s revelations about Unit 8200’s use of Microsoft technology followed an earlier investigation by The Guardian and its partners about the broader relationship between the company and the Israeli military.

That story, published in January and based on leaked files, showed how the IDF’s reliance on Azure and its AI systems increased dramatically in the most intensive phase of its Gaza campaign.

Following that report, Microsoft launched its first review of how the IDF uses its services. It said in May that it had “found no evidence to date” that the military had failed to comply with its terms of service, or used Azure and its AI technology “to target or harm people” in Gaza.

But The Guardian’s investigation with +972 and Local Call published in August, which revealed that the cloud-based surveillance project had been used to investigate and identify bombing targets in Gaza, led to the company reassessing its conclusions.

The revelations caused alarm among senior Microsoft executives and raised concerns that some of its Israel-based employees may not have been fully transparent about their knowledge of how Unit 8200 used Azure when questioned as part of the review.

The company said its executives, including Nadella, were not aware that Unit 8200 planned to use, or ultimately used, Azure to store the content of intercepted Palestinian calls.

Microsoft then launched its second and more targeted review, which was overseen by lawyers at the US firm Covington & Burling. In his note to staff, Smith said the investigation did not have access to any customer data but its findings were based on a review of internal Microsoft documents, emails and messages between personnel.

“I want to note our appreciation for The Guardian’s reporting,” Smith wrote, noting that it had illuminated “information we could not access given our customer confidentiality commitments”. He added: “Our review is ongoing.”


OpenAI monitors ChatGPT chats – can report users to police

Mass surveillance

Published 20 September 2025
– By Editorial Staff
What users perceived as private AI conversations can now end up with the police.

OpenAI has quietly begun monitoring users’ ChatGPT conversations and can report content to law enforcement authorities.

The revelation comes after incidents where AI chatbots have been linked to self-harm behavior, delusions, hospitalizations and suicide – what experts call “AI psychosis”.

In a blog post, the company acknowledges that it systematically scans users’ messages. When the system detects users planning to harm others, the conversations are routed to a review team that can suspend accounts and contact the police.

“If human reviewers determine that a case involves an imminent threat of serious physical harm to others, we may refer it to law enforcement”, writes OpenAI.

In practice, the new policy means that millions of users have their conversations scanned: what many perceived as private exchanges with an AI are now subject to systematic monitoring, and the content can be forwarded to the authorities.
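OpenAI has not published how its internal scanning works. Its public moderation endpoint, however, illustrates the general pattern such a pipeline follows: each message is scored against a set of harm categories, and flagged messages are escalated for human review. A minimal sketch in Python, assuming the publicly documented `omni-moderation-latest` model; the `escalate_to_review` handler is an invented stand-in for a review queue:

```python
# Illustrative only: OpenAI's internal scanning is not public. This sketch
# uses the company's public moderation endpoint to show the general pattern.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def escalate_to_review(message: str, categories: dict) -> None:
    """Hypothetical handler standing in for a human review queue."""
    print("Flagged for human review:", categories)

def scan_message(message: str) -> None:
    result = client.moderations.create(
        model="omni-moderation-latest",
        input=message,
    ).results[0]
    if result.flagged:
        # Keep only the categories the model marked as violations.
        hits = {k: v for k, v in result.categories.model_dump().items() if v}
        escalate_to_review(message, hits)

scan_message("example user message")
```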

Tech journalist Noor Al-Sibai at Futurism points out that OpenAI’s statement is “short and vague” and that the company does not specify exactly what types of conversations could lead to police reports.

“It remains unclear which exact types of chats could result in user conversations being flagged for human review, much less getting referred to police”, she writes.

Security problems ignored

Ironically, ChatGPT has proven vulnerable to “jailbreaks” where users have been able to trick the system into giving instructions for building neurotoxins or step-by-step guides for suicide. Instead of addressing these fundamental security flaws, OpenAI is now choosing extensive surveillance of users.

The surveillance stands in sharp contrast to the tech company’s stance in the lawsuit brought by the New York Times, where the company “steadfastly rejected” demands to hand over ChatGPT logs, citing user privacy.

“It’s also kind of bizarre that OpenAI even mentions privacy, given that it admitted in the same post that it’s monitoring user chats and potentially sharing them with the fuzz”, Al-Sibai notes.

May be forced to hand over chats

OpenAI CEO Sam Altman has recently acknowledged that ChatGPT does not offer the same confidentiality as conversations with real therapists or lawyers, and due to the lawsuit, the company may be forced to hand over user chats to various courts.

“OpenAI is stuck between a rock and a hard place”, writes Al-Sibai. The company is trying to contain the PR disaster around users who have suffered mental health crises, but since it is “clearly having trouble controlling its own tech”, it falls back on “heavy-handed moderation that flies in the face of its own CEO’s promises”.

The tech company says it is “currently not” referring self-harm cases to police, but the wording suggests that even this could change. The company has also not responded to requests to clarify what criteria are used for the surveillance.

Wifi signals can identify people with 95 percent accuracy

Mass surveillance

Published 21 August 2025
– By Editorial Staff

Italian researchers have developed a technique that can track and identify individuals by analyzing how wifi signals reflect off human bodies. The method works even when people change clothes and can be used for surveillance.

Researchers at La Sapienza University in Rome have developed a new method for identifying and tracking people using wifi signals. The technique, which the researchers call “WhoFi”, can recognize people with an accuracy rate of up to 95 percent, reports Sweclockers.

The method is based on the fact that wifi signals reflect and refract in different ways when they hit human bodies. By analyzing these reflection patterns using machine learning and artificial neural networks, researchers can create unique “fingerprints” for each individual.
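The researchers’ exact architecture is not reproduced here, but the underlying re-identification idea – encode each measurement as an embedding vector and match people by similarity – can be sketched briefly. A hypothetical PyTorch example, with invented dimensions and random tensors standing in for real channel measurements:

```python
# Illustrative sketch of signal-based re-identification, not the WhoFi code.
# A small network maps a wifi channel measurement to an embedding; people are
# matched by cosine similarity between embeddings, as in camera re-ID systems.
import torch
import torch.nn as nn
import torch.nn.functional as F

N_SUBCARRIERS = 114  # assumed size of one channel-state measurement

encoder = nn.Sequential(
    nn.Linear(N_SUBCARRIERS, 256), nn.ReLU(),
    nn.Linear(256, 128),  # 128-dimensional "fingerprint" embedding
)

def fingerprint(measurement: torch.Tensor) -> torch.Tensor:
    # Normalize so that a dot product equals cosine similarity.
    return F.normalize(encoder(measurement), dim=-1)

# Random tensors stand in for real measurements of two people.
person_a = fingerprint(torch.randn(N_SUBCARRIERS))
person_b = fingerprint(torch.randn(N_SUBCARRIERS))

# Similarity near 1.0 would indicate the same person.
print(torch.dot(person_a, person_b).item())
```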

Works despite clothing changes

Experiments show that these digital fingerprints are stable enough to identify people even when they change clothes or carry backpacks. The average recognition rate is 88 percent, which researchers say is comparable to other automatic identification methods.

The research results were published in mid-July and describe how the technology could be used in surveillance contexts. According to the researchers, WhoFi can solve the problem of re-identifying people who were first observed via a surveillance camera in one location and then need to be found in footage from cameras in other locations.

Can be used for surveillance

The technology opens up new possibilities in security surveillance, but simultaneously raises questions about privacy and personal security. The fact that wifi networks, which are ubiquitous in today’s society, can be used to track people without their knowledge represents a new dimension of digital surveillance.

The researchers present their discovery as a breakthrough in the field of automatic person identification, but do not address the ethical implications that the technology may have for individuals’ privacy.

Facebook’s insidious surveillance: VPN app spied on users

Mass surveillance

Published 9 August 2025
– By Editorial Staff

In 2013, Facebook acquired the Israeli company Onavo for approximately 120 million dollars. Onavo was marketed as a VPN app that would protect users’ data, reduce mobile usage, and secure online activities. Over 33 million people downloaded the app believing it would strengthen their privacy.

In reality, Onavo gave Facebook complete insight into users’ phones – including which apps were used, how long they were open, and which websites were visited.

According to court documents and regulatory authorities, Facebook used this data to identify trends and map potential competitors. By analyzing user patterns in apps like Houseparty, YouTube, Amazon, and Snapchat, the company could determine which platforms posed a threat to its market dominance.

When Snapchat’s popularity began to explode in 2016, Facebook encountered a problem: encrypted traffic prevented insight into users’ behavior, reports Business Today. To circumvent this, Facebook launched an internal operation called “Project Ghostbusters”.

Facebook engineers developed custom code based on Onavo’s infrastructure. The app installed a so-called root certificate on users’ phones – consent was buried in the legal documentation – which enabled Facebook to create fake certificates that mimicked Snapchat’s servers.

This made it possible to decrypt and analyze Snapchat’s traffic internally. The purpose was to use the information as a basis for strategic decisions, product development, or potential acquisitions.
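For context, the standard defense against precisely this kind of interception is certificate pinning: an app refuses any certificate chain, even one the operating system trusts via an installed root, whose fingerprint does not match a value baked into the app. A minimal Python sketch; the pinned hash is a placeholder, not a real value:

```python
# Sketch of certificate pinning, the usual defense against root-certificate
# interception of the kind described above. Not any real app's actual code.
import hashlib
import socket
import ssl

PINNED_SHA256 = "placeholder-hash"  # hash of the server's genuine certificate

def connect_pinned(host: str, port: int = 443) -> None:
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port)) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert(binary_form=True)  # DER-encoded leaf cert
            digest = hashlib.sha256(cert).hexdigest()
            if digest != PINNED_SHA256:
                # An installed root CA can mint a chain the OS trusts, but it
                # cannot reproduce the genuine server certificate's hash.
                raise ssl.SSLError(f"certificate pin mismatch for {host}")
```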

Snapchat said no – Facebook copied instead

Based on data from Onavo, Facebook offered to buy Snapchat for 3 billion dollars. When Snapchat CEO Evan Spiegel declined, Facebook responded by launching Instagram Stories – a direct copy of Snapchat’s most popular feature. This became a decisive move in the competition between the two platforms.

In 2018, Apple removed Onavo from the App Store, citing that the app violated the company’s data protection rules. Facebook responded by launching a new app: Facebook Research, internally called Project Atlas, which offered similar surveillance functions. This time, the company paid users – some as young as 13 – up to 20 dollars per month to install the app.

When Apple discovered this, the company acted forcefully and revoked Facebook’s enterprise developer certificates. This meant that all of Facebook’s internal iOS apps temporarily stopped working – one of Apple’s most far-reaching measures ever.

In 2020, the Australian Competition and Consumer Commission (ACCC) sued Facebook, now called Meta, for misleading users with false promises about privacy. In 2023, Meta’s subsidiaries were fined a total of 20 million Australian dollars (approximately €11 million) for misleading behavior.

Why it still matters

Business Insider emphasizes that the Onavo story is not just about a misleading app. It also illustrates how one of the world’s most powerful tech companies built a surveillance system disguised as a privacy tool.

The fact that Facebook used the data to map competitors, copy features, and maintain control over the social media market – and also targeted underage users for data collection – raises additional ethical questions.

“Even a decade later, Onavo remains a case study in how ‘data is power’ and how far companies are willing to go to get it”, the publication concludes.

Amazon acquires AI company that records everything you say

Mass surveillance

Published 27 July 2025
– By Editorial Staff

Tech giant Amazon has acquired the Swedish AI company Bee, which develops wearable devices that continuously record users’ conversations. The deal signals Amazon’s ambitions to expand within AI-driven hardware beyond its voice-controlled home assistants.

The acquisition was confirmed by Bee founder Maria de Lourdes Zollo in a LinkedIn post, while Amazon told tech site TechCrunch that the deal has not yet been completed. Bee employees have been offered positions within Amazon.

AI wristband that listens constantly

Bee, which raised €6.4 million in venture capital last year, offers both a Fitbit-like standalone wristband and an Apple Watch app. The wristband costs €46 (approximately $50) plus a monthly subscription of €17 ($18).

The device records everything it hears – unless the user manually turns it off – with the goal of listening to conversations to create reminders and to-do lists. According to the company’s website, they want “everyone to have access to a personal, ambient intelligence that feels less like a tool and more like a trusted companion.”
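Bee has not disclosed how its reminder extraction works, but the generic pipeline for this kind of device – transcribe the audio, then pull action items out of the transcript – can be sketched with the open-source Whisper model and a simple keyword heuristic. The file name and cue phrases below are illustrative assumptions:

```python
# Illustrative pipeline sketch, not Bee's implementation: transcribe recorded
# audio with the open-source Whisper model, then extract likely to-do items.
import re
import whisper  # pip install openai-whisper

# Assumed cue phrases that often introduce an action item in conversation.
ACTION_CUES = re.compile(r"\b(remind me|i need to|don't forget|we should)\b", re.I)

def extract_todos(audio_path: str) -> list[str]:
    model = whisper.load_model("base")
    text = model.transcribe(audio_path)["text"]
    # Keep sentences containing an action cue as candidate to-do items.
    return [s.strip() for s in re.split(r"[.!?]", text) if ACTION_CUES.search(s)]

print(extract_todos("recording.wav"))
```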

Bee has previously expressed plans to create a “cloud phone” that mirrors the user’s phone and gives the device access to accounts and notifications, which would enable reminders about events or sending messages.

Competitors struggle in the market

Other companies like Rabbit and Humane AI have tried to create similar AI-driven wearable devices but so far without major success. However, Bee’s device is significantly more affordable than competitors’ – the Humane AI Pin cost €458 – making it more accessible to curious consumers who don’t want to make a large financial investment.

The acquisition marks Amazon’s interest in wearable AI devices, a different direction from the company’s voice-controlled home assistants like Echo speakers. Meanwhile, ChatGPT creator OpenAI is working on its own AI hardware, while Meta is integrating its AI into smart glasses and Apple is rumored to be working on the same thing.

Privacy concerns remain

Products that continuously record the environment carry significant security and privacy risks. Different companies have varying policies for how voice recordings are processed, stored, and used for AI training.

In its current privacy policy, Bee says users can delete their data at any time and that audio recordings are not saved, stored, or used for AI training. However, the app does store data that the AI learns about the user, which is necessary for the assistant function.

Bee has previously indicated plans to only record voices from people who have verbally given consent. The company is also working on a feature that lets users define boundaries – both based on topic and location – that automatically pause the device’s learning. They also plan to build AI processing directly into the device, which generally involves fewer privacy risks than cloud-based data processing.

However, it’s unclear whether these policies will change when Bee is integrated into Amazon. Amazon has previously had mixed results when it comes to handling user data from customers’ devices.

The company has shared video clips from people’s Ring security cameras with law enforcement without the owners’ consent or a court order. Ring also reached a settlement in 2023 with the Federal Trade Commission after allegations that employees and contractors had broad, unrestricted access to customers’ video recordings.

