
Polaris of Enlightenment

8 Ways to Fight for Privacy Today

Mass surveillance

How you can make an impact.

Published 19 December 2024
– By Naomi Brockwell
We do not yield to the mass surveillance machine.
5 minute read

Last week, we published the Priv/Acc Manifesto, and I was deeply moved by the outpouring of responses from people eager to take action. So many of you reached out, asking, “How can I help?”

The threat to privacy is obvious—relentless surveillance from governments, corporations, and bad actors alike. Countless bills trying to ban end-to-end encryption and mandate back doors. A sea of complacency from people who have been tricked into thinking privacy is about hiding, instead of about consent.

But the path to meaningful action isn’t always clear.

The good news is that everyone, regardless of background, has a role to play in safeguarding privacy. Whether you’re a coder building better tools, an educator raising awareness, an advocate pushing for change, or simply someone who values personal freedom, there are practical steps you can take to make a difference.

In this newsletter, I want to share some of the ways you can contribute to this critical fight.

#1 Lead by Example

The easiest way to contribute is by making deliberate choices about the products and services you use. Switching to privacy-respecting tools not only protects your data, but also sends a powerful market signal that privacy matters. When you choose privacy-focused companies, you help them thrive, fostering the development of even better tools. On the flip side, continuing to use platforms that harvest our data undermines privacy-focused alternatives, pushing them out of the market.

Here are a handful of my favorite tools, but our channel features hundreds of videos showcasing great alternatives you can explore:

  • Messaging: Signal
  • Web Browsing: Brave Browser
  • VPNs: Mullvad VPN
  • Email: ProtonMail and Tutanota
  • Productivity: CryptPad and LibreOffice

#2 Push Back Against Cultural Norms

The phrase “I have nothing to hide” has become a lazy justification for dismissing privacy. It’s time to reframe the conversation. Privacy isn’t about secrecy – it’s about consent. It’s about having the right to choose who gets access to our data and rejecting the idea that valuing privacy is something to be ashamed of.

Privacy protects whistleblowers, activists, and everyday individuals from surveillance and coercion. When someone parrots “nothing to hide,” remind them that privacy safeguards freedom, creativity, and autonomy. Changing this mindset is essential to making privacy a societal priority.

#3 User Manuals and Educational Awareness

You don’t need to be technical to make a huge impact. Writing clear, accessible guides for privacy tools is one of the most valuable ways to help. Blogs with beginner-friendly tutorials or personal experiences using privacy tools contribute to a growing reservoir of educational material for the community. Translating tutorials into other languages can expand their reach even further.

Even super simple tutorials—like explaining that Gmail can read your emails—can be eye-opening for many people. Education is a powerful way to build awareness, and your efforts might help someone take their first step toward reclaiming their privacy.

#4 Contribute to Open-Source Projects

For those with technical expertise, contributing to open-source privacy projects is one of the most effective ways to support the cause. Free and Open Source Software (FOSS) tools like Tor, GrapheneOS, and VeraCrypt are essential for people worldwide, but these projects are often critically underfunded and under-resourced.

Developers can help by building features or fixing bugs, while researchers can perform security audits to identify vulnerabilities. Remember the Heartbleed vulnerability? It was a major flaw in OpenSSL, a library underpinning much of internet encryption, that went undetected for years—illustrating the need for more eyes on open-source projects. Even small contributions, like reviewing code, can make a huge difference.

#5 Test Privacy Tools and Provide Feedback

For privacy tools to succeed, they need to be user-friendly and accessible to everyone—not just tech enthusiasts. By testing privacy platforms and sharing constructive feedback, you can help developers improve default settings and refine the overall user experience (UX). These small adjustments can make tools more intuitive, significantly boosting adoption among non-technical users.

Even if you’re not a coder, your contributions—like testing tools, reporting bugs, or improving documentation—are invaluable to open-source projects. Developers rely on user input to ensure their tools work for everyone, making your efforts critical to advancing privacy.

#6 Financial Support

Financial support is vital for building a robust ecosystem of privacy tools. Many open-source projects rely on donations to survive, and businesses building privacy tools need customers to remain sustainable. FOSS ensures that privacy tools are accessible to everyone, but if you can afford to donate or pay for premium versions, your support keeps these tools available for those who need them most.

#7 Drive Change From Within

If you work for a tech company, advocate for privacy-by-design principles—embedding privacy into products from the ground up. Push for policies like data minimization and transparency, or encourage your organization to invest in privacy research. Cutting-edge technologies like zero-knowledge proofs and homomorphic encryption are redefining what’s possible in privacy-preserving data analysis. Supporting innovation in these areas can have a profound impact on the future of privacy.

#8 Engage in Policy Advocacy

Governments frequently pass laws regulating technology without fully understanding their implications. Your voice can make a difference by shaping these policies to prevent harmful consequences. Push back against attempts to ban privacy tools or mandate backdoors, ensuring that the most vulnerable in society always have a way to protect themselves.

Supporting organizations like the EFF or other advocacy groups is another great way to get involved. These groups lobby for digital rights, educate the public, and fight back against policies that fuel the surveillance state. Together, we can help ensure that privacy remains a fundamental right.

The Power of Community

Privacy advocacy is about more than safeguarding our own information—it’s about defending the fundamental rights that underpin a free and just society. It ensures that those on the front lines—whistleblowers, activists, journalists, and others fighting for change—are equipped with the protection they need to carry out their vital work.

Every time you choose a privacy-respecting tool, educate someone about the importance of privacy, or contribute to an open-source project, you’re strengthening the movement.

Privacy isn’t about having something to hide—it’s about having the freedom to live, think, and act without fear of surveillance. It’s the foundation of creativity, dissent, and progress. Together, we can protect this essential right and ensure a future where privacy empowers us all.

Thanks for being part of this movement, everyone. This week I’m truly thankful and grateful to every one of you.

 

Yours in privacy,
Naomi

Naomi Brockwell is a privacy advocate and professional speaker, MC, interviewer, producer, and podcaster, specialising in blockchain, cryptocurrency and economics. She runs the NBTV channel on Rumble.

TNT is truly independent!

We don’t have a billionaire owner, and our unique reader-funded model keeps us free from political or corporate influence. This means we can fearlessly report the facts and shine a light on the misdeeds of those in power.

Consider a donation to keep our independent journalism running…

Telenor faces lawsuit over human rights abuses in Myanmar

Mass surveillance

Published yesterday 11:00
– By Editorial Staff
Telenor's communications director calls the demand a "PR stunt" and argues that the matter has already been handled by police and the judicial system.
3 minute read

Over a thousand people may have been persecuted, tortured, arrested or killed when Norwegian telecommunications company Telenor handed over sensitive customer data to the military junta in Myanmar. Now victims and relatives are threatening to sue and demanding millions in damages.

On Monday, Telenor’s management received a notice of lawsuit where the compensation claim is motivated by the telecom company illegally sharing sensitive personal data with Myanmar’s military junta.

“We ask for a response on whether the basis for the claim is disputed as soon as possible, but no later than within two weeks”, the letter stated.

Behind the claim stands the Dutch organization Centre for Research on Multinational Corporations (Somo) together with several Myanmar civil society organizations.

After the military coup in February 2021, the junta forced telecom operators like Telenor to hand over sensitive information about their customers. The information was then used to identify, track and arrest regime critics and activists.

Politician executed

Among those affected is a prominent politician and Telenor customer, and after the company handed over the data, the man was arrested, sentenced to death and executed in prison.

— We know that the potential group of victims is more than 1,000 people, Joseph Wilde-Ramsing, director and lead negotiator at Somo, tells Norwegian business newspaper Dagens Næringsliv.

He emphasizes that some of the victims have been executed, while others have been arrested.

— We are in contact with their family members and demand financial compensation from Telenor for what they have been subjected to.

Claim worth millions

Lawyer Jan Magne Langseth, partner at Norwegian law firm Simonsen Vogt Wiig, represents Somo in the case. He states that the claim will be extensive.

— We have not yet set an exact figure, but there is little doubt it will amount to several hundred million kroner, he says.

Both individuals and organizations working for the democracy movement in Myanmar are demanding compensation.

— We have the number lists that were handed over to the junta, but we don’t have all the names of the subscribers yet, says Langseth.

The notice establishes that Telenor systematically handed over personal data to the military junta, well aware that this would lead to human rights violations – including persecution, arbitrary arrests and elimination of opponents.

“This can be documented with extensive evidence”, the document states.

Telenor: “No good choices”

Telenor’s communications director David Fidjeland dismisses the matter and claims that the issue has already been resolved.

“The tragic developments in Myanmar have been the subject of several investigations within the police and judiciary without leading anywhere. Telenor Myanmar found itself in a terrible and tragic situation and unfortunately had no good choices”, he writes in an email and continues:

“That journalists from Bangkok and Kuala Lumpur to Marienlyst [Telenor’s headquarters in Norway] received this notice long before we ourselves received it unfortunately says something about where Somo has its focus. This unfortunately seems more like a PR stunt in a tragic matter than a serious communication”.

Sold operations in 2022

Telenor received a mobile license in Myanmar in 2014. In a short time, the company became a major mobile operator with over 18 million customers in the country. After the military coup in February 2021, when the previous government was overthrown, Telenor chose to sell its mobile operations in Myanmar to Lebanese M1 Group – including customer data. The sale was completed in March 2022.

According to local media, M1 Group’s local partner has close ties to the military junta.

Lawyer Langseth addresses the question of whether a refusal to hand over data would have affected local employees.

— The employees at Telenor Myanmar did not need to be involved. It could have been controlled from Norway or other countries in the group. Witnesses have told us that there was internal resistance among several of the key local employees at Telenor Myanmar against handing over data to the junta, he says.

Microsoft stops Israel’s use of technology for mass surveillance of Palestinians

The genocide in Gaza

Published 27 September 2025
– By Editorial Staff
Microsoft's research and development division in Matam Business Park in Haifa, Israel.
5 minute read

The tech giant has shut down the Israeli military’s access to cloud services and AI tools following revelations about a secret spy project that collected millions of phone calls from Palestinian civilians.

Microsoft has shut down the Israeli military’s access to technology that was used to power an extensive surveillance system that collected millions of Palestinian civilian phone calls daily from Gaza and the West Bank, The Guardian can reveal.

Microsoft informed Israeli officials last week that Unit 8200, the military’s elite intelligence agency, had violated the company’s terms of service by storing the enormous amount of surveillance data on its Azure cloud platform, according to sources with insight into the situation.

The decision to cut off Unit 8200’s ability to use parts of the technology is a direct result of an investigation that The Guardian published last month. It revealed how Azure was used to store and process the enormous amount of Palestinian communications in a mass surveillance program.

Secret project after summit meeting

In a joint investigation with the Israeli-Palestinian publication +972 Magazine and the Hebrew-language newspaper Local Call, The Guardian revealed how Microsoft and Unit 8200 had worked together on a plan to move large volumes of sensitive intelligence material to Azure.

The project began after a 2021 meeting between Microsoft CEO Satya Nadella and the unit’s then-commander Yossi Sariel.

In response to the investigation, Microsoft ordered an urgent external review to examine its relationship with Unit 8200. The initial results have now led to the company cutting off the unit’s access to certain of its cloud storage and AI services.

Equipped with Azure’s virtually unlimited storage capacity and computing power, Unit 8200 had built an indiscriminate new system that allowed its intelligence officers to collect, replay, and analyze the content of mobile calls from an entire population.

The project was so extensive that, according to sources from Unit 8200 – which is equivalent to the US National Security Agency – an internal motto emerged that captured its scope and ambition: “One million calls per hour.”

According to several sources, the enormous archive of intercepted calls – amounting to as much as 8,000 terabytes of data – was held in a Microsoft data center in the Netherlands. Within days of The Guardian publishing the investigation, Unit 8200 appears to have quickly moved surveillance data out of the country.

Data moved to Amazon

According to sources with knowledge of the enormous data transfer out of the EU country, it occurred in early August. Intelligence sources said that Unit 8200 planned to transfer the data to Amazon Web Services’ cloud platform. Neither the Israel Defense Forces (IDF) nor Amazon responded to a request for comment.

Microsoft’s extraordinary decision to terminate the spy agency’s access to key technology was taken amid pressure from employees and investors over its work for the Israeli military and the role its technology has played in the nearly two-year-long offensive in Gaza.

A UN commission of inquiry recently concluded that Israel had committed genocide in Gaza, an allegation denied by Israel but supported by many experts in international law.

The Guardian’s joint investigation led to protests at Microsoft’s US headquarters and one of its European data centers, as well as demands from a worker-led campaign group, No Azure for Apartheid, to end all ties to the Israeli military.

Clear message from Microsoft

On Thursday, Microsoft Vice Chairman and President Brad Smith informed staff about the decision. In an email that The Guardian has seen, he said the company had “terminated and deactivated a set of services to a unit within Israel’s Ministry of Defense,” including cloud storage and AI services.

Smith wrote: “We do not provide technology to facilitate mass surveillance of civilians. We have applied this principle in all countries around the world, and we have insisted on it repeatedly for more than two decades.”

The decision brings an abrupt end to a three-year period during which the spy agency operated its surveillance program using Microsoft’s technology.

Unit 8200 used its own extensive surveillance capabilities to intercept and collect the calls. The spy agency then used a customized and segregated area within the Azure platform, enabling data to be retained for longer periods and analyzed with AI-driven techniques.

Used for bombing targets in Gaza

Although the initial focus of the surveillance system was the West Bank, where an estimated 3 million Palestinians live under Israeli military occupation, intelligence sources said the cloud-based storage platform had been used in the Gaza offensive to facilitate the preparation of deadly airstrikes.

The revelations highlighted how Israel has relied on services and infrastructure from major US tech companies to support its bombardment of Gaza, which has killed more than 65,000 Palestinians, mostly civilians, and created a deep humanitarian crisis and famine catastrophe.

According to a document seen by The Guardian, a senior Microsoft executive told Israel’s Ministry of Defense last week:

“While our review is ongoing, we have at this point identified evidence supporting parts of The Guardian’s reporting.”

The executive told Israeli officials that Microsoft “is not in the business of facilitating mass surveillance of civilians” and informed them that it would “deactivate” access to services supporting Unit 8200’s surveillance project and shut down its use of certain AI products.

First time since the war began

The termination is the first known case of a US tech company withdrawing services provided to the Israeli military since the beginning of its war in Gaza.

The decision has not affected Microsoft’s broader commercial relationship with the IDF, which is a long-standing client and will retain access to other services. The termination will raise questions within Israel about the policy of keeping sensitive military data in a third-party cloud operated abroad.

Last month’s revelations about Unit 8200’s use of Microsoft technology followed an earlier investigation by The Guardian and its partners about the broader relationship between the company and the Israeli military.

That story, published in January and based on leaked files, showed how the IDF’s reliance on Azure and its AI systems increased dramatically in the most intensive phase of its Gaza campaign.

Following that report, Microsoft launched its first review of how the IDF uses its services. It said in May that it had “found no evidence to date” that the military had failed to comply with its terms of service, or used Azure and its AI technology “to target or harm people” in Gaza.

But The Guardian’s investigation with +972 and Local Call published in August, which revealed that the cloud-based surveillance project had been used to investigate and identify bombing targets in Gaza, led to the company reassessing its conclusions.

The revelations caused alarm among senior Microsoft executives and raised concerns that some of its Israel-based employees may not have been fully transparent about their knowledge of how Unit 8200 used Azure when questioned as part of the review.

The company said its executives, including Nadella, were not aware that Unit 8200 planned to use, or ultimately used, Azure to store the content of intercepted Palestinian calls.

Microsoft then launched its second and more targeted review, which was overseen by lawyers at the US firm Covington & Burling. In his note to staff, Smith said the investigation did not have access to any customer data but its findings were based on a review of internal Microsoft documents, emails and messages between personnel.

“I want to note our appreciation for The Guardian’s reporting”, Smith wrote, noting that it had illuminated “information we could not access given our customer confidentiality commitments.” He added: “Our review is ongoing.”

OpenAI monitors ChatGPT chats – can report users to police

Mass surveillance

Published 20 September 2025
– By Editorial Staff
What has been perceived as private AI conversations can now end up with police.
2 minute read

OpenAI has quietly begun monitoring users’ ChatGPT conversations and can report content to law enforcement authorities.

The revelation comes after incidents where AI chatbots have been linked to self-harm behavior, delusions, hospitalizations and suicide – what experts call “AI psychosis”.

In a blog post, the company acknowledges that they systematically scan users’ messages. When the system detects users planning to harm others, the conversations are directed to a review team that can suspend accounts and contact police.

“If human reviewers determine that a case involves an imminent threat of serious physical harm to others, we may refer it to law enforcement”, writes OpenAI.

The new policy means in practice that millions of users have their conversations scanned and that what many perceived as private conversations with an AI are now subject to systematic surveillance where content can be forwarded to authorities.

Tech journalist Noor Al-Sibai at Futurism points out that OpenAI’s statement is “short and vague” and that the company does not specify exactly what types of conversations could lead to police reports.

“It remains unclear which exact types of chats could result in user conversations being flagged for human review, much less getting referred to police”, she writes.

Security problems ignored

Ironically, ChatGPT has proven vulnerable to “jailbreaks” where users have been able to trick the system into giving instructions for building neurotoxins or step-by-step guides for suicide. Instead of addressing these fundamental security flaws, OpenAI is now choosing extensive surveillance of users.

The surveillance stands in sharp contrast to the tech company’s actions in the lawsuit against the New York Times, where the company “steadfastly rejected” demands to hand over ChatGPT logs citing user privacy.

“It’s also kind of bizarre that OpenAI even mentions privacy, given that it admitted in the same post that it’s monitoring user chats and potentially sharing them with the fuzz”, Al-Sibai notes.

May be forced to hand over chats

OpenAI CEO Sam Altman has recently acknowledged that ChatGPT does not offer the same confidentiality as conversations with real therapists or lawyers, and due to the lawsuit, the company may be forced to hand over user chats to various courts.

“OpenAI is stuck between a rock and a hard place”, writes Al-Sibai. The company is trying to handle the PR disaster from users who have suffered mental health crises, but since it is “clearly having trouble controlling its own tech”, it falls back on “heavy-handed moderation that flies in the face of its own CEO’s promises”.

The tech company announces that they are “currently not” reporting self-harm cases to police, but the wording suggests that even this could change. The company has also not responded to requests to clarify what criteria are used for surveillance.

Wifi signals can identify people with 95 percent accuracy

Mass surveillance

Published 21 August 2025
– By Editorial Staff
2 minute read

Italian researchers have developed a technique that can track and identify individuals by analyzing how wifi signals reflect off human bodies. The method works even when people change clothes and can be used for surveillance.

Researchers at La Sapienza University in Rome have developed a new method for identifying and tracking people using wifi signals. The technique, which the researchers call “WhoFi”, can recognize people with an accuracy rate of up to 95 percent, reports Sweclockers.

The method is based on the fact that wifi signals reflect and refract in different ways when they hit human bodies. By analyzing these reflection patterns using machine learning and artificial neural networks, researchers can create unique “fingerprints” for each individual.
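To illustrate the core idea, here is a minimal sketch of fingerprint matching. It is not the researchers' actual method—WhoFi reportedly uses deep neural networks trained on wifi channel measurements—and the signal profiles below are invented for illustration. The sketch only shows the matching step: compare a newly observed reflection pattern against enrolled fingerprints and pick the closest one.

```python
import math

def cosine_similarity(a, b):
    # Measures how closely two signal-amplitude profiles align (1.0 = identical direction)
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def identify(sample, enrolled):
    """Return the enrolled identity whose stored fingerprint best matches the observed sample."""
    return max(enrolled, key=lambda name: cosine_similarity(sample, enrolled[name]))

# Hypothetical amplitude profiles of wifi signals reflected off two people.
enrolled = {
    "person_a": [0.9, 0.1, 0.4, 0.8],
    "person_b": [0.2, 0.7, 0.9, 0.1],
}

# A new, slightly perturbed observation (e.g. the same person in different clothes).
observation = [0.85, 0.15, 0.45, 0.75]
print(identify(observation, enrolled))  # person_a
```

Because the match is based on the overall shape of the reflection pattern rather than exact values, small perturbations—such as a change of clothing—still yield the correct identity, which is the property the researchers report.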

Works despite clothing changes

Experiments show that these digital fingerprints are stable enough to identify people even when they change clothes or carry backpacks. The average recognition rate is 88 percent, which researchers say is comparable to other automatic identification methods.

The research results were published in mid-July and describe how the technology could be used in surveillance contexts. According to the researchers, WhoFi can solve the problem of re-identifying people who were first observed via a surveillance camera in one location and then need to be found in footage from cameras in other locations.

Can be used for surveillance

The technology opens up new possibilities in security surveillance, but simultaneously raises questions about privacy and personal security. The fact that wifi networks, which are ubiquitous in today’s society, can be used to track people without their knowledge represents a new dimension of digital surveillance.

The researchers present their discovery as a breakthrough in the field of automatic person identification, but do not address the ethical implications that the technology may have for individuals’ privacy.

Our independent journalism needs your support!
Consider a donation. You can donate any amount of your choosing, one-time payment or even monthly.
We appreciate all of your donations to keep us alive and running.

Don’t miss another article!

Sign up for our newsletter today!

Get uncensored news – free from industry interests and political correctness – from the Polaris of Enlightenment, every week.