
Why I’m a techno-optimist

Reclaiming privacy in a world that wants us to give up.

Published 15 January 2025
– By Naomi Brockwell

It feels like every device in our lives is spying on us. Vacuum cleaners send photos and audio from our bedrooms to China. Televisions take screenshots of what we’re watching every few seconds and share that data with third parties. Social media algorithms analyze our every click and scroll. And governments leverage these tools to watch us more closely than ever before.

It’s easy to feel pessimistic—even hopeless—about the future of privacy in a world so intertwined with technology. If you only watch the first half of our videos, you might think we hate tech.

“Tech is spying on us”. “Tech is tracking our location”. “Tech is allowing governments and corporations to overreach into our lives”.

But actually, I’m a techno-optimist.

If you watch the second half of our videos, you’ll hear us say things like, “This is the tech that will protect us”. “Here’s the tech that empowers us”. “Here’s how to use technology to reclaim our digital freedoms”.

I recently put out a video exploring techno-optimism, and I was shocked by the responses. So many people were quick to throw in the towel. Comments like: “I don’t share your optimism—privacy is dead”. “Don’t even try, it’s pointless”. Another privacy advocate who makes video content, The Hated One, noticed this trend on his videos too. There’s been an uptick in people telling others to give up on privacy altogether.

Honestly, it feels like a psyop. Who benefits from us giving up? The answer is obvious: only the people surveilling us. Maybe the psyop has been so effective it’s taken on a life of its own. Many people are now willingly complicit, fueling the narrative and spreading defeatism. This attitude is toxic, and it has to stop. If you’ve already given up, we don’t stand a chance. The privacy battle is ultimately about human rights and freedom. Giving up isn’t an option.

But more importantly, the idea that privacy is hopeless couldn’t be further from the truth. We have every reason to feel energized and excited. For the first time, we have both the technology and the cultural momentum to reclaim our privacy. The solution to surveillance isn’t throwing out our devices—it’s embracing the incredible privacy tech already available. The tools we need are here. We need to use them, build more, and spread the word. We need to lean into this fight.

I’m a techno-optimist because I believe we have the power to create a better future. In this newsletter, I’ll show you privacy tools you can already start using today, and highlight groundbreaking advancements in our near future.

Tech is neutral—it’s how we use it that matters

Many people have been tricked into thinking that tech itself is the problem. I see it in the comments on our videos. Whenever we share privacy solutions, someone always says, “If you want privacy, you have to throw out your digital devices”.

But that’s not true. You don’t have to throw out your devices to reclaim your privacy. The idea that technology and privacy can’t coexist benefits the very corporations and governments surveilling us. It keeps us from even trying to protect ourselves.

The truth is, technology is neutral. It can be used for surveillance, but it can also be used for privacy. For decades, it’s been hijacked primarily for surveillance. But now we have cutting-edge tools to fight back. We have encryption technology that empowers us to reclaim our digital freedoms.

How privacy tech is empowering people worldwide

Privacy tech is already changing lives all over the world. Here are a few powerful examples:

  • Iran: During widespread protests against oppressive laws, the government implemented internet shutdowns and banned platforms like Signal and VPNs. Signal stepped up, providing instructions for setting up proxy servers. This allowed protesters to coordinate activities and share uncensored information despite the repression. These tools let people reclaim their freedom directly, without needing to ask anyone's permission first. Knowing that the ability to stay connected with the outside world remains in our hands is incredibly empowering.
  • Mexico: Journalists face extreme danger from both the government and cartels. There’s an entire Wikipedia page dedicated to journalists who have been killed in Mexico for exposing corruption and violence. Privacy tools like encrypted messaging and private data storage help protect those doing important work—like investigative journalism—and their sources from harm.
  • China: The “Great Firewall” blocks platforms like Google, Instagram, and Twitter. Citizens rely on tools like VPNs, Tor, and encrypted apps to bypass censorship and stay informed. Privacy tech has become a vital form of resistance and hope for millions.

All over the world, people are using privacy tech to reclaim freedom and resist oppression.

Privacy tools you can start using today

Here are some tools you can incorporate into your life:

  • Messaging: Use end-to-end encrypted apps to ensure only you and the recipient can read your messages.
  • Browsers: Privacy-focused browsers block tracking pixels, scripts, and bounce tracking to protect you online.
  • Search Engines: Switch to alternatives that don’t log or track your searches.
  • Email: Try encrypted email services to keep your communications private.
  • Calendars: Use privacy-respecting calendars that offer end-to-end encryption.
  • Media: Explore apps that let you consume content without being tracked, or decentralized platforms that avoid gatekeeping.
  • VPNs and Tor: Hide your IP address and anonymize your activities with these essential tools (a small code sketch follows this list).
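
For readers who like to see what this looks like in practice, here is a minimal Python sketch of sending a web request through a locally running Tor client. It assumes Tor is already installed and listening on its default SOCKS port (9050) and that the requests library is installed with SOCKS support; the URL is a public Tor Project endpoint that reports whether the request arrived over Tor. This is an illustration, not a complete anonymity setup.

```python
# Minimal sketch: route an HTTP request through a local Tor client.
# Assumes Tor is running on its default SOCKS port 9050.
# Install the SOCKS extra first:  pip install requests[socks]
import requests

# "socks5h" (note the h) makes DNS lookups go through Tor as well,
# so the sites you visit never see your real resolver or IP address.
TOR_PROXY = "socks5h://127.0.0.1:9050"

session = requests.Session()
session.proxies = {"http": TOR_PROXY, "https": TOR_PROXY}

# This Tor Project endpoint reports whether you arrived via Tor.
response = session.get("https://check.torproject.org/api/ip", timeout=30)
print(response.json())  # e.g. {"IsTor": true, "IP": "<address of the Tor exit node>"}
```

The site on the other end only ever sees the Tor exit node's address, not yours.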

We give examples of each in our latest video and have dedicated guides exploring each topic so you can decide which option is best for you.

The future of privacy tech

The future of privacy tech is even more exciting. Here’s what’s on the horizon:

  • Homomorphic Encryption: This allows data to be processed without ever being exposed. It could transform fields like healthcare and finance by enabling services to generate insights without accessing private data.
  • Decentralized Identity: These systems let individuals store and manage their credentials without relying on centralized databases, reducing risks of hacking and misuse. They also give users more granular control over what information they share.
  • Zero-Knowledge Proofs: These cryptographic methods let you prove something is true—like your age or identity—without sharing the underlying data (a toy example follows this list).
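
To make the zero-knowledge idea concrete, here is a toy Schnorr-style proof of knowledge in Python, made non-interactive with the Fiat–Shamir heuristic. The prover convinces a verifier that they know a secret value without ever transmitting it. The parameters are deliberately tiny and purely illustrative; real systems use vetted cryptographic libraries and much larger groups.

```python
# Toy sketch of a non-interactive Schnorr proof of knowledge (Fiat-Shamir).
# Illustrative only: the parameters below are far too small for real security.
import hashlib
import secrets

p = 2**61 - 1   # a small Mersenne prime modulus (toy size)
g = 3           # generator used for the public commitment

def challenge(*values) -> int:
    """Fiat-Shamir: derive the challenge by hashing the public transcript."""
    data = "|".join(str(v) for v in values).encode()
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % (p - 1)

def prove(secret_x: int):
    """Prove knowledge of x such that y = g^x mod p, without revealing x."""
    y = pow(g, secret_x, p)             # public value tied to the secret
    r = secrets.randbelow(p - 1)        # one-time random nonce
    t = pow(g, r, p)                    # commitment
    c = challenge(g, y, t)              # challenge derived from public data
    s = (r + c * secret_x) % (p - 1)    # response; reveals nothing about x alone
    return y, (t, s)

def verify(y: int, proof) -> bool:
    """Check g^s == t * y^c (mod p) using only public values."""
    t, s = proof
    c = challenge(g, y, t)
    return pow(g, s, p) == (t * pow(y, c, p)) % p

if __name__ == "__main__":
    secret = secrets.randbelow(p - 1)       # e.g. a credential only the user holds
    public_y, proof = prove(secret)
    print("proof verifies:", verify(public_y, proof))  # True, yet the secret never left the prover
```
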

The rise of privacy culture

It’s not just technology that’s advancing—our culture around privacy is shifting. For years, surveillance was seen as inevitable. But high-profile breaches, government overreach, and whistleblowers have opened the public’s eyes. People are voting with their wallets, choosing privacy-respecting services, and demanding accountability.

We’ve seen this firsthand. For example, our video series about car privacy has been seen by millions of people who are now waking up to the invasive reality of modern vehicles. Imagine if these millions started asking car dealerships tough questions about privacy policies before making a purchase. That’s how we shift the needle.

The future is bright, and in our hands

So yes, I’m a techno-optimist.

We’re far from powerless. For the first time, we have both the technology and the cultural momentum to take back our privacy. But we’ll only succeed if we stop demonizing technology and start harnessing the privacy tech at our disposal to break free from surveillance.

At the end of the day, technology is just a tool. It’s up to us to decide how to use it. Let’s choose a future where privacy thrives because of innovation—not in spite of it.

Thanks to the most incredible year we’ve seen at NBTV, more people than ever are joining the fight for privacy, and we’re all shifting culture. Next year is going to be even better.

Here’s to an incredible 2025. Let’s make it count!

 

Yours in privacy,
Naomi

Naomi Brockwell is a privacy advocate and professional speaker, MC, interviewer, producer and podcaster, specializing in blockchain, cryptocurrency and economics. She runs the NBTV channel on YouTube.


US and UK back away from international AI declaration

The future of AI

Published 15 February 2025
– By Editorial Staff
US Vice President JD Vance stresses that “pro-growth AI policies” should take priority over security.

Sweden and 60 other countries have signed an AI declaration for inclusive, sustainable and open AI. However, the United States and the United Kingdom have chosen to opt out, a decision that has provoked strong reactions.

The AI Declaration was developed in conjunction with the International AI Summit in Paris earlier this week, and its aim is to promote inclusive and sustainable AI in line with the Paris Agreement. It also emphasizes the importance of an “ethical” approach where technology should be “transparent”, “safe” and “trustworthy”.

The declaration also notes AI’s energy use, something not previously discussed. Experts have previously warned that in the future AI could consume as much energy as smaller countries.

Countries such as China, India and Mexico have signed the agreement. Finland, Denmark, Sweden and Norway have also signed. The United States and the United Kingdom are two of the countries that have chosen not to sign the agreement, reports the British state broadcaster BBC.

“Global governance”

The UK government justifies its decision with concerns about national security and “global governance”. US Vice President JD Vance has also previously said that too much regulation of AI could “kill a transformative industry just as it’s taking off”. At the meeting, Vance stressed that AI was “an opportunity that the Trump administration will not squander” and said that “pro-growth AI policies” should be prioritized over security.

French President Emmanuel Macron, for his part, defended the need for further regulation.

AI and speech therapy to help police identify voices

Published 14 February 2025
– By Editorial Staff

Researchers at Lund University are developing a method for forensic speech comparison that combines speech therapy, AI, mathematics and machine learning. The method will help police analyze audio recordings in criminal investigations.

Like fingerprints and DNA, the voice carries unique characteristics that can be linked to individuals. Speech and voice are influenced by several factors, such as the size of the vocal cords, the shape of the oral cavity, language use and breathing. While most people can perceive the gender, age or mood of a speaker, it takes specialist knowledge to objectively analyze the unique patterns of the voice, an area in which speech therapists are experts.

The police turned to Lund University for help analyzing audio recordings in an investigation. The request led to the development of forensic speech comparison as a method of evidence gathering.

The police often handle audio recordings where the speaker is known, but also recordings where the purpose is to confirm or exclude a suspect.

– What we do at the moment is to have three assessors, speech therapists, analyze the speech, voice and language in the recordings in order to compare them. We listen for several factors, such as how the person in question produces their voice, articulates, seems to move their tongue and lips, says Susanna Whitling, a speech therapist and researcher at Lund University, in a press release.

Both larger datasets and cutting-edge analysis

The number of requests from the police has increased, making it difficult for analysts to keep up with all the recordings. To handle larger data sets, researchers have developed AI-based methods that can identify relevant audio files, which are then analyzed by experts.

– By combining traditional speech therapy perceptual assessment of speech, voice and language with machine learning, we want to make it possible to both scan large amounts of data and offer cutting-edge analysis. Based on the hits that the AI then extracts, experts can make a professional assessment, explains Whitling.

The researchers are also collaborating with Andreas Jakobsson, a professor of mathematical statistics, to develop specialized software. The vision is an accurate and reliable method for speech comparison.

– We speech therapists can do perceptual assessment and examine the probability that two recordings contain the same person’s speech, voice and language. To add specialized software for acoustic analysis, such as voice frequency, intensity and temporal variation, we are collaborating with experts in signal processing and machine learning.
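
The researchers’ own software is not public, but as a rough illustration of the kind of acoustic measurements Whitling describes (voice frequency, intensity and temporal variation), here is a short Python sketch using the open-source librosa library. The file name is a placeholder, and the summary statistics are only examples of features an analyst might compare across recordings, not the Lund method itself.

```python
# Illustrative acoustic feature extraction with librosa (not the Lund researchers' tool).
import librosa
import numpy as np

# Load a recording as mono audio at its native sample rate ("interview_clip.wav" is a placeholder).
y, sr = librosa.load("interview_clip.wav", sr=None, mono=True)

# Fundamental frequency (voice pitch) per frame, estimated with the pYIN algorithm.
f0, voiced_flag, voiced_prob = librosa.pyin(
    y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C6"), sr=sr
)

# Short-term intensity: root-mean-square energy per frame.
rms = librosa.feature.rms(y=y)[0]

# Simple summary statistics that could be compared between two recordings.
print("median F0 (Hz):      ", np.nanmedian(f0))       # typical pitch
print("F0 variability (Hz): ", np.nanstd(f0))          # temporal variation in pitch
print("mean RMS energy:     ", rms.mean())              # overall intensity
print("voiced proportion:   ", np.mean(voiced_flag))    # share of frames with voicing
```
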

World leaders gather in Paris for AI summit

Published 11 February 2025
– By Editorial Staff
Ulf Kristersson and around 100 other heads of state and government are currently at the AI Summit in Paris.

The AI Action Summit is currently taking place in Paris, where world leaders are gathering to discuss global governance of artificial intelligence.

The stated aim of the summit, chaired by French President Emmanuel Macron and Indian Prime Minister Narendra Modi, is to create a common path forward for the development of AI and lay the foundations for global AI governance.

During the summit, five main areas will be discussed:

• AI in the public interest
• Future impact of AI on the labor market
• Promoting innovation and culture
• Trust in AI
• Global AI governance

The UN Secretary-General, António Guterres, is attending the meeting along with leaders from nearly 100 countries, representatives from international organizations, researchers and civil society representatives.

He sees the establishment of global AI governance as a top priority and believes the technology could pose an “existential concern” for humanity if not regulated in a “responsible manner”.

“AI must remain a tool at the service of humanity, and not a source of inequalities and unbridled risks”, writes the UN Information Center UNRIC in a press release.

“The discussions should therefore shape an AI that is sustainable, beneficial and inclusive, with particular attention paid to the risks of abuse and the protection of individual rights”, it continues.

EU must invest in AI

Swedish Prime Minister Ulf Kristersson is leading the Swedish delegation, and is also attending a meeting of EU leaders to discuss European competitiveness and the future role of AI in it.

The summit organizers state that Europe “can and must significantly strengthen its positioning on AI and accelerate investments in this field, so that we can be at the forefront on the matter”.

Regarding the regulation and control of AI, the view is that “one single governance initiative is not the answer”. Instead, “existing initiatives, like the Global Partnership on Artificial Intelligence (GPAI), need to be coordinated to build a global, multi-stakeholder consensus around an inclusive and effective governance system for AI”, it says.

Google abandons promise not to use AI for weapons

Published 8 February 2025
– By Editorial Staff
The tech giant claims that in its AI development it implements social responsibility and generally accepted principles of international law and human rights.

Google has removed the part of its AI policy that previously prohibited the development and deployment of AI for weapons or surveillance.

When Google first published its AI policy in 2018, it included a section called “applications we won’t pursue”, in which the company pledged not to develop or deploy AI for weapons or surveillance.

Now it has removed that section and replaced it with another, Bloomberg reports. Records indicate that the previous text was still there as recently as last week.

Instead, the section has been replaced by “Responsible development and deployment”, where Google states that the company will implement “appropriate human oversight, due diligence, and feedback mechanisms to align with user goals, social responsibility, and widely accepted principles of international law and human rights”.

In connection with the changes, Google refers to a blog post in which the company writes that the policy change is necessary, as AI is now used for more general purposes.

Thousands of employees protested

In 2018, Google signed a controversial government contract called Project Maven, which effectively meant that the company would provide AI software to the Department of Defense to analyze drone images. Thousands of Google employees signed a protest against the contract and dozens chose to leave.

It was in the context of that contract that Google published its AI guidelines, in which it promised not to use AI as a weapon. The tech giant’s CEO, Sundar Pichai, reportedly told staff that he hoped the guidelines would stand the “test of time”.

In 2021, the company signed a new military contract to provide cloud services to the US military. In the same year, it also signed a contract with the Israeli military, called Project Nimbus, which also provides cloud services for the country. In January this year, it also emerged that Google employees were working with Israel’s Ministry of Defense to expand the government’s use of AI tools, as reported by The Washington Post.