The internet is a manipulation machine

Be careful you're not playing an avatar in someone else’s propaganda war.

Published today 8:08
– By Naomi Brockwell
8 minute read

We’re more polarized than ever. Conversations have turned into shouting matches. Opposing ideas feel like threats, not something to debate.

But here’s something many people never connect: privacy and surveillance have everything to do with it.

Why surveillance is the key to polarization

Surveillance is the engine that makes platform-driven polarization work.

Platforms have one overriding goal: to keep us online as long as possible. And they’ve learned that nothing hooks us like outrage. If they can rile us up, we’ll stay, scroll, and click.

Outrage drives engagement. Engagement drives profit. But when outrage becomes the currency of the system, polarization is the natural byproduct. The more the platforms know about us, the easier it is to feed us the content that will push our buttons, confirm our biases, and keep us in a cycle of anger. And that anger doesn’t just keep us scrolling, it also pushes us further apart.

These platforms are not neutral spaces; they are giant marketplaces where influence is bought and sold. Every scroll, every feed, every “recommended” post is shaped by algorithms built to maximize engagement and auction off your attention. And it’s not just companies pushing shoes or handbags. It’s political groups paying to shift your vote. It’s movements paying to make you hate certain people because you think they hate you. It’s hostile governments paying to fracture our society.

Because our lives are so transparent to the surveillance machine, we’re more susceptible to manipulation than ever. Polarization isn’t cultural drift. When surveillance becomes the operating system of the internet, polarization and manipulation are the natural consequences.

The internet is a manipulation machine

Few people are really aware of how much manipulation there is online. We all fancy ourselves independent thinkers. We like to think we make up our own minds about things. That we choose for ourselves which videos to watch next. That we discover interesting articles all on our own.

We want to believe we’re in control. But in a system where people are constantly paying to influence us, that independence is hard to defend. The truth is, our autonomy is far more fragile than we’d like to admit.

This influence creeps into our entire online experience.

Every time you load a web page, the text appears first alongside empty white boxes, and there’s a split second before those boxes are filled in. What’s happening in that split second is an auction, part of what’s called a real-time bidding (RTB) system.

In Google’s RTB system, for example, what’s going on behind the scenes in that split second is that Google announces to its Authorized Buyers, the bidders plugged into Google’s ad exchange:

“Hey, this person just opened up her webpage, here’s everything we know about her. She has red hair. She rants a lot about privacy. She likes cats. Here’s her device, location, browsing history, and this is her inferred mood. Who wants to bid to put an ad in front of her?”

These authorized buyers have milliseconds to decide whether to bid and how much.

This “firehose of data” is sprayed at potentially thousands of entities. And the number of data points included can be staggering. Google knows a LOT about you. Only one buyer wins the ad slot and pays, but potentially thousands will get access to that data.

Google doesn’t make its Authorized Buyers list public, but it does publish a Certified External Vendors (CEV) list: a public-facing list of vendors, like demand-side platforms, ad servers, and analytics providers, that Google has certified to interact with its ad systems. This CEV list is the closest proxy the public gets to knowing who is involved in this real-time bidding system.

And if you scroll through the names of some of these vendors, you won’t even find a Wikipedia page for many of them. A huge number have scrubbed themselves from the internet. It’s a mix of ad companies, data brokers, even government shell companies. And you can bet many of them are just sitting quietly in these auctions to scrape this data, to share or sell elsewhere, or to use for other purposes. Regardless of what Google’s own Terms of Service say, once this data leaves Google’s hands, Google has no control.
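
To make the mechanics concrete, here’s a minimal sketch in Python of how such an auction works in principle. Everything in it is hypothetical: the bidder names, data fields, and prices are invented, and real exchanges use standardized protocols (such as the IAB’s OpenRTB) with strict millisecond deadlines. The detail worth noticing is the quiet observer: it never wins, but it sees the full profile anyway.

    from dataclasses import dataclass
    from typing import Callable, List, Optional

    @dataclass
    class BidRequest:
        # Hypothetical stand-in for the profile an ad exchange broadcasts.
        page_url: str
        device: str
        location: str
        interests: List[str]      # inferred from browsing history
        inferred_mood: str        # inferred emotional state

    @dataclass
    class Bid:
        buyer: str
        price: float              # offer for this single ad impression
        ad_url: str

    def shoe_advertiser(req: BidRequest) -> Optional[Bid]:
        # A conventional bidder: bids only when the profile matches.
        if "shoes" in req.interests:
            return Bid("ShoeCo", 0.40, "https://ads.example/shoe_banner")
        return None

    def quiet_observer(req: BidRequest) -> Optional[Bid]:
        # A bidder that never intends to win: it sits in the auction
        # purely to receive, and keep, the profile data.
        print(f"logged: {req.location}, {req.interests}, {req.inferred_mood}")
        return None

    def run_auction(req: BidRequest,
                    bidders: List[Callable[[BidRequest], Optional[Bid]]]) -> Optional[Bid]:
        # Broadcast the request to every bidder and sell the slot to the
        # highest offer. Only the winner pays, but every bidder has
        # already seen the full request by the time the auction closes.
        bids = [bid for bidder in bidders if (bid := bidder(req)) is not None]
        return max(bids, key=lambda b: b.price, default=None)

    request = BidRequest(
        page_url="https://news.example/article",
        device="iPhone",
        location="Austin, TX",
        interests=["privacy", "cats", "shoes"],
        inferred_mood="frustrated",
    )
    print("winner:", run_auction(request, [shoe_advertiser, quiet_observer]))

Only one bidder pays, but in this toy version, as in the real system, the losing observer walks away with the data for free.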

This real-time bidding system is just one behind-the-scenes mechanism of the influence economy. But this machinery of influence is everywhere, not just when you load a webpage.

When you go to watch a video, thumbnails appear next to it suggesting what you should watch next, and you click on one if it looks interesting. Those thumbnails are not accidental.

When you scroll a social media timeline, the posts that populate are intentional. Everywhere you go, you’re seeing things that people have paid to put in front of you, hoping to nudge you one way or another. Even search results, which feel like neutral gateways to information, are arranged according to what someone else wants you to see.

This system of manipulation isn’t limited to simple commercial influence, where companies just want to get us to buy a new pair of shoes.

There are faceless entities paying to shape our thoughts, shift our behavior, and sway our votes. They work to bend our worldview, to manipulate our emotions, even to make us hate other people by convincing us those people hate us.

Where privacy comes in

This is where privacy comes into play.

The more a company or government knows about us, the easier it is to manipulate us.

  • If we allow every email to be scanned and analyzed, every message to be read, every like, scroll, and post to be fed into a profile about us…
  • If companies scrape every browser click, every book we read, every piece of music we listen to, every film we watch…
  • If faceless entities know everywhere we go, whom we meet, and what we do, and then trace who those people meet, where they go, and what they do, until our entire social graph is mapped…

Then the surveillance industrial complex knows us better than we know ourselves, and it becomes easy to figure out exactly what will make us click.

“Oh, Naomi is sad today. She’ll be more susceptible to this kind of messaging. Push it to her now.”

Profiles aren’t just about facts. They’re about state of mind. If the system can see that you’re tired, lonely, or angry, it knows exactly when to time the nudge.
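
As a purely illustrative sketch of what a mood-timed nudge might look like in code (the fields, thresholds, and messages below are all invented; no platform publishes its actual targeting logic):

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Profile:
        # Hypothetical inferred-state fields, built from behavioral
        # signals like scroll speed, session length, and dwell time.
        name: str
        inferred_mood: str        # e.g. "sad", "angry", "neutral"
        hours_online_today: float

    def pick_nudge(profile: Profile) -> Optional[str]:
        # Invented rule: deliver emotionally charged content when the
        # profile suggests the user is most reactive.
        if profile.inferred_mood == "sad" and profile.hours_online_today > 2:
            return "push: reassuring ad with in-group framing"
        if profile.inferred_mood == "angry":
            return "push: outrage post about the out-group"
        return None               # wait for a more vulnerable moment

    print(pick_nudge(Profile("Naomi", "sad", 3.5)))

The real systems are vastly more sophisticated, but the principle is the same: the richer the profile, the better-timed the nudge.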

Who are the players?

This isn’t just about platforms experimenting with outrage to keep us online. Entire government departments now study these manipulation strategies. When something goes viral, they try to trace where it started: “Was it seeded by a hostile nation, a domestic political shop, or a corporation laying the groundwork for its next rent-seeking scheme?”

Everyone with resources uses these tools. Governments, parties, corporations, activist networks. The mechanism is the same, and the targets are us.

The entire internet runs on a system where people are competing for our attention, and some of the agendas of those involved are downright nefarious.

These systems don’t just predict what we like and hate, they actively shape it, and we have to start realizing that sometimes division itself is the intended outcome.

Filter bubbles were only the beginning

For years, the filter bubble was the go-to explanation for polarization. Algorithms showed us more of what we already agreed with, so we became trapped in echo chambers. We assumed polarization was just the natural consequence of people living in separate informational worlds.

But that story is only half right, and dangerously incomplete.

The real problem isn’t just that we see different things.
It’s that we are being deliberately targeted.

Governments, corporations, and movements know so much about us that they can do more than keep us in bubbles. They can reach inside those bubbles to provoke us, push us, and agitate us.

Filter bubbles were about limiting information. Surveillance-driven targeting is about exploiting information. With enough data, platforms and their partners can predict what will outrage you, when you’re most vulnerable, and which message will make you react.

And that’s the crucial shift. Polarization today isn’t just a byproduct of passive algorithms. It’s the direct result of an influence machine that knows us better than we know ourselves, and uses that knowledge to bend us toward someone else’s agenda.

Fakes, fragments, and manufactured consensus

We live in a world of deepfakes.

We live in a world of soundbites taken out of context.

We live in an era where it’s easier than ever to generate AI fluff. If someone wants to make a point of view seem popular, they can instantly create thousands of websites, all parroting the same slightly tweaked narrative. When we go searching for information, it looks like everyone is in consensus.

Volume now looks like truth, and repetition now looks like proof. And both are cheap.

Remember your humanity

In this era of artificial interactions, manipulation, and engineered outrage, we can’t forget our humanity.

The person you’re fighting with might not actually be a human; they might be a bot.

That story about that political candidate might have been taken completely out of context, and deliberately targeted at you to make you angry.

Online, we dehumanize each other. But we should instead remember how to talk. Ideas can be discussed without becoming triggers; they don’t have to send us spiraling into four hours of doomscrolling.

Fear is the mind-killer. When something online pushes you to react, pause. Ask whose agenda this serves. Ask what context you might be missing.

The path forward

We are more polarized than ever, largely because we’ve become so transparent to those who profit from using our emotions against us.

Privacy is our ally in this fight. The less companies and governments know about us, the harder it is for them to manipulate us. Privacy protects our autonomy in the digital age.

And we need to see each other as humans first, not as avatars in someone else’s propaganda war. The person you’re arguing with was probably targeted by a completely opposite campaign.

We’ll all be better off if we lift the veil on this manipulation, and remember that we are independent thinkers with the power to make up our own minds, instead of being led by those who want to control us.

 

Yours in Privacy,
Naomi

Naomi Brockwell is a privacy advocate and professional speaker, MC, interviewer, producer, and podcaster, specializing in blockchain, cryptocurrency, and economics. She runs the NBTV channel on Rumble.


OpenAI monitors ChatGPT chats – can report users to police

Published today 11:21
– By Editorial Staff
What have been perceived as private AI conversations can now end up with the police.
2 minute read

OpenAI has quietly begun monitoring users’ ChatGPT conversations and can report content to law enforcement authorities.

The revelation comes after incidents where AI chatbots have been linked to self-harm behavior, delusions, hospitalizations and suicide – what experts call “AI psychosis”.

In a blog post, the company acknowledges that it systematically scans users’ messages. When the system detects users planning to harm others, the conversations are directed to a review team that can suspend accounts and contact police.

“If human reviewers determine that a case involves an imminent threat of serious physical harm to others, we may refer it to law enforcement”, writes OpenAI.

In practice, the new policy means that millions of users have their conversations scanned, and that what many perceived as private conversations with an AI are now subject to systematic surveillance, with content that can be forwarded to authorities.

Tech journalist Noor Al-Sibai at Futurism points out that OpenAI’s statement is “short and vague” and that the company does not specify exactly what types of conversations could lead to police reports.

“It remains unclear which exact types of chats could result in user conversations being flagged for human review, much less getting referred to police”, she writes.

Security problems ignored

Ironically, ChatGPT has proven vulnerable to “jailbreaks” where users have been able to trick the system into giving instructions for building neurotoxins or step-by-step guides for suicide. Instead of addressing these fundamental security flaws, OpenAI is now choosing extensive surveillance of users.

The surveillance stands in sharp contrast to the tech company’s stance in the lawsuit brought by the New York Times, where the company “steadfastly rejected” demands to hand over ChatGPT logs, citing user privacy.

“It’s also kind of bizarre that OpenAI even mentions privacy, given that it admitted in the same post that it’s monitoring user chats and potentially sharing them with the fuzz”, Al-Sibai notes.

May be forced to hand over chats

OpenAI CEO Sam Altman has recently acknowledged that ChatGPT does not offer the same confidentiality as conversations with real therapists or lawyers, and due to the lawsuit, the company may be forced to hand over user chats to various courts.

“OpenAI is stuck between a rock and a hard place”, writes Al-Sibai. The company is trying to handle the PR disaster from users who have suffered mental health crises, but since it is “clearly having trouble controlling its own tech”, it falls back on “heavy-handed moderation that flies in the face of its own CEO’s promises”.

The tech company says that it is “currently not” reporting self-harm cases to police, but the wording suggests that even this could change. The company has also not responded to requests to clarify what criteria are used for the surveillance.

A Bell Labs for privacy

What Bell Labs taught us about orchestrating breakthroughs, and how we can use those lessons to push back against surveillance today.

Published 13 September 2025
– By Naomi Brockwell
9 minute read

I’ve been reading The Idea Factory by Jon Gertner, and it’s fascinating. It tells the story of Bell Labs, the research arm of AT&T, and a singular moment in history when a small community of scientists and engineers played a huge role in inventing much of the modern world. From the transistor to information theory, from lasers to satellites, a staggering number of breakthroughs trace their origins to this one place.

The book asks: what made this possible?

It wasn’t luck. It was deliberate design. Bell Labs proved that invention could be engineered: you can create the right environment to make breakthroughs more likely. With the right structure, culture, and incentives, it’s possible to give technological progress its best possible chance.

And this got me thinking: what’s the most effective way to move privacy and decentralized tech forward? Perhaps the internet itself has taken on the role Bell Labs once played, becoming the shared space where ideas collide, disciplines mix, and breakthroughs emerge. If so, how do we best harness this potential?

A factory for ideas

After World War II, Mervin Kelly, Bell Labs’ president, asked a radical question: could invention itself be systematized? Instead of waiting for breakthroughs, could he design an environment that produced them more reliably?

He thought the answer was yes, and reorganized Bell Labs accordingly. Metallurgists worked alongside chemists, physicists with mathematicians, engineers with theorists. Kelly believed the greatest advances happened at the intersections of fields.

There were practical reasons for cross-disciplinary teams too. When you put a theorist beside an experimentalist or engineer, hidden constraints surface early, vague ideas become testable designs, bad ideas die faster, and good ones escape notebooks and turn into working devices.

Bell Labs organized its work into a three-stage pipeline for innovation:

  1. Basic research: scientists exploring fundamental questions in physics, chemistry, and mathematics. This was the source of radical, sometimes “impractical” ideas that might not have an immediate use but expanded the frontier of knowledge.
  2. Applied research: engineers and theorists who asked which discoveries could actually be applied to communication technology. Their role was to translate abstract science into potential uses for AT&T’s vast network.
  3. Development and systems engineering: practical engineering teams who built the devices, refined the systems, and integrated them into the company’s infrastructure so they could work at scale in the real world.

This pipeline meant that raw science didn’t just stay theoretical. It became transistors in radios, satellites in orbit, and digital switching systems that powered the modern telephone network.

Bell Labs’ building architecture was designed to spark invention as well. At the Murray Hill campus, famously long corridors linked departments to trigger chance encounters. A physicist might eat lunch with a metallurgist. A chemist might bump into an engineer puzzling over a problem. And there was a cultural rule: if a colleague came to your door for help, you didn’t turn them away.

Causation is hard to prove, but the lab’s track record in the years that followed was remarkable:

  • The transistor (1947): John Bardeen, Walter Brattain, and William Shockley replaced bulky vacuum tubes and launched the electronics age.
  • Information theory (1948): Claude Shannon created the mathematics of communication, the foundation of everything from the internet to data encryption.
  • And much more: semiconductor and silicon device advances; laser theory and early lasers (including a 1960 continuous-wave gas laser); the first practical silicon solar cell (1954); major contributions to digital signal processing and digital switching; Telstar satellite communications (1962). The list goes on.

The Secret Sauce… it’s not what you think

Some people may argue that Bell Labs succeeded for other reasons. They point to government protection, a regulated market, defense contracts, and deep pockets. Those things were real, but they are not a sufficient explanation. Plenty of money is poured into research that goes nowhere. And protected monopolies often stagnate, because protection reduces the incentive to improve.

What Bell Labs’ resources did buy was proximity. Kelly’s goal was to gather great talent under one roof and strategically increase the chances they would interact and work together. He built a serendipity machine.

The real lesson to take away from Bell Labs isn’t about money. It’s about collaboration and chance encounters.

By seating different disciplines side by side, they could connect, collaborate, and share insights directly. Building on one another’s ideas and sparking new ones led to a staggering array of advances at Bell Labs in the post-war decade.

In Kelly’s day, the best way to give cross-pollination a real chance was to get people together in person, and that took a large amount of money from a behemoth corporation like AT&T.

If we wanted to manufacture the same kind of world-changing collaboration to push the privacy movement forward today, would we need AT&T-level resources?

Not necessarily. The internet can’t replicate everything Bell Labs offered, but it does mimic a lot of the value. Above all, it gives us the most powerful tools for connection the world has ever seen. And if we use those tools with intent, it’s possible to drive the same kind of serendipity and collaboration that once made Bell Labs extraordinary.

A decentralized Bell Labs

Kelly emphasized that casual, in-person encounters were irreplaceable.

A phone call didn’t suffice because it was usually scheduled, purposeful, and limited.

What he engineered was serendipity, like bumping into someone, overhearing a problem, and having an impromptu brainstorm.

Today, the internet in many ways mimics those chance encounters. What once required hundreds of millions of dollars and government contracts can now be achieved with a laptop and an internet connection.

  1. Open work in public: GitHub issues, pull requests, and discussions can now be visible to anyone. A stranger can drop a comment, file a bug, or propose a fix. This is the digital version of overhearing a whiteboard session and joining in.
  2. Frictionless publishing: Research papers, blog posts, repos, and demos can go live in minutes and reach millions. People across disciplines can react the same day with critiques, code, or data.
  3. Shared problem hubs: Kaggle competitions, open benchmarks, and Gitcoin-style bounties concentrate diverse talent on the same challenge. Remote hackathons add the social, time-bound pressure that sparks rapid collaboration, much as at Bell Labs, where Kelly deliberately grouped many of the smartest people around the same hard problem, and clusters of scientists would swarm the same puzzle, debate approaches in real time, and push each other toward breakthroughs.
  4. Topic subscriptions, not just people: Following tags, keywords, or RSS feeds brings in ‘weak-tie’ expertise from outside your circle. ‘Weak ties’ comes from social network theory: ‘strong ties’ are your close friends and colleagues, and you often share the same knowledge. ‘Weak ties’ are acquaintances, distant colleagues, or people in other fields, and they’re more likely to introduce new information or perspectives you don’t already have. So when you follow topics (like ‘post-quantum cryptography’ or ‘homomorphic encryption’) instead of just following individual people, you start seeing insights from strangers in different circles. That’s where fresh breakthroughs often come from — not the people closest to you, but the weak ties on the edges of your network.
  5. Remixes and forks: On places like GitHub, instead of just commenting on someone’s work, you can copy it, modify it, and publish your own version. That architecture encourages people to extend ideas. It’s like in a Bell Labs meeting where instead of only talking, someone picks up the chalk and adds to the equation on the board.
  6. Chance discovery: Digital town halls expose you to reposts, recommendations, and trending threads you might never have gone looking for. Maybe someone tags you in a post they think you’d find useful, or you have cultivated a “list”, where you follow a group of accounts that consistently have interesting thoughts. These small nudges can create a digital form of the ‘hallway collision’ Kelly tried to design into Bell Labs.
  7. Cross-linking and citation trails: Hyperlinks, related-paper tools, and citation networks help you move from one idea to another, revealing useful work you did not know to look for. It’s like walking past ten doors you didn’t know you needed to knock on.
  8. Lightweight face time: AMAs, livestream chats, and open office hours give people a simple way to drop in, ask questions, and get unstuck, and are the digital equivalent of popping by someone’s desk.

Now, anyone can tap into a global brain trust. A metallurgist in Berlin, a cryptographer in San Francisco, and a coder in Bangalore can share code, publish findings, and collaborate on the same project in real time. Open-source repositories let anyone contribute improvements. Mailing lists and forums connect obscure specialists instantly. Digital town squares recreate the collisions Kelly once designed into Murray Hill.

What once depended on geography and monopoly rents has been democratized. And we already have proof this model works. For example, Linux powers much of the internet today, and it is the product of a largely decentralized, voluntary collaboration across borders. It is a commons built by thousands of contributors.

The internet is nothing short of a miracle. It is the infrastructure that makes planetary-scale cross-pollination possible.

The question now is: what are the great challenges of our time, and how can we deliberately accelerate progress on them by applying the lessons Bell Labs taught us?

The privacy problem

Of all the challenges we face, privacy is among the most urgent. Surveillance is no longer the exception; it is the norm.

The stakes for advancing privacy in our everyday lives are high: surveillance is growing day by day, with governments buying massive databases from brokers and corporations tracking our every move. The result is a chilling effect on human potential. Under constant observation, people self-censor, conform, and avoid risk; creativity fades and dissent weakens.

Privacy reverses that. It creates the conditions for free thought and experimentation. In private, people can test controversial ideas, take risks, and fail without fear of judgment. That freedom is the soil in which innovation grows.

Privacy also safeguards autonomy. Without control over what we reveal and to whom, our decisions are subtly manipulated by those who hold more information about us than we hold about them. Privacy rebalances that asymmetry, letting us act on our own terms.

At a societal level, privacy prevents conformity from hardening into tyranny. If every action and association is observed, the boundaries of what is acceptable shrink to the lowest common denominator. Innovation, whether in science, art, or politics, requires the breathing room of privacy to flourish.

In short, privacy is not just a shield. It is a precondition for human flourishing, and for the breakthroughs that push civilization forward.

If we want freedom to survive in the digital age, we must apply the Bell Labs model to accelerate privacy innovation with the same deliberate force that once created the transistor and the laser.

Just as Bell Labs once directed its collective genius toward building the information age, we must now harness the internet’s collaborative power to advance the lived privacy of billions across the globe.

The call to build

Kelly’s insight was that breakthroughs do not have to be random. They can be nurtured, given structure, and accelerated. That is exactly what we need in the privacy space today.

The internet already gives us the structure for invention at a global scale. But privacy has lagged, because surveillance has stronger incentives: data is profitable, governments demand back doors, and convenience keeps people locked in. The internet is not a cure-all either: it produces noise, and unlike Bell Labs, there is no Kelly steering the ship. It’s up to us to curate what matters, chart our own course, and use these tools deliberately if we want them to move privacy forward.

The best future is not one of mass surveillance. It is one where people are free to think, create, and dissent without fear. Surveillance thrives because it is organized. Privacy must be too.

The future will not hand us freedom. We have to build it.

 

Yours in Privacy,
Naomi

Naomi Brockwell is a privacy advocate and professional speaker, MC, interviewer, producer, and podcaster, specializing in blockchain, cryptocurrency, and economics. She runs the NBTV channel on Rumble.

AI company pays billions in damages to authors

Published 10 September 2025
– By Editorial Staff
The AI company has used pirated books to train its AI bot Claude.
1 minute read

AI company Anthropic is paying $1.5 billion to hundreds of thousands of authors in a copyright lawsuit. The settlement is the first and largest of its kind in the AI field.

It was last year that authors Andrea Bartz, Charles Graeber and Kirk Wallace Johnson filed a lawsuit against Anthropic for using pirated books to train its AI Claude.

In June, a federal judge ruled that it was not illegal to train AI chatbots on copyrighted books, but that Anthropic had wrongfully obtained millions of books via pirate sites.

Now Anthropic has agreed to pay approximately $3,000 for each of the estimated 500,000 books covered. In total, this amounts to $1.5 billion.

First of its kind

The settlement is the first in a series of legal proceedings ongoing against AI companies regarding the use of copyrighted material for AI training. Among others, George R.R. Martin together with 16 other authors has sued OpenAI for copyright infringement.

“As best as we can tell, it’s the largest copyright recovery ever”, says Justin Nelson, lawyer for the authors, according to The Guardian. “It’s the first of its kind in the AI era.”

If Anthropic had not agreed to the settlement, experts say it could have cost significantly more.

“We were looking at a strong possibility of multiple billions of dollars, enough to potentially cripple or even put Anthropic out of business”, says William Long, legal analyst at Wolters Kluwer.

Spyware takes photos of porn users for blackmail

Published 9 September 2025
– By Editorial Staff
Strangely enough, Stealerium is distributed as free open source code on GitHub.
2 minute read

Security company Proofpoint has discovered malicious software that automatically photographs users through their webcams when they visit pornographic sites. The images are then used for extortion purposes.

The new spyware Stealerium has a particularly disturbing function: it monitors the victim’s browser for pornography-related search terms like “sex” and “porn”, while simultaneously taking screenshots and webcam photos of the user, sending everything to the hacker.

Security company Proofpoint discovered the software in tens of thousands of email messages sent since May this year. Victims were tricked into downloading the program through fake invoices and payment demands, primarily targeting companies in hospitality, education and finance.

— When it comes to infostealers, they typically are looking for whatever they can grab, says Selena Larson, a researcher at Proofpoint, to Wired.

— This adds another layer of privacy invasion and sensitive information that you definitely wouldn’t want in the hands of a particular hacker. It’s gross. I hate it, she adds.

Available openly on GitHub

In addition to the automated sextortion function, Stealerium also steals traditional data such as banking information, passwords and cryptocurrency wallet keys. All information is sent to the hacker via services like Telegram, Discord or email.

Strangely, Stealerium is distributed as free open source code on GitHub. The developer, who calls himself witchfindertr and claims to be a “malware analyst” in London, maintains that the program is “for educational purposes only”.

— How you use this program is your responsibility. I will not be held accountable for any illegal activities. Nor do i give a shit how u use it, the developer writes on the page.

Kyle Cucci, also a researcher at Proofpoint, calls automated webcam images of users browsing porn “pretty much unheard of”. The only similar case was an attack against French-speaking users in 2019.

New trend among cybercriminals

According to Larson, the new type of attacks may be part of a larger trend where smaller hacker groups are turning away from large-scale ransomware attacks that attract authorities’ attention.

— For a hacker, it’s not like you’re taking down a multimillion-dollar company that is going to make waves and have a lot of follow-on impacts. They’re trying to monetize people one at a time. And maybe people who might be ashamed about reporting something like this, Larson explains.

Proofpoint has not identified specific victims of the sextortion function, but believes that the function’s existence suggests it has likely already been used.
