AI expected to boost Swedish economy – and leave many jobless

Published 15 April 2024
– By Editorial Staff
A wide variety of practical tasks can now be performed by AI-controlled robotic workers.
2 minute read

Artificial intelligence could soon add up to SEK 550 billion to Sweden’s GDP, according to a report commissioned by Google.

At the same time, a number of professions are expected to be automated and “disappear” as a result of AI developments – including translators, service workers and salespeople. It is also unclear who will reap the benefits.

According to the report, which was commissioned by Google and prepared by the consulting firm Implement, so-called “generative AI” could increase Sweden’s annual GDP by about 9 percent – or between 500 and 550 billion kronor – over the next decade.

Robot workers in factories and industries are already a reality in many places. The next major step in the automation of society is expected to affect mainly office services, and it is in areas such as information, communication, finance, insurance, business services, education and health care that artificial intelligence is expected to have the greatest “productivity gains” in the future.

Generative AI is a type of AI that can create digital content such as text, images, music or movies by “learning” from large amounts of data and generating new unique content similar to what it has been trained on.

Optimize or replace

– Here, generative AI tools are more likely to complement what humans do. Only a small share of jobs will be significantly affected by automation, claims Anna Wikland, head of Google Sweden, according to the tax-funded SVT.

Six percent of jobs are expected to be replaced on a large scale by AI in the future – including call center staff, office support functions, technicians, salespeople, service personnel and translators. Many other professions are expected to be affected in various ways by the development of the technology.

At the same time, those who work in construction, cleaning, cooking or caregiving need not be particularly concerned, as AI is considered unlikely to have any significant impact on those fields.

“Need for discussion”

US authorities have previously warned that “AI can increase income inequality if it is used to replace people in low-income jobs while strengthening people in high-income jobs”, and there are concerns that big companies will reap the profits from AI developments – while ordinary people see their jobs disappear forever.

– At the beginning of industrialization, it seemed that all productivity gains were almost exclusively in the hands of corporations, but then there was a political moment when it was reversed and redistributed in different ways across society, says Nicklas Lundblad, a member of the Swedish government’s AI commission.

– Everyone has to take a stand. It doesn’t have to go one way or the other. We need a discussion as a society, he continues.


5 ways that the Patriot Act destroyed financial privacy

Published yesterday 8:09
– By Naomi Brockwell
6 minute read

This week I was asked to give a presentation in DC about the history of financial surveillance. Now, most people know that the Patriot Act did tremendous damage to privacy in general. But fewer people understand the extent of damage that it did to financial privacy in particular.

Basically, the Patriot Act was the Bank Secrecy Act on steroids.

In this newsletter I want to look at Title III of the Patriot Act: The International Money Laundering Abatement and Financial Anti-Terrorism Act of 2001, and 5 ways that the Patriot Act destroyed financial privacy.

This was a moment in time that radically expanded financial surveillance under what we were told was a temporary measure, but it ended up lasting forever.

1. KYC

The Patriot Act standardized and mandated “Know Your Customer” (KYC) rules across all financial institutions.

Before 2001, “KYC” existed in principle but was largely left to the banks. Institutions set their own risk tolerance and decided what customer information to collect. A community bank might rely on non-documentary checks and longstanding relationships, while a larger bank might collect more documents.

For example, if Betty wanted to open a bank account and you had known her since she was five, and her parents had held an account with you for 20 years, you would already have a pretty good sense of the risk she posed as a customer. As a business, you decided what you needed from her before letting her open an account.

The Patriot Act introduced minimum ID standards. It mandated a Customer Identification Program (CIP) at every bank, broker-dealer, mutual fund, and similar institution in the US. These entities had to collect and verify government-issued IDs for every customer. They also had to cross-check identities against government watchlists.

This is when privacy in banking effectively ended and financial anonymity became illegal. “Risk-based KYC” went from a business choice to a legal requirement.

2. Expanded definitions

The Patriot Act broadened the definition of a “money transmitter”: Before 2001 it was just “a licensed sender of money”. After the change it covered any person engaged as a business in transmitting funds, including informal money transfer systems. That was a big expansion.

The Patriot Act also imposed an AML-program mandate across financial institutions, extending coverage to additional sectors.

This massively widened the surveillance reach, pulling basically every financial touchpoint into a federal dragnet.

3. Data sharing

The Patriot Act allowed unprecedented data sharing across agencies and borders.

Sections 351 and 358 broke down specific information-sharing barriers between the FBI, CIA, NSA, FinCEN, and foreign governments.

For example, SARs (Suspicious Activity Reports) were formalized under the 1992 Annunzio-Wylie Anti–Money Laundering Act, which strengthened reporting rules. Banks were required to file reports on ANYTHING they deemed suspicious about how someone was using their own money. SARs were originally confined to Treasury oversight and could be shared with law enforcement. But the Patriot Act expanded this access so that these reports could be shared freely with intelligence agencies.

Under the Annunzio-Wylie Anti–Money Laundering Act it was already illegal for banks to tell customers when a suspicious activity report was filed. But the Patriot Act extended “safe harbor” provisions, encouraging banks to proactively share customer data with intelligence agencies without fear of being sued by the customer, because they would have legal immunity.

The immunity covered liability under “any contract or other legally enforceable agreement”. So if you had a contract with your bank that it would keep your information private? The government said the bank now had immunity if it shared that information and broke the contract.

(Just to put this into context: the 4th amendment is meant to stop the government from getting your information without a warrant. So instead, the government mandated that banks collect that information, and then granted the banks legal immunity for sharing it with the government. An egregious overstep of what was meant to be a constitutional protection, if ever I’ve seen one.)

Additionally, Section 314(b) created a safe harbor for financial institutions to share customer and activity information with other financial institutions when they in good faith suspect money laundering or terrorist financing.

So the net result of these safe harbor rules was that banks were REQUIRED to report SARs and other information to the government, were legally shielded when they did so aggressively and proactively, and were also allowed to exchange intelligence with other banks. It basically fueled the private-sector surveillance grid we have today and deputized the financial system as an investigatory agent within it.

Mass data pipelines from private banks to the surveillance state were legalized overnight.

4. Foreign surveillance

The Patriot Act authorized surveillance of correspondent and foreign accounts.

A “correspondent account” is a US bank account opened by a foreign bank so that foreign customers can move dollars, clear wires, and access the US financial system.

Think of it as the on-ramp to the dollar network for non-US banks.

The Patriot Act forced US banks to perform “enhanced due diligence” on all correspondent accounts held for foreign banks. It made dollar-clearing a surveillance chokepoint: any transaction touching the US financial system was now subject to monitoring.

It also added extraterritorial subpoena and forfeiture reach. If a foreign bank uses a US correspondent account, US authorities can subpoena records held abroad related to that account, and can freeze or seize money sitting in that US account. This extended American surveillance standards globally and made access to dollars conditional on cooperation. It also allowed local confidentiality or privacy rules to be overridden. The result is a chilling effect: many foreign banks simply close accounts for whole customer groups or regions to avoid US penalties, even when those customers are acting legally where they live.

5. Bank/Intelligence marriage

The Patriot Act hard-wired banks into intelligence investigations, and made the relationship permanent. For example, it introduced something called government “broadcast lookups”. This is where FinCEN can blast a query to thousands of financial institutions (like “do you have anything on X person/entity?” or “do you have anything matching these patterns?”) and banks must search their records quickly and report back.

It shifted the relationship from passive reporting by banks to on-demand, system-wide queries, with banks deputized as active responders and participants.

Under the Patriot Act, FinCEN’s mission was also codified as financial intelligence. Congress tied it explicitly to collecting, analyzing, and disseminating financial data in support of law enforcement and intelligence, giving FinCEN a permanent mandate and making it a statutory intel hub.

The new normal

The Patriot Act took a crisis, used the opportunity to create a mass surveillance program in the financial sector, and then rewrote the rules for how money is allowed to move, who gets to participate, and what the government can see. Then it quietly froze those rules in place until most people forgot there had ever been another way.

But we don’t have to accept this new normal, where every customer is now treated like a suspect, or where you have to beg for permission to access your own money and hope that the person holding on to it doesn’t instead file a secret report about you.

The financial sector was conscripted into the surveillance regime because it provided a loophole around Fourth Amendment protections. We should instead insist on real warrants, not outsource surveillance to private companies.

But if we can’t roll back what has become an ingrained surveillance overreach that we all take for granted these days, at least there are now decentralized payment systems that don’t opt in to the traditional financial rails at all. These give people back some human dignity instead of subjecting them to egregious violations of their financial privacy.

I think we also need to tell a better story about risk, because endless de-risking has become a license for collective punishment that shuts people out of the financial system entirely. Of course we don’t want to protect criminals – this is about restoring traditional checks and balances, as well as basic civic norms, that used to be obvious: you should be able to use your own money without being tracked, profiled, and stored forever in a government database.

 

Yours in privacy,
Naomi

Naomi Brockwell is a privacy advocate and professional speaker, MC, interviewer, producer, and podcaster, specialising in blockchain, cryptocurrency and economics. She runs the NBTV channel on Rumble.

How young people are manipulated into shopping more

Published 16 October 2025
– By Editorial Staff
Every time young people pick up their phones, they're met with purchase prompts from fashion companies – a constant stream of manipulative content.
2 minute read

Several fashion companies use so-called “purchase trigger mechanisms” on their websites to manipulate customers into buying more, according to a survey by the Swedish Society for Nature Conservation (Naturskyddsföreningen). Another study shows that young girls are constantly exposed to a stream of these purchase triggers on their mobile phones.

The survey examined the presence of purchase triggers on the websites of nine major fashion companies. The sites reviewed were H&M, Lindex, KappAhl, NA-KD, Nelly, Boozt, Zalando, Ellos, and Shein.

Purchase triggers refer to content whose sole aim is to increase consumption on the sites, for example by exploiting psychological and cognitive weaknesses in consumers.

In total, eleven such purchase triggers were examined, identified as: urgency/time scarcity, scarcity, popularity, exclusivity, inspiration, low risk, incentives/offers, shortcuts, lock-in mechanisms, good deals, and user-generated content.

The results show that all reviewed fashion companies use purchase triggers, but to varying degrees and intensity. Nelly, Boozt, Zalando, and Shein, for example, use all identified triggers, while H&M uses them to a more limited extent. Shein was the worst offender, followed by Ellos.

– It’s frightening that all the major fashion companies we looked at use so many purchase triggers to get us to shop more and more. These triggers are based on psychological shortcuts that make us act on our buying impulses and awaken our desire to have things, which in turn leads to an unhealthy consumption pattern. This affects both the environment and climate, as well as our well-being, says Beatrice Rindevall, chair of the Swedish Society for Nature Conservation, in a press release.

“Commercial media”

In another study, the Swedish Society for Nature Conservation commissioned Ungdomsbarometern (Youth Barometer) to investigate how eight fashion-interested girls aged 16–17 are exposed to purchase triggers. The girls were asked to observe and take screenshots over several days, as well as answer questions about how they acted on what appeared on their phones in social media and their inbox.

The results show that the girls had a constant stream of purchase triggers on their phones with content that encouraged purchases.

– It shouldn’t be called social media anymore, but more accurately commercial media. It’s high time that politicians realize how vulnerable young people are and that stricter regulations are introduced for how digital consumption environments may be designed, says Rindevall.

Safety apps normalize surveillance of children

Mass surveillance

Published 15 October 2025
– By Editorial Staff
Swedish researcher Katarina Winter warns that surveillance of children has become normalized when technology is packaged as care rather than control.
3 minute read

Apps that promise to increase safety are often used for everyday logistics – and they normalize secret surveillance.

Researchers at Stockholm University have examined 48 Swedish safety apps and warn that the technology is packaged as care while ethical questions disappear.

In two research projects at Stockholm University, researchers are investigating various safety technologies in Sweden – everything from digital safety maps and security sensors to apps marketed as tools for creating safer communities. But instead of measuring whether the technology works, the researchers critically examine its consequences.

— It’s important to ask what kind of safety we’re after, and for whom? What is worth calling safety? Which actors and interests determine what constitutes safety in a society? The project on safety apps shows, among other things, how surveillance becomes normalized when we use this technology, says Katarina Winter, associate professor and senior lecturer in criminology, with a doctorate in sociology, at Stockholm University.

She leads the projects, which are conducted in collaboration with researchers from the University of Gävle and Södertörn University. The researchers have mapped 48 Swedish safety apps and interviewed both developers and users, including parents who use apps to keep track of their children.

“The technology is so kindly framed”

A central finding is how normalized it has become to monitor children, often without their knowledge.

— One example is how normalized it has become to monitor your children even though they don’t know about it, although some have an agreement with their children. Because the technology is so kindly framed – that it’s about protecting the children – it doesn’t become something you have to stand up for as a parent. The normalization can therefore happen under the radar. When technology is packaged as care, we easily lose sight of the ethical questions, she explains.

The surveillance also affects family relationships.

— Many use the apps to avoid nagging their children, and in the short term that may be convenient and simplify family logistics. But something happens on an interpersonal level: we cut off part of the interaction with each other. It’s seen as deviant behavior if you don’t want to share your location, which I think is negative.

Confusing messages during adult education center shooting

The researchers see a clear discrepancy between developers’ ideals about a safer society and how the apps are actually used. For private individuals, it’s often about completely different things than safety.

— In a way, these parents reproduce an insecurity in society related to crime and vulnerability when they justify why they use an app. But in reality, it’s often extremely connected to everyday logistics – when should I start cooking the pasta depending on where my child is? explains the criminologist.

The researchers have also examined the school safety app CoSafe, which was used during the shooting at Campus Risbergska, an adult education center in Örebro, central Sweden. The app was criticized for sending contradictory alerts about both evacuation (leaving the building) and lockdown (staying inside and seeking shelter). Of the eleven people killed in total, two were students who followed the instruction to evacuate instead of sheltering indoors.

— The Risbergska case demonstrates the complexity of technical solutions for crisis situations. While the app may have helped some seek shelter, the incident raises important questions about responsibility distribution and technical reliability when it comes to life and death, Winter notes.

Private actors profit from insecurity

The researcher also sees how private companies use the public debate about insecurity to sell their solutions, particularly to municipalities.

— We have both a political landscape that focuses on insecurity and a market that takes it on because it’s in focus. It’s logical that opportunities for entrepreneurship emerge from the societal debate we’re in, but it becomes more brutal when it comes to safety than with other phenomena – partly because actors profit from portraying society as unsafe, and partly because companies are generally interested in specific user groups that may not have many safety problems.

She calls for a critical attitude toward technological optimism.

— It’s important to pause on these questions that otherwise tend to rush ahead in a kind of faith that ‘now everything will be better because we have new technology’. When the overarching word is safety, questions about surveillance and privacy risk being deprioritized.

Netherlands invokes emergency law to seize Chinese chip manufacturer

Published 14 October 2025
– By Editorial Staff
Nexperia, a Dutch semiconductor manufacturer, specializes in large-scale production of microchips for the automotive industry and consumer electronics.
2 minute read

The Netherlands has seized the Chinese-owned semiconductor manufacturer Nexperia by invoking a never-before-used emergency law.

Owner Wingtech Technology is hitting back hard, calling the decision “excessive intervention driven by geopolitical bias” – and stating that the company has followed all laws and regulations.

Late Sunday evening, the Dutch Ministry of Economic Affairs announced that an emergency law had been used to nationalize Nexperia. The decision is justified by the risk that access to the company’s microchips could “become unavailable in an emergency” – something the government says threatens economic security in both the Netherlands and the EU.

Amsterdam describes the measure as “highly exceptional” and refers to “recent and acute signals of serious governance shortcomings and actions” within the company.

Nexperia produces microchips in large volumes for the automotive and electronics industries. The company was previously part of Dutch electronics giant Philips before being acquired by Chinese company Wingtech Technology.

The news had an immediate effect on the stock market. Wingtech shares fell 10 percent in Shanghai on Monday and were forced into a trading halt after reaching the daily decline limit.

In a later filing to the Shanghai Stock Exchange, Wingtech announced that the company’s control over Nexperia would be temporarily restricted due to the Dutch decision and court rulings affecting decision-making and operational efficiency.

Trade tensions between EU and China

Wingtech condemned the Dutch government’s action in a now-deleted WeChat post. The company called the decision “excessive intervention driven by geopolitical bias, rather than a fact-based risk assessment”.

The technology company added that it intends to take action to protect its rights and seek government support.

The nationalization comes amid a period of sharply increased trade tensions between the EU and China. Over the past year, the two sides have clashed over what the EU claims is Beijing’s dumping of key goods and industrial overproduction. China has in turn accused the EU of protectionism.

Last week, China tightened its restrictions on exports of rare earth metals and magnets – a move that could further damage European industries dependent on these materials.
