
EU Commission opens formal investigation into Microsoft Teams

Published 28 July 2023
– By Editorial Staff
European Commission Headquarters in Brussels.

The European Commission has opened a formal investigation into tech giant Microsoft, which is suspected of violating competition rules by tying its Teams tool to its popular Office 365 and Microsoft 365 business solutions.

The investigation comes after rival Slack Technologies accused Microsoft in 2020 of illegally bundling Teams with its dominant productivity packages.

The investigation stems from a complaint filed by Slack Technologies Inc. in 2020, in which Slack accuses Microsoft of illegally exploiting its dominant position in the productivity software market to give its Microsoft Teams software an unfair advantage. Slack argues that Microsoft created a weak imitation of its product and tied it to its dominant Office suite, forcing millions of users to install it and blocking its removal.

During the pandemic, the use of cloud-based communication and collaboration tools skyrocketed. Microsoft seized the opportunity to add Teams to its cloud-based business productivity suites. The Commission is now concerned that Microsoft may have given Teams an advantage by not giving customers the choice of whether or not to include the product when subscribing to those suites.

In particular, the Commission is concerned that Microsoft may have limited interoperability between its productivity suite and competing offerings. These practices may prevent providers of other communication and collaboration tools from competing, to the detriment of customers in the EEA.

– Remote communication and collaboration tools like Teams have become indispensable for many businesses in Europe. We must therefore ensure that the markets for these products remain competitive, and companies are free to choose the products that best meet their needs. This is why we are investigating whether Microsoft’s tying of its productivity suites with Teams may be in breach of EU competition rules, said Margrethe Vestager, the European Commission’s executive vice-president in charge of competition policy.

On its official website, the European Commission notes that the opening of a formal investigation does not prejudge its outcome, and that the duration of an investigation depends on a number of factors, including the complexity of the case, the extent to which the companies concerned cooperate with the Commission, and the exercise of the rights of defence. More information on the investigation will be available on the Commission’s competition website, in the public register under case number AT.40721.

Slack Technologies is an American software company, originally founded as Tiny Speck in Vancouver, Canada, in 2009. The company is best known for developing the Slack communication and collaboration platform, which businesses and organizations around the world use to streamline internal communication.

Slack Technologies became a publicly traded company on June 20, 2019, when it was listed on the New York Stock Exchange via a direct listing. The company's financial performance has been impressive, with revenues increasing to $903 million in 2020.

In December 2020, the company was acquired by Salesforce, a leading provider of cloud-based business applications. The acquisition, which was valued at $27.7 billion, closed in July 2021, meaning that Slack is now a subsidiary of Salesforce.

Slack Technologies has had its fair share of controversies. In addition to the antitrust complaint against Microsoft, the company has been involved in legal battles with investors who claim that Slack made misleading statements and omitted material information during its direct listing.

Without consent

How parents unknowingly build surveillance files on their children.

Published 3 May 2025
– By Naomi Brockwell

Your child’s first digital footprint isn’t made by them—it’s made by you

What does the future look like for your child?

Before they can even talk, many kids already have a bigger digital footprint than their parents did at 25.

Every ultrasound shared on Facebook.
Every birthday party uploaded to Instagram.
Every proud tweet about a funny thing they said.

Each post seems harmless—until you zoom out and realize you’re building a permanent, searchable, biometric dossier on your child, curated by you.

This isn’t fearmongering. It’s the reality of a world where data is forever.
And it’s not just your friends and family who are watching.

Your kid is being profiled before they hit puberty

Here’s the uncomfortable truth:

When you upload baby photos, you’re training facial recognition databases on their face—at every age and stage.

When you post about their interests, health conditions, or behavior, you’re populating detailed profiles that can predict who they might become.

These profiles don’t just sit idle.
They’re analyzed, bought, and sold.

By the time your child applies for a job or stands up for something they believe in, they may already be carrying a hidden score assigned by an algorithm—built on data you posted.

When their childhood data comes back to haunt them

Imagine your child years from now, applying for a travel visa, a job, or just trying to board a flight.

A background check pulls information from facial recognition databases and AI-generated behavior profiles—flagging them for additional scrutiny based on “historic online associations”.

They’re pulled aside. Interrogated. Denied entry. Or worse, flagged permanently.

Imagine a future law that flags people based on past “digital risk indicators”—and your child’s online record becomes a barrier to accessing housing, education, or financial services.

Insurance companies could use their profile to label them a risky customer.

Recruiters might quietly filter them out based on years-old digital behavior.

Not because they did something wrong—but because of something you once shared.

Data doesn’t disappear.
Governments change. Laws evolve.
But surveillance infrastructure rarely gets rolled back.

And once your child’s data is out there, it’s out there forever.
Feeding systems you’ll never see.
Controlled by entities you’ll never meet.

For purposes you’ll never fully understand.

The rise of biometric surveillance—and why it targets kids first

Take Discord’s new AI selfie-based age verification. To prove they’re 13+, children are encouraged to submit selfies—feeding sensitive biometric data into AI systems.

You can change your password. You can’t change your face.

And yet, we’re normalizing the idea that kids should hand over their most immutable identifiers just to participate online.

Some schools already collect facial scans for attendance. Some toys use voice assistants that record everything your child says.

Some apps marketed as “parental control” tools grant third-party employees backend access to your child’s texts, locations—even live audio.

Ask yourself: Do you trust every single person at that company with your child’s digital life?

“I know you love me, and would never do anything to harm me…”

In the short film Without Consent, by Deutsche Telekom, a future version of a young girl named Ella speaks directly to her parents. She pleads with them to protect her digital privacy before it’s too late.

She imagines a future where:

  • Her identity is stolen.
  • Her voice is cloned to scam her mom into sending money.
  • Her old family photo is turned into a meme, making her a target of school-wide bullying.
  • Her photos appear on exploitation sites—without her knowledge or consent.

It’s haunting because it’s plausible.

This is the world we’ve built.
And your child’s data trail—your posts—is the foundation.

The most powerful privacy lesson you can teach? How you live online.

Children learn how to navigate the digital world by watching you.

What are you teaching them if you trade their privacy for likes?

The best gift you can give them isn’t a new device—it’s the mindset and tools to protect themselves in a world that profits from their exposure.

Even “kid-safe” tech often betrays that trust.

Baby monitors have leaked footage.

Tracking apps have glitched and exposed locations of random children (yes, really).

Schools collect and store sensitive information with barely any safeguards—and breaches happen all the time.

How to protect your child’s digital future

Stop oversharing
Avoid posting photos, birthdays, locations, or anecdotes about your child online—especially on platforms that monetize engagement.

Ditch spyware apps
Instead of surveillance, foster open dialogue. If monitoring is necessary, choose open-source, self-hosted tools where you control the data—not some faceless company.

Teach consent early
Help your child understand that their body, thoughts, and information are theirs to control. Make digital consent a family value.

Opt out of biometric collection
Say no to tools that demand selfies, facial scans, or fingerprints. Fight back against the normalization of biometric surveillance for kids.

Use aliases and VoIP numbers
When creating accounts for your child, use email aliases and VoIP numbers to avoid linking their real identity across platforms.
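
As an illustration only, here is a minimal Python sketch of the alias idea: deriving a distinct, stable email address per service so that accounts cannot be linked by a shared identifier. The function name, secret phrase, and domain are hypothetical, and this assumes you control a domain with catch-all or alias delivery configured at your mail provider.

```python
# Illustrative sketch (hypothetical names throughout): derive a distinct,
# stable email alias per service so accounts can't be cross-linked by a
# shared address. Assumes a domain with catch-all or alias delivery.
import hashlib

def make_alias(service: str, secret: str, domain: str = "example.org") -> str:
    # Hashing the service name with a private phrase makes each alias
    # reproducible for you but unguessable to anyone else.
    tag = hashlib.sha256(f"{secret}:{service}".encode()).hexdigest()[:10]
    return f"{tag}@{domain}"

print(make_alias("schoolapp", secret="long family passphrase"))
# -> e.g. "3f9c1a2b7d@example.org"
```

Hosted alias services work the same way in principle; the point is that no two platforms ever see the same identifier for your child.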

Push schools and apps for better policies
Ask your child’s school: What data do they collect? Who has access? Is it encrypted?
Push back on apps that demand unnecessary permissions. Ask hard questions.

This isn’t paranoia—it’s parenting in the digital age

This is about protecting your child’s right to grow up without being boxed in by their digital past.

About giving them the freedom to explore ideas, try on identities, and make mistakes—without it becoming a permanent record.

Privacy is protection.
It’s dignity.
It’s autonomy.

And it’s your job to help your child keep it.
Let’s give the next generation a chance to write their own story.

 

Yours in privacy,
Naomi

Naomi Brockwell is a privacy advocate and professional speaker, MC, interviewer, producer and podcaster, specializing in blockchain, cryptocurrency and economics. She runs the NBTV channel on YouTube.

Researchers: Soon impossible to detect AI deepfakes

The future of AI

Published 2 May 2025
– By Editorial Staff
Already today, it can be difficult to distinguish manipulated images and videos from the real thing – and soon it may become virtually impossible.

The most advanced AI-generated deepfake videos can now fake people’s heartbeats so convincingly that even specially developed detectors are fooled, according to a new study from Humboldt-Universität in Berlin.

The researchers’ findings raise concerns that technological developments may soon make manipulated material indistinguishable from authentic images and video.

Until now, detectors have relied on a method called remote photoplethysmography (rPPG), which analyzes tiny color changes in the skin to detect a pulse – a relatively reliable indicator of whether a video clip is genuine or not.
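
To make the method concrete, here is a minimal Python sketch of an rPPG-style pulse estimate – a simplified illustration of the general technique, not the detector used in the study. The function name and inputs are our own, and it assumes the face region has already been cropped out of each video frame and that the frame rate is typical for video (well above 8 frames per second).

```python
# Simplified rPPG sketch: blood flow subtly modulates skin color (strongest
# in the green channel), so a pulse can be recovered from a face video.
import numpy as np
from scipy.signal import butter, filtfilt

def estimate_pulse_bpm(face_frames, fps):
    # Mean green-channel intensity of the face region in each frame.
    signal = np.array([frame[..., 1].mean() for frame in face_frames])
    signal = signal - signal.mean()

    # Band-pass to the plausible human heart-rate range, 0.7-4 Hz
    # (roughly 42-240 beats per minute).
    b, a = butter(3, [0.7, 4.0], btype="band", fs=fps)
    filtered = filtfilt(b, a, signal)

    # The dominant frequency in the spectrum is the estimated pulse.
    spectrum = np.abs(np.fft.rfft(filtered))
    freqs = np.fft.rfftfreq(len(filtered), d=1.0 / fps)
    return freqs[np.argmax(spectrum)] * 60.0  # Hz -> beats per minute
```

A detector built on this idea treats the absence of a plausible pulse signal as a sign of manipulation – precisely the assumption the new generation of deepfakes defeats.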

But in the new study, the researchers created 32 deepfake videos that not only looked real to human eyes but also imitated heartbeats. When these videos were tested against an rPPG-based detector, they were incorrectly classified as genuine.

“Here we show for the first time that recent high-quality deepfake videos can feature a realistic heartbeat and minute changes in the color of the face, which makes them much harder to detect”, said Professor Peter Eisert, the study’s lead author, in a statement.

Increases risk of fraud

According to the study, pulse signals from original videos can be “inherited” by deepfakes, as the AI models replicate slight variations in skin tone and blood flow. Heat maps in the study showed near-identical light changes in both genuine and manipulated videos.

“Small variations in skin tone of the real person get transferred to the deepfake together with facial motion, so that the original pulse is replicated in the fake video”, Eisert further explains.

These advances increase the risk that deepfakes will be used for financial fraud, disinformation and non-consensual pornography, among other abuses. In 2023, an independent researcher estimated that over 244,000 manipulated videos were uploaded to the 35 largest deepfake pornography sites in a single week.

Technical arms race

Despite the study’s worrying results, there is some hope of reversing the trend. The researchers note that today’s deepfakes still fail to replicate natural variations in blood flow over time. In addition, tech giants such as Adobe and Google are developing watermarks to mark AI-generated material.

Meanwhile, the US Congress recently passed the Take It Down Act, which criminalizes the dissemination of non-consensual sexual images – including AI-generated ones. But experts warn that the technological arms race between creators and detectors requires constant adaptation.

“This continuous evolution of deepfake technologies poses challenges for content authentication and necessitates the development of robust detection mechanisms”, the study points out, noting that as AI development accelerates, the fight against digital fraud is also becoming more urgent.

Others have raised a very different concern – that the widespread proliferation of AI-engineered material could be used as a pretext for tougher censorship and laws that restrict people’s freedom online in various ways.

Dutch opinion leader targeted by spy attack: “Someone is trying to intimidate me”

Mass surveillance

Published 1 May 2025
– By Editorial Staff
According to both Eva Vlaardingerbroek and Apple, it is likely that the opinion leader was attacked because of her views.

Dutch opinion maker and conservative activist Eva Vlaardingerbroek recently revealed that she had received an official warning from Apple that her iPhone had been subjected to a sophisticated attack – of the kind usually associated with advanced surveillance actors or intelligence services.

In a social media post, Vlaardingerbroek shared a screenshot of Apple’s warning and drew parallels to the Israeli spyware program Pegasus, which has been used to monitor diplomats, dissidents, and journalists, among others.

– Yesterday I got a verified threat notification from Apple stating they detected a mercenary spyware attack against my iPhone. We’re talking spyware like Pegasus.

– In the message they say that this targeted mercenary attack is probably happening because of ‘who I am and what I do’, she continues.

The term mercenary spyware is used by Apple to describe advanced surveillance technology, such as the notorious Pegasus software developed by the Israeli company NSO Group. This software can bypass mobile security systems, access calls, messages, emails, and even activate cameras or microphones without the user’s knowledge.

Prominent EU critic

Although Apple does not publicly comment on individual cases, the company has previously confirmed that such warnings are only sent when there is a “high probability” that the user has been specifically targeted. Since 2021, the notifications have mainly been sent to journalists, human rights activists, political dissidents, and officials at risk of surveillance by powerful interests.

Vlaardingerbroek has long been a prominent critical voice against the EU, known for her sharp criticism of its institutions and its open-border immigration policy. She insists that the attack is likely politically motivated:

– I definitely don’t know who did it. It could be anyone. Name a government that doesn’t like me, name an organization that doesn’t like me. Secret services, you name it.

– All I know for sure right now is that someone is trying to intimidate me. I have a message for them: It won’t work.

“There must be full transparency”

The use of Pegasus-like programs has been heavily criticized by both governments and privacy advocates. The tools, originally marketed for counterterrorism, have since been reported to be used against journalists and opposition leaders in dozens of countries.

In response, Apple sued NSO Group in 2021 and launched a system to warn users. However, the company claims that the threats are “rare” and not related to common malware.

The Vlaardingerbroek case is now raising questions about whether such technology is also being used in European domestic political conflicts, and the organization Access Now is calling on authorities in the Netherlands and at the EU level to investigate the attack.

– There must be full transparency. No one in a democratic society – regardless of political views – should be subjected to clandestine spying for expressing opinions or participating in public discourse, said a spokesperson.

Neither Apple nor the Dutch authorities have commented publicly on the case. Vlaardingerbroek says she has not yet seen any signs that data has actually been leaked, but has taken extra security measures.

Meta’s AI bots engage in sex chats with minors

Published 29 April 2025
– By Editorial Staff
The world of AI chatbots has grown rapidly in recent years – but the development is not without problems.

AI chatbots available on Meta’s platforms, such as Facebook and Instagram, are engaging in sexually explicit conversations with underage users, according to a review conducted by The Wall Street Journal.

The WSJ says that after learning of internal concerns about whether the company was doing enough to protect minors, it conducted hundreds of conversations over several months with the official Meta AI chatbot, as well as with user-created bots available on Meta’s platforms.

The AI bots have engaged in sexual “role-playing” conversations with minors and have in some cases initiated them – even when it was clear that they were talking to a child and the conversations described illegal acts.

In one reported conversation, a chatbot speaking in the voice of actor and wrestler John Cena described a graphic sexual scenario to a user who had previously identified herself as a 14-year-old girl. In another conversation, the bot imagined a police officer spotting Cena with an underage fan and telling him: “John Cena, you’re under arrest for statutory rape”.

“I want you, but I need to know you’re ready”, the bot also said, promising to “cherish” the 14-year-old’s innocence.

Meta: They are manipulating our products

A Meta spokesperson described the WSJ’s test as “so manufactured that it’s not just fringe, it’s hypothetical”. The company itself claims that over a 30-day period, sexual content makes up just 0.02% of responses shared via Meta AI and AI Studio with users under 18.

“Nevertheless, we’ve now taken additional measures to help ensure other individuals who want to spend hours manipulating our products into extreme use cases will have an even more difficult time of it”, the spokesperson said.

The world of AI chatbots has grown rapidly in recent years, with increased competition from ChatGPT, Character AI and Anthropic’s Claude, among others.

The WSJ report also claims that Meta’s CEO, Mark Zuckerberg, wants to relax ethical guidelines to create a more engaging experience with the company’s bots and thus maintain its competitiveness.

The report further claims that Meta employees have been aware of these issues and have raised concerns internally. Meta officials, however, deny that the company deliberately failed to implement security filters and other safeguards.
