Researcher plays games using her mind

Published 22 September 2023
– By Editorial Staff
"Perri" played Valorant and controlled the game by means of head and eye control while firing weapons using "thought power".
1 minute read

Now, a psychology researcher with the Twitch username “Perrikaryal” has taken the concept of head-based control a step further, playing and streaming console and computer games using only head and eye movements in combination with electroencephalography (EEG) electrodes on her scalp.

“Perri”, as she is also known, has tried the experiment in games such as Halo, Elden Ring, Trackmania, and Valorant (shown in the picture). She has also experimented with singing as well as thought-based control. She steers her movement in the game with head and eye movements, while firing weapons and the like through the electrodes placed on her head, reports the Swedish computer magazine Sweclockers.
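Perri has not published the details of her rig, but the basic idea of turning a brain signal into a game input can be sketched in a few lines. The Python sketch below is a hypothetical illustration, not her actual setup: it assumes a headset SDK that streams raw samples (the read_eeg_window placeholder), estimates the power in one EEG frequency band, and treats a crossing of a calibrated threshold as a “fire” command.

```python
import numpy as np

SAMPLE_RATE = 256      # Hz; a typical consumer-headset rate (assumption)
WINDOW = SAMPLE_RATE   # analyze one second of signal at a time
THRESHOLD = 2.5        # trigger level, tuned during calibration (assumption)

def read_eeg_window():
    """Placeholder for a headset SDK call returning one second of raw samples."""
    return np.random.randn(WINDOW)  # stand-in noise; a real device streams here

def alpha_band_power(samples):
    """Estimate 8-12 Hz (alpha) band power with a simple FFT periodogram."""
    spectrum = np.abs(np.fft.rfft(samples)) ** 2
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / SAMPLE_RATE)
    band = (freqs >= 8) & (freqs <= 12)
    return spectrum[band].mean()

while True:
    if alpha_band_power(read_eeg_window()) > THRESHOLD:
        print("fire")  # a real setup would inject a keyboard or mouse event here
        break
```

In practice the hard part is calibration: the frequency band and threshold have to be tuned per user and per session before a signal like this is reliable enough to bind to an in-game action.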

Her ultimate goal, she says, is to make hands-free control complete (covering all buttons and joysticks) and easier to use than a regular hand controller, so that anyone can have a comparable gaming experience.

Only a small percentage of the population is so physically disabled that they cannot use their hands at all and would therefore benefit from playing games with head movements alone.

If eye-tracking and thought-reading techniques become sufficiently advanced, they could potentially give players an advantage over those who use more traditional pointing and input devices, and could relieve strain on shoulders and wrists. At the same time, these techniques may have downsides in the form of other fatigue symptoms or side effects.


Amazon acquires AI company that records everything you say

Mass surveillance

Published today 20:05
– By Editorial Staff
3 minute read

Tech giant Amazon has acquired the Swedish AI company Bee, which develops wearable devices that continuously record users’ conversations. The deal signals Amazon’s ambitions to expand within AI-driven hardware beyond its voice-controlled home assistants.

The acquisition was confirmed by Bee founder Maria de Lourdes Zollo in a LinkedIn post, while Amazon told tech site TechCrunch that the deal has not yet been completed. Bee employees have been offered positions within Amazon.

AI wristband that listens constantly

Bee, which raised €6.4 million in venture capital last year, makes both a standalone wristband similar to a Fitbit and an Apple Watch app. The product costs €46 (approximately $50) plus a monthly subscription of €17 ($18).

The device records everything it hears – unless the user manually turns it off – with the goal of listening to conversations to create reminders and to-do lists. According to the company’s website, they want “everyone to have access to a personal, ambient intelligence that feels less like a tool and more like a trusted companion.”
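Bee has not disclosed how its pipeline works, but conceptually an always-listening assistant has two stages: transcribe the audio, then mine the transcript for commitments that can become reminders or to-dos. The Python sketch below is a deliberately naive, hypothetical illustration of the second stage, with regular expressions standing in for the language model a real product would use.

```python
import re

# Hypothetical transcript; in a real product this would come from
# a speech-to-text stage running on everything the device hears.
transcript = (
    "Sure, I can send you the report tomorrow. "
    "Oh, and remind me to call the dentist on Friday."
)

# Naive patterns for spotting commitments and requests.
patterns = [
    r"remind me to (?P<task>[^.]+)",
    r"I can (?P<task>[^.]+)",
]

todos = []
for pattern in patterns:
    for match in re.finditer(pattern, transcript, flags=re.IGNORECASE):
        todos.append(match.group("task").strip())

print(todos)  # ['call the dentist on Friday', 'send you the report tomorrow']
```

The privacy question in the article follows from stage one: everything feeding this extraction step is a recording of what was said around the device.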

Bee has previously expressed plans to create a “cloud phone” that mirrors the user’s phone and gives the device access to accounts and notifications, which would enable reminders about events or sending messages.

Competitors struggle in the market

Other companies like Rabbit and Humane AI have tried to create similar AI-driven wearable devices but so far without major success. However, Bee’s device is significantly more affordable than competitors’ – the Humane AI Pin cost €458 – making it more accessible to curious consumers who don’t want to make a large financial investment.

The acquisition marks Amazon’s interest in wearable AI devices, a different direction from the company’s voice-controlled home assistants like Echo speakers. Meanwhile, ChatGPT creator OpenAI is working on its own AI hardware, while Meta is integrating its AI into smart glasses and Apple is rumored to be working on the same thing.

Privacy concerns remain

Products that continuously record the environment carry significant security and privacy risks. Different companies have varying policies for how voice recordings are processed, stored, and used for AI training.

In its current privacy policy, Bee says users can delete their data at any time and that audio recordings are not saved, stored, or used for AI training. However, the app does store data that the AI learns about the user, which is necessary for the assistant function.

Bee has previously indicated plans to only record voices from people who have verbally given consent. The company is also working on a feature that lets users define boundaries – both based on topic and location – that automatically pause the device’s learning. They also plan to build AI processing directly into the device, which generally involves fewer privacy risks than cloud-based data processing.

However, it’s unclear whether these policies will change when Bee is integrated into Amazon. Amazon has previously had mixed results when it comes to handling user data from customers’ devices.

The company has shared video clips from people’s Ring security cameras with law enforcement without owners’ consent or a court order. Ring also reached a settlement with the Federal Trade Commission in 2023 after allegations that employees and contractors had broad, unrestricted access to customers’ video recordings.

Proton launches privacy-focused AI assistant to compete with ChatGPT

The future of AI

Published yesterday 12:24
– By Editorial Staff
The AI assistant Lumo neither stores nor trains on users' conversations and can be used freely without login.
2 minute read

Proton challenges ChatGPT with its new AI assistant Lumo, which promises to never store or train on users’ conversations. The service launches with end-to-end encryption and stricter privacy protections than competing AI services.

The Swiss company Proton, known for its secure email services and VPN solutions, is now expanding into artificial intelligence with the launch of AI assistant Lumo. Unlike established competitors such as OpenAI’s ChatGPT, Google’s Gemini and Anthropic’s Claude, Proton markets its service with promises to never log, store or train its models on users’ questions or conversations.

Lumo can, just like other AI assistants, help users with everyday tasks such as rephrasing emails, summarizing documents and reviewing code. The major difference lies in privacy protection – all chats are end-to-end encrypted and not stored on Proton’s servers.
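Proton has not published Lumo’s exact scheme, but the core idea of end-to-end encryption is that chats are encrypted on the user’s device, so the server only ever stores ciphertext. Below is a minimal Python sketch of that idea using the cryptography package, with deliberately simplified key handling.

```python
from cryptography.fernet import Fernet

# Hypothetical illustration of client-side ("zero-access") encryption;
# Proton has not published Lumo's actual scheme. The key stays on the
# user's device, so the server stores only ciphertext.
key = Fernet.generate_key()  # in practice, derived from the user's credentials
cipher = Fernet(key)

chat_line = b"Summarize my meeting notes from Tuesday."
ciphertext = cipher.encrypt(chat_line)  # all the server would ever see

# Only the client holding the key can recover the conversation.
assert cipher.decrypt(ciphertext) == chat_line
```

A server that holds only ciphertext cannot mine conversations for training data, which is precisely the guarantee Proton is selling.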

Privacy-focused alternative in the AI jungle

Proton’s strategy differs markedly from industry standards. ChatGPT stores conversations for 30 days for security reasons, even when chat history is turned off. Gemini may retain user queries for up to 72 hours, while Claude saves chats for up to a month, or longer if they are flagged for review.

An additional advantage for Proton is the company’s Swiss base, which subjects it to stricter privacy laws than its American competitors, who may be forced to hand over user data to authorities.

The company has not confirmed which models it uses, but Lumo likely builds on smaller, community-developed systems rather than the massive, proprietary models that power services like ChatGPT. This may mean that responses are less detailed or nuanced.

Three service tiers

Lumo is available via the web as well as through apps for iOS and Android. The service is offered in three tiers: two free options and a paid version.

Guest users can ask a limited number of questions per week without an account, but chat history is not saved. Users with free Proton accounts automatically get access to Lumo Free, which includes basic encrypted chat history and support for smaller file uploads.

The paid version Lumo Plus costs approximately $12.99 per month ($9.99 with annual billing) and offers unlimited chats, longer chat history and support for larger file uploads. The price undercuts competitors – ChatGPT Plus, Gemini Advanced and Claude Pro all cost around $20 monthly.

The question that remains to be answered is how well Lumo will compete with models trained on significantly larger datasets. The most advanced AI assistants are powered by enormous amounts of user data, which helps them learn patterns and understand nuances for continuous improvement over time. Proton’s more limited, privacy-centered strategy may affect performance.

Your doctor’s visit isn’t private

Published yesterday 8:14
– By Naomi Brockwell
6 minute read

A member of our NBTV members’ chat recently shared something with us after a visit to her doctor.

She’d just gotten back from an appointment and felt really shaken up. Not because of a diagnosis. She was shaken because she realized just how little control she had over her personal information.

It started right at check-in, before she’d even seen the doctor.
Weight. Height. Blood pressure. Lifestyle habits. Do you drink alcohol? Are you depressed? Are you sexually active?
All the usual intake questions.

It all felt deeply personal, but this kind of data collection is normal now.
Yet she couldn’t help but wonder: shouldn’t they ask why she’s there first? How can they know what information is actually relevant without knowing the reason for the visit? Why collect everything upfront, without context?

She answered every question anyway. Because pushing back makes people uncomfortable.

Finally, she was through with the medical assistant’s questions and taken to the actual doctor. That’s when she confided something personal, something she felt was important for the doctor to know, but made a simple request:

“Please don’t record that in my file”.

The doctor responded:

“Well, this is something I need to know”.

She replied:

“Yes, that’s why I told you. But I don’t want it written down. That file gets shared with who knows how many people”.

The doctor paused, then said:

“I’m going to write it in anyway”.

And just like that, her sensitive information, something she explicitly asked to keep off the record, became part of a permanent digital file.

That quiet moment said everything. Not just about one doctor, but about a system that no longer treats medical information as something you control. Because once something is entered into your electronic health record, it’s out of your hands.

You can’t delete it.

You can’t restrict who sees it.


Financially incentivized to collect your data

The digital system that the medical assistant and doctor enter your information into is called an Electronic Health Record (EHR). EHRs aren’t just a digital version of your paper file. They’re part of a government-mandated system: through legislation and financial incentives from the Department of Health and Human Services (HHS), clinics and hospitals were required to digitize patient data.

On top of that, medical providers are required to prove what’s called “Meaningful Use” of these EHR systems; unless they can, they won’t get their Medicare and Medicaid rebates. So when you’re asked about your blood pressure, your weight, and your alcohol use, it’s part of a quota. There’s a financial incentive to collect your data, even if it’s not directly related to your care. These incentives reward over-collection and over-documentation. There are none for respecting your boundaries.

You’re not just talking to your doctor. You’re talking to the system

Most people have no idea how medical records actually work in the US. They assume that what they tell a doctor stays between the two of them.

That’s not how it works.

In the United States, HIPAA states that your personally identifiable medical data can be shared, without needing to get your permission first, for a wide range of “healthcare operations” purposes.

Sounds innocuous enough. But the definition of “healthcare operations” is almost 400 words long. It’s essentially a list of about 65 non-clinical business activities that have nothing to do with your medical treatment whatsoever.

Those operations can involve not just hospitals, pharmacy systems, and insurance companies, but billing contractors, analytics firms, and all kinds of third-party vendors. According to a 2010 HHS regulation, there are more than 2.2 million entities (covered entities and business associates) with which your personally identifiable, sensitive medical information can be shared, if those who hold it choose to share it. This number doesn’t even include government entities with access to your data, because they aren’t considered covered entities or business associates.

Your data doesn’t stay in the clinic. It gets passed upstream, without your knowledge and without needing your consent. No one needs to notify you when your data is shared. And you’re not allowed to opt out. You can’t even get a list of everyone it’s been shared with. It’s just… out there.

The doctor may think they’re just “adding it to your chart”. But what they’re actually doing is feeding a giant, invisible machine that exists far beyond that exam room.

We have an entire video diving into the details if you’re interested: You Have No Medical Privacy

Data breaches

Legal sharing isn’t the only risk posed by this accumulated data. What about data breaches? This part is almost worse.

Healthcare systems are one of the top targets for ransomware attacks. That’s because the data they hold is extremely valuable. Full names, birth dates, Social Security numbers, medical histories, and billing information, all in one place.

It’s hard to find a major health system that hasn’t been breached. In fact, a 2023 report found that over 90% of healthcare organizations surveyed had experienced a data breach in the past three years.

That means if you’ve been to the doctor in the last few years, there’s a very real chance that some part of your medical file is already floating around, whether on the dark web, in a leaked ransomware dump, or being sold to data brokers.

The consequences aren’t just theoretical. In one high-profile case of such a healthcare breach, people took their own lives after private details from their medical files were leaked online.

So when your doctor says, “This is just for your chart,” understand what that really means. You’re not just trusting your doctor. You’re trusting a system that has a track record of failing to protect you.

What happens when trust breaks

Once you start becoming aware of how your data is being collected and shared, you see it everywhere. And in high-stakes moments, like a medical visit, pushing back is hard. You’re at your most vulnerable. And the power imbalance becomes really obvious.

So what do patients do when they feel that their trust has been violated? They start holding back. They say less. They censor themselves.

This is exactly the opposite of what should happen in a healthcare setting. Your relationship with your doctor is supposed to be built on trust. But when you tell your doctor something in confidence, and they say, “I’m going to log it anyway,” that trust is gone.

The problem here isn’t just one doctor. From their perspective, they’re doing what’s expected of them. The entire system is designed to prioritize documentation and compliance over patient privacy.

Privacy is about consent, not secrecy

But privacy matters. And not because you have something to hide. You might want your doctor to have full access to everything. That’s fine. But the point is, you should be the one making that call.

Right now, that choice is being stripped away by systems and policies that normalize forced disclosure.

We’re being told our preferences don’t matter. That our data isn’t worth protecting. And we’re being conditioned to stay quiet about it.

That has to change.

So what can you do?

First and foremost, if you’re in a high-stakes medical situation, focus on getting the care you need. Don’t let privacy concerns keep you from getting help.

But when you do have space to step back and ask questions, do it. That’s where change begins.

  • Ask what data is necessary and why.
  • Say no when something feels intrusive.
  • Let your provider know that you care about how your data is handled.
  • Support policy efforts that restore informed consent in healthcare.
  • Share your story, because this isn’t just happening to one person.

The more people push back, the harder it becomes for the system to ignore us.

You should be able to go to the doctor and share what’s relevant, without wondering who’s going to have access to that information later.

The exam room should feel safe. Right now, it doesn’t.

Healthcare is in urgent need of a privacy overhaul. Let’s make that happen.

 

Yours In Privacy,
Naomi

 

Naomi Brockwell is a privacy advocate and professional speaker, MC, interviewer, producer, and podcaster specializing in blockchain, cryptocurrency, and economics. She runs the NBTV channel on Rumble.

Now you’re forced to pay for Facebook or be tracked by Meta

Mass surveillance

Published 22 July 2025
– By Editorial Staff
2 minute read

Social media giant Meta is now implementing its criticized “pay or be tracked” model for Swedish users. Starting Thursday, Facebook users in Sweden and several other EU countries are forced to choose between paying €7 per month for an ad-free experience or accepting extensive data collection. Meanwhile, the company faces daily fines from the EU if the model isn’t changed.

Swedish Facebook users have been greeted since Thursday morning with a new choice when logging into the platform. A message informs them that “you must make a choice to use Facebook” and explains that users “have a legal right to choose whether you want to consent to us processing your personal data to show you ads.”


The choice is between two alternatives: either pay €7 monthly for an ad-free Facebook account where personal data isn’t processed for advertising, or consent to Meta collecting and using personal data for targeted ads.

As a third alternative, “less personalized ads” is offered, which means Meta uses somewhat less personal data for advertising purposes.


Background in EU legislation

The introduction of the payment model comes after the European Commission in March opened investigations into Meta, Apple, and Google for suspected violations of the Digital Markets Act (DMA). For Meta’s part, the investigation specifically concerns the new payment model.

In April, Meta was ordered under the DMA to pay a €200 million fine because the payment model was deemed not to meet the law’s requirements. Meta has appealed the decision.

According to reports from Reuters at the end of June, the social media giant now risks daily penalties if the company doesn’t make necessary changes to its payment model to comply with EU regulations.

The new model represents Meta’s attempt to adapt to stricter European data legislation while the company tries to maintain its advertising revenue through the alternative payment route.

