Polaris of Enlightenment

Lock down your Mac

No Apple ID, no tracking, no nonsense.

Published 17 May 2025
– By Naomi Brockwell
6 minute read

Apple markets itself as a privacy-focused company. And compared to Google or Microsoft, it is. But let’s be clear: Apple is still collecting a lot of your data.

If you want the most private computer setup, your best option is to switch to Linux. Not everyone is ready to take that step though, and many might prefer to keep their existing computer instead.

If you want to keep your current device but make it more private, what are your options?

Windows is basically a privacy disaster. Privacy expert Michael Bazzell says in his book Extreme Privacy:

“I do not believe any modern Microsoft Windows system is capable of providing a secure or private environment for our daily computing needs. Windows is extremely vulnerable to malicious software and their telemetry of user actions is worse than Apple’s. I do not own a Windows computer and I encourage you to avoid them for any sensitive tasks”.

If you want to keep your Mac without handing over your digital life to Apple, there are ways to lock it down and make it more private.

In this article, I’ll walk you through how to set up a Mac for better privacy—from purchasing the computer to tweaking your system settings, installing tools, and blocking unwanted data flows.

We’ll be following the setup laid out by Michael Bazzell in Extreme Privacy, with some added tips from my own experience.

We also made a video tutorial that you can follow along with.

You don’t need to do everything. Each chapter is modular. But if you follow the full guide, you’ll end up with a Mac that doesn’t require an Apple ID, doesn’t leak constant data, and gives you control over your digital environment.

Buying your Mac

Choose a model that still gets security updates

Apple eventually drops support for older devices. A privacy-hardened system isn’t useful if it doesn’t receive security updates.

Pay with cash in a physical store

If you buy a Mac with a credit card, the serial number is forever linked to your identity.
Cash keeps you anonymous. You might get strange looks, but it’s completely within your rights. Be polite. Be firm. They’ll grumble. That’s fine.

Fresh install of macOS

If it’s a refurbished Mac—or even brand new—it’s worth doing a clean install.

Update macOS

  • System Settings > General > Software Update
  • Install updates, reboot, and reach the welcome screen.

Erase all content

  • System Settings > General > Transfer or Reset > Erase All Content and Settings
  • Enter your password, confirm warnings
  • Your Mac will restart and erase itself

This restores factory defaults: user data and settings are gone, but the OS remains installed.

Optional: Wipe the disk completely (advanced)

If you want a truly clean install, you’ll need to manually erase the entire internal disk. Only do this if you’re comfortable in recovery mode.

Modern Macs split the system into two parts—a sealed system volume and a data volume—tied together with something called firmlinks. If you don’t erase both correctly, you can end up with phantom volumes that clog your disk and break things silently.

Steps:

  • Enter Recovery Mode:
    • Apple Silicon: Hold power > click “Options”
    • Intel: Hold Command + R on boot
  • Open Disk Utility
  • Click View > Show All Devices
  • Select the top-level physical disk (e.g., “Apple SSD”)
  • Click Erase
    • Name: Macintosh HD
    • Format: APFS
    • Scheme: GUID Partition Map

Warning: Skip “Show All Devices” or erase the wrong item and you could brick your Mac. Only do this if you understand what you’re doing.

Once erased, return to the recovery menu and choose Reinstall macOS.

First boot setup

macOS wants to immediately link your device to iCloud and Apple services. Stay offline as long as possible.

Setup tips:

  • Region: Choose your location
  • Accessibility: Skip
  • Wi-Fi: Click “Other Network Options” > “My computer does not connect to the internet”
  • Data & Privacy: Continue
  • Migration Assistant: Skip (we’re starting fresh!)
  • Apple ID: Choose “Set up later”
  • Terms: Agree
  • Computer Name: Use a generic name like Laptop or Computer
  • Password: Strong and memorable. No hint. Write it down somewhere safe.
  • Location Services: Off
  • Time Zone: Set manually
  • Analytics: Off
  • Screen Time: Skip
  • Siri: Skip
  • Touch ID: Optional
  • Display Mode: Your choice

Harden system settings

Wi-Fi & Bluetooth

  • System Settings > Wi-Fi: Turn off
    • Disable “Ask to join networks” and “Ask to join hotspots”
  • System Settings > Bluetooth: Turn off

Firewall (built-in)

  • System Settings > Network > Firewall: Turn on
    • Disable “Automatically allow built-in software…”
    • Disable “Automatically allow downloaded signed software…”
    • Enable Stealth Mode
    • Remove any pre-approved entries
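If you prefer the command line, the same application-firewall options can be set with Apple’s `socketfilterfw` utility. This is a sketch: the path and flags below are the commonly documented ones, so verify them on your system with `--help` before relying on them.

```shell
# macOS Application Layer Firewall CLI (requires admin rights)
ALF=/usr/libexec/ApplicationLayerFirewall/socketfilterfw

sudo "$ALF" --setglobalstate on      # turn the firewall on
sudo "$ALF" --setstealthmode on      # drop unsolicited probes silently (Stealth Mode)
sudo "$ALF" --setallowsigned off     # don't auto-allow built-in signed software
sudo "$ALF" --setallowsignedapp off  # don't auto-allow downloaded signed software
sudo "$ALF" --listapps               # review (then remove) any pre-approved entries
```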

Notifications

  • System Settings > Notifications
    • Show Previews: Never
    • Turn off for Lock Screen, Sleep, and Mirroring
    • Manually disable for each app

Sound settings

  • System Settings > Sound
    • Alert Volume: Minimum
    • Disable sound effects and interface feedback

AirDrop & sharing

  • System Settings > General > AirDrop & Handoff: Turn everything off
  • System Settings > General > Sharing: Disable all toggles

Siri & Apple Intelligence

  • System Settings > Siri & Dictation: Disable all
  • Disable Apple Intelligence and per-app Siri access

Switch time server

Your Mac pings Apple to sync the time—leaking your IP every time it does.
Switch to a decentralized time server instead.

How:

  • System Settings > General > Date & Time
  • Click “Set…” > Enter password
  • Enter: pool.ntp.org
  • Click Done
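The same change can be made from Terminal with the built-in `systemsetup` utility (requires admin rights) — a quick sketch, assuming a standard macOS install:

```shell
# Check which NTP server the Mac currently uses (time.apple.com by default)
sudo systemsetup -getnetworktimeserver

# Switch to the decentralized NTP pool and keep network time enabled
sudo systemsetup -setnetworktimeserver pool.ntp.org
sudo systemsetup -setusingnetworktime on
```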

Spotlight & Gatekeeper

Spotlight

  • System Settings > Spotlight: Turn off “Help Apple improve search”

Gatekeeper

Gatekeeper prevents you from opening non-Apple-approved apps and sends app data to Apple.

If you’re a confident user, disable it:

  • Terminal: run sudo spctl --master-disable
  • System Settings > Privacy & Security: Allow apps from anywhere

FileVault & lockdown mode

FileVault

Encrypt your entire disk:

  • System Settings > Privacy & Security > FileVault: Turn on
  • Choose “Create a recovery key and do not use iCloud”
  • Write down your recovery key. Store it OFF your computer.

Lockdown Mode (optional)

Restricts features like USB accessories, AirDrop, and others. Useful for high-risk users.

Customize appearance & Finder

Desktop & dock

  • Disable “Show Suggested and Recent Apps”
  • Disable “Recent apps in Stage Manager”

Wallpaper

Use a solid color instead of version-specific defaults to reduce your system’s fingerprint.

Lock screen

  • Screensaver: Never
  • Require password: Immediately
  • Sleep timer: Your preference (e.g. 1 hour)

Finder preferences

  • Show all file extensions
  • Hide Recents and Tags
  • Set default folder to Documents
  • View hidden files: Shift + Command + .
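Most of these Finder preferences can also be applied from Terminal with `defaults write`. The keys below are widely documented but not officially guaranteed across macOS versions — treat this as a sketch and restart Finder afterwards:

```shell
# Show all filename extensions system-wide
defaults write NSGlobalDomain AppleShowAllExtensions -bool true

# Show hidden (dot) files in Finder
defaults write com.apple.finder AppleShowAllFiles -bool true

# Open new Finder windows in Documents ("PfDo") instead of Recents
defaults write com.apple.finder NewWindowTarget -string "PfDo"

# Restart Finder so the changes take effect
killall Finder
```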

Block outbound connections

macOS and many apps connect to servers without asking. You’ll want to monitor and block them.

Use Little Snitch (or the free, open-source LuLu) to monitor outbound traffic and approve or deny each connection per app.

Browser

Install a privacy-respecting browser like Brave or Mullvad.

Compare options at privacytests.org

VPN

Use trusted providers like Mullvad or ProtonVPN.

Be careful which VPN you download — they’re often scamware and data collection tools.
Watch this video for more

Optional: Use Homebrew

Instead of the App Store, install software via Homebrew.

We’ll cover this more in a future guide.
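As a quick preview, installation uses the one-line bootstrap command published at brew.sh, after which software installs from the command line. The cask names below are examples — confirm them with `brew search` before installing:

```shell
# Install Homebrew (official bootstrap command from https://brew.sh)
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"

# Install apps without the App Store or an Apple ID, e.g. a privacy browser
brew install --cask brave-browser
```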

Final takeaways

If you followed this guide, you now have:

  • A Mac with no Apple ID
  • No iCloud tether
  • Full disk encryption (FileVault)
  • A silent firewall
  • Blocked outbound connections
  • A private browser and VPN setup

You’ve taken serious steps to reclaim your digital autonomy. Well done.

In an upcoming guide, we’ll explore how to take the next step: switching to Linux.

Thanks again to Michael Bazzell for his work.

Find his book Extreme Privacy at: inteltechniques.com/book7.html

 

Yours in privacy,
Naomi

Naomi Brockwell is a privacy advocate and professional speaker, MC, interviewer, producer, and podcaster specialising in blockchain, cryptocurrency, and economics. She runs the NBTV channel on YouTube.

TNT is truly independent!

We don’t have a billionaire owner, and our unique reader-funded model keeps us free from political or corporate influence. This means we can fearlessly report the facts and shine a light on the misdeeds of those in power.

Consider a donation to keep our independent journalism running…

Opt-in childhood

What we signed them up for before they could object.

Published today 7:48
– By Naomi Brockwell
6 minute read

A few weeks ago, we published an article about oversharing on social media, and how posting photos, milestones, and personal details can quietly build a digital footprint for your child that follows them for life.

But social media isn’t the only culprit.

Today, I want to talk about the devices we give our kids: the toys that talk, the tablets that teach, the monitors that watch while they sleep.

These aren’t just tools of convenience or connection. Often, they’re Trojan horses, collecting and transmitting data in ways most parents never realize.

We think we’re protecting our kids.
But in many cases, we’re signing them up for surveillance systems they can’t understand, and wouldn’t consent to if they could.

How much do you know about the toys your child is playing with?

What data are they collecting?
With whom are they sharing it?
How safely are they storing it to protect against hackers?

Take VTech, for example — a hugely popular toy company, marketed as safe, educational, and kid-friendly.

In 2015, VTech was hacked. The breach wasn’t small:

  • 6.3 million children’s profiles were exposed, along with nearly 5 million parent accounts
  • The stolen data included birthdays, home addresses, chat logs, voice recordings… even photos children had taken on their tablets

Terms no child can understand—but every parent accepts

It’s not just hackers we should be mindful of — often, these companies are allowed to do almost anything they want with the data they collect, including selling it to third parties.

When you hand your child a toy that connects to Wi-Fi or Bluetooth, you might be agreeing to terms that say:

  • Their speech can be used for targeted advertising
  • Their conversations may be retained indefinitely
  • The company can change the terms at any time, without notice

And most parents will never know.

“Safe” devices with open doors

What about things like baby monitors and nanny cams?

Years ago, we did a deep dive into home cameras, and almost all popular models were built without end-to-end encryption. That means the companies that make them can access your video feed.
How much do you know about that company?
How well do you trust every employee who might be able to access that feed?

But it’s not just insiders you should worry about.
Many of these kiddy cams are notoriously easy to hack. The internet is full of real-world examples of strangers breaking into monitors, watching, and even speaking to infants.

There are even publicly available tools that scan the internet and map thousands of unsecured camera feeds, sortable by country, type, and brand.
If your monitor isn’t properly secured, it’s not just vulnerable — it’s visible.

Mozilla, through its Privacy Not Included campaign, audited dozens of smart home devices and baby monitors. They assessed whether products had basic security features like encryption, secure logins, and clear data-use policies. The verdict? Even many top-selling monitors had zero safeguards in place.

These are the products we’re told are protecting our kids.

Apps that glitch, and let you track other people’s kids

A T-Mobile child-tracking app recently glitched.
A mother refreshed the screen—expecting to see her kids’ location.
Instead, she saw a stranger’s child. Then another. Then another.

Each refresh revealed a new kid in real time.

The app was broken, but the consequences weren’t abstract.
That’s dozens of children’s locations broadcast to the wrong person.
The feature that was supposed to provide control did the opposite.

Schools are part of the problem, too

Your child’s school likely collects and stores sensitive data—without strong protections or meaningful consent.

  • In Virginia, thousands of student records were accidentally made public
  • In Seattle, a mental health survey led to deeply personal data being stored in unsecured systems

And it’s not just accidents.

A 2015 study investigated “K–12 data broker” marketplaces that trade in everything from ethnicity and affluence to personality traits and reproductive health status.
Some companies offer data on children as young as two.
Others admit they’ve sold lists of 14- and 15-year-old girls for “family planning services.”

Surveillance disguised as protection

Let’s be clear: the internet is a minefield, filled with ways children can be tracked, profiled, or preyed upon. Protecting them is more important than ever.

One category of tools that’s exploded in popularity is the parental control app—software that lets you see everything happening on your child’s device:
The messages they send. The photos they take. The websites they visit.

The intention might be good. But the execution is often disastrous.

Most of these apps are not end-to-end encrypted, meaning:

  • Faceless companies gain full access to your child’s messages, photos, and GPS
  • They operate in stealth mode, functionally indistinguishable from spyware
  • And they rarely protect that data with strong security

Again, how much do you know about these companies?
And even if you trust them, how well are they protecting this data from everyone else?

The “KidSecurity” app left 300 million records exposed, including real-time child locations and fragments of parent credit cards.
The “mSpy” app leaked private messages and movement histories in multiple breaches.

When you install one of these apps, you’re not just gaining access to your child’s world.
So is the company that built it… and everyone they fail to protect it from.

What these breaches really teach us

Here’s the takeaway from all these hacks and security failures:

Tech fails.

We don’t expect it to be perfect.
But when the stakes are this high — when we’re talking about the private lives of our children — we should be mindful of a few things:

1) Maybe companies shouldn’t be collecting so much information if they can’t properly protect it.
2) Maybe we shouldn’t be so quick to hand that information over in the first place.

When the data involves our kids, the margin for error disappears.

Your old phone might still be spying

Finally, let’s talk about hand-me-downs.

When kids get their first phone, it’s often filled with tracking, sharing, and background data collection from years of use. What you’re really passing on may be a lifetime of surveillance baked into the settings.

  • App permissions often remain intact
  • Advertising IDs stay tied to previous behavior
  • Pre-installed tracking software may still be active

The moment it connects to Wi-Fi, that “starter phone” might begin broadcasting location data and device identifiers — linked to both your past and your child’s present.

Don’t opt them in by default: 8 ways to push back

So how do we protect children in the digital age?

You don’t need to abandon technology. But you do need to understand what it’s doing, and make conscious choices about how much of your child’s life you expose.

Here are 8 tips:

1: Stop oversharing
Data brokers don’t wait for your kid to grow up. They’re already building the file.
Reconsider publicly posting their photos, location, and milestones. You’re building a permanent, searchable, biometric record of your child—without their consent.
If you want to share with friends or family, do it privately through tools like Signal stories or Ente photo sharing.

2: Avoid spyware
Sometimes the best way to protect your child is to foster a relationship of trust, and educate them about the dangers.
If monitoring is essential, use self-hosted tools. Don’t give third parties backend access to your child’s life.

3: Teach consent
Make digital consent a part of your parenting. Help your child understand their identity—and that it belongs to them.

4: Use aliases and VoIP numbers
Don’t link their real identity across platforms. Compartmentalization is protection.

5: Audit tech
Reset hand-me-down devices. Remove unnecessary apps. Disable default permissions.

6: Limit permissions
If an app asks for mic or camera access and doesn’t need it—deny it. Always audit.

7: Set boundaries with family
Ask relatives not to post about your child. You’re not overreacting—you’re defending someone who can’t yet opt in or out.

8: Ask hard questions
Ask your school how data is collected, stored, and shared. Push back on invasive platforms. Speak up when things don’t feel right.

Let them write their own story

We’re not saying throw out your devices.
We’re saying understand what they really do.

This isn’t about fear. It’s about safety. It’s about giving your child the freedom to grow up and explore ideas without every version of themselves being permanently archived, and without being boxed in by a digital record they never chose to create.

Our job is to protect that freedom.
To give them the chance to write their own story.

Privacy is protection.
It’s autonomy.
It’s dignity.

And in a world where data compounds, links, and lives forever, every choice you make today shapes the freedom your child has tomorrow.

 

Yours in privacy,
Naomi


AI surveillance in Swedish workplaces sparks outrage

Mass surveillance

Published 4 June 2025
– By Editorial Staff
2 minute read

The rapid development of artificial intelligence has not only brought advantages – it has also created new opportunities for mass surveillance, both in society at large and in the workplace.

Even today, unscrupulous employers use AI to monitor and map every second of their employees’ working day in real time – a development that former Social Democratic politician Kari Parman warns against, calling for decisive action to combat it.

In an opinion piece in the Stampen-owned newspaper GP, he argues that AI-based surveillance of employees poses a threat to staff privacy and calls on the trade union movement to take action against this development.

Parman paints a bleak picture of how AI is used to monitor employees in Swedish workplaces, where technology analyzes everything from voices and facial expressions to productivity and movement patterns – often without the employees’ knowledge or consent.

“It’s a totalitarian control system – in capitalist packaging”, he writes, continuing:

“There is something deeply disturbing about the idea that algorithms will analyze our voices, our facial expressions, our productivity – second by second – while we work”.

“It’s about power and control”

According to Parman, there is a significant risk that people in digital capitalism will be reduced to mere data points, giving employers disproportionate power over their employees.

He sees AI surveillance as more than just a technical issue and warns that this development undermines the Swedish model, which is based on balance and respect between employers and employees.

“It’s about power. About control. About squeezing every last ounce of ‘efficiency’ out of people as if we were batteries”.

If trade unions fail to act, Parman believes, they risk becoming irrelevant in a working life where algorithms are taking over more and more of the decision-making.

To stop this trend, he lists several concrete demands. He wants to see a ban on AI-based individual surveillance in the workplace and urges unions to introduce conditions in collective agreements to review and approve new technology.

Kari Parman previously represented the Social Democrats in Gnosjö. Photo: Kari Parman/FB

“Reduced to an algorithm’s margin of error”

He also calls for training for safety representatives and members, as well as political regulations from the state.

“No algorithm should have the right to analyze our performance, movements, or feelings”, he declares.

Parman emphasizes that AI surveillance not only threatens privacy but also creates a “psychological iron cage” where employees constantly feel watched, blurring the line between work and private life.

At the end of the article, the Social Democrat calls on the trade union movement to take responsibility and lead the resistance against the misuse of AI in the workplace.

He sees it as a crucial issue for the future of working life and human dignity at work.

“If we don’t stand up now, we will be alone when it is our turn to be reduced to an algorithm’s margin of error”, he concludes.

AI agents succumb to peer pressure

Published 2 June 2025
– By Editorial Staff
3 minute read

A new study shows that social AI agents, despite being programmed to act independently, quickly begin to mimic each other and succumb to peer pressure.

Instead of making their own decisions, they begin to uncritically adapt their responses to the herd even without any common control or plan.

– Even if they are programmed for something completely different, they can start coordinating their behavior just by reacting to each other, says Andrea Baronchelli, professor of complex systems at St George’s University of London.

An AI agent is a system that can perform tasks autonomously, often using a language model such as ChatGPT. In the study, the researchers investigated how such agents behave in groups.

And the results are surprising: even without an overall plan or insight, the agents began to influence each other – and in the end, almost the entire group gave the same answer.

– It’s easy to test a language model and think: this works. But when you release it together with others, new behaviors emerge, Baronchelli explains.

“A small minority could tip the whole system”

The researchers also studied what happens when a minority of agents stick to a deviant answer. Slowly but surely, the other agents began to change their minds. When enough had changed their minds – a point known as critical mass – the new answer spread like a wave through the entire group. The phenomenon is similar to how social movements or revolutions can arise in human societies.

“It was unexpected that such a small minority could tip the whole system. This is not a planned collaboration but a pattern that emerges spontaneously”, the researcher told Swedish public television SVT.
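The tipping dynamic can be illustrated with a toy “naming game” simulation — a deliberately simplified model of the kind used in this research area, not the study’s actual code. A committed minority always answers “B”; everyone else starts agreeing on “A” and adapts through random pairwise interactions:

```python
import random

def simulate(n_agents=100, n_committed=15, steps=40000, seed=42):
    """Toy naming game: committed agents never change their answer ('B');
    the rest start on 'A' and update through random speaker/listener pairs."""
    rng = random.Random(seed)
    committed = set(range(n_committed))
    # Each agent holds a set of candidate answers (its "inventory")
    inventory = [{"B"} if i in committed else {"A"} for i in range(n_agents)]
    for _ in range(steps):
        speaker, listener = rng.sample(range(n_agents), 2)
        word = rng.choice(sorted(inventory[speaker]))
        if word in inventory[listener]:
            # Successful interaction: both collapse to the agreed word
            if speaker not in committed:
                inventory[speaker] = {word}
            if listener not in committed:
                inventory[listener] = {word}
        elif listener not in committed:
            # Failure: the listener merely learns the new word
            inventory[listener].add(word)
    return sum(inv == {"B"} for inv in inventory) / n_agents

share_b = simulate()
print(f"share answering only 'B': {share_b:.2f}")
```

With a committed minority above the critical-mass threshold (roughly 10% in published naming-game results), the whole population flips to “B” even though no agent coordinates with any other.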

AI agents are already used today on social media, for example in comment fields, automatic responses, or texts that mimic human language. But when one agent is influenced by another, which in turn has been influenced by a third, a chain reaction occurs. This can lead to false information spreading quickly and on a large scale.

– We often trust repetition. But in these systems, we don’t know who said what first. It becomes like an echo between models, says Anders Sandberg, a computer scientist at the Institute for Futures Studies.

Lack of transparency

Small differences in how a language model is trained can lead to large variations in behavior when the models interact in a group. Predicting and preventing unwanted effects requires an overview of all possible scenarios – something that is virtually impossible in practice. At the same time, it is difficult to hold anyone accountable: AI agents spread extremely quickly, their origins are often difficult to trace, and there is limited insight into how they are developed.

“It is the companies themselves that decide what they want to show. When the technology is closed and commercial, it becomes impossible to understand the effects – and even more difficult to defend against them”, Sandberg notes.

The study also emphasizes the importance of understanding how AI agents behave as a collective – something that is often overlooked in technical and ethical discussions about AI.

– The collective aspect is often missing in today’s AI thinking. It’s time to take it seriously, urges Andrea Baronchelli.

Apple sued over iPhone eavesdropping – users may get payouts

Published 1 June 2025
– By Editorial Staff
3 minute read

Apple’s voice assistant Siri was activated without commands and recorded sensitive conversations – recordings that were also allegedly shared with other companies.

Now users in the US can get compensation – even if it’s relatively small amounts.

Technology giant Apple was caught in the crossfire after it was discovered that its voice assistant, Siri, recorded private conversations without users’ knowledge. The company has agreed to pay $95 million in a settlement reached in December last year, following a class action lawsuit alleging privacy violations.

The lawsuit was filed in 2021 by California resident Fumiko Lopez along with other Apple users. They stated that Siri-enabled devices recorded conversations without users first intentionally activating the voice assistant by saying “Hey Siri” or pressing the side button.

According to the allegations, the recordings were not only used to improve Siri, but were also shared with third-party contractors and other actors – without users’ consent. It is also alleged that the information was used for targeted advertising, in violation of both US privacy laws and Apple’s own privacy policy.

Apple has consistently denied the allegations, claiming its actions were neither “wrong nor illegal”. Still, paying such a large sum to avoid further litigation has raised questions about what may have been hidden from the public.

Users can claim compensation

Individuals who owned a Siri-enabled Apple product – such as an iPhone, iPad, Apple Watch, MacBook, iMac, HomePod, iPod touch or Apple TV – between September 17, 2014 and December 31, 2024, and who live in the United States or a U.S. territory, may now be entitled to compensation.

However, to qualify, one must certify that Siri was inadvertently activated during a call that was intended to be private or confidential.

The reimbursement applies to up to five devices, with a cap of $20 per device – totaling up to $100 per person. The exact amount per user will be determined once all claims have been processed.

Applications must be submitted by July 2, 2025, and those eligible may have already received an email or physical letter with an identification code and confirmation code. Those who haven’t received anything but still think they qualify can instead apply for reimbursement via the settlement’s website, provided they supply the model and serial number of their devices.

How to protect yourself from future interception

Users who want to strengthen their privacy can limit Siri’s access themselves in the settings:

  • Turn off Improve Siri: Go to Settings > Privacy & Security > Analytics & Improvements and disable Improve Siri & Dictation.
  • Delete Siri history: Go to Settings > Siri > Siri & Dictation History and select Delete Siri & Dictation History.
  • Turn off Siri completely: Go to Settings > Siri > Listen for “Hey Siri”, turn it off, then go to Settings > General > Keyboard and disable Enable Dictation.

Apple describes more privacy settings on its website, such as how to restrict Siri’s access to location data or third-party apps. But in the wake of the scandal, critics say that you shouldn’t blindly trust companies’ promises of data protection – and that the only way to truly protect your privacy is to take matters into your own hands.

Our independent journalism needs your support!
Consider a donation.

You can donate any amount of your choosing, one-time payment or even monthly.
We appreciate all of your donations to keep us alive and running.

Don’t miss another article!

Sign up for our newsletter today!

Get uncensored news – free from industry interests and political correctness – from the Polaris of Enlightenment every week.