
Polaris of Enlightenment


Proton releases password manager

Cyber Security

Published 26 April 2023
– By Editorial Staff
Proton Pass will have a strong focus on privacy and security.

Proton has released a beta version of its new password manager Proton Pass, promising a “more complete encryption model than most other password managers”.

The company behind the encrypted email service Proton Mail is developing a new password manager that is currently in beta for Android, iOS, and browser extensions for Chrome and Brave, Sweclockers reports.

Proton has developed the manager together with developers from SimpleLogin, a company it acquired last year. The password manager differs from most competitors by using the password-hashing algorithm bcrypt for the master password, where most other password managers use PBKDF2. Proton also claims that all stored data is fully encrypted.
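To illustrate the difference between the two approaches (the parameters below are illustrative, not Proton's actual settings): PBKDF2 is available in Python's standard library, while bcrypt requires a third-party package, so this sketch derives a key with PBKDF2 and notes the rough bcrypt equivalent in comments.

```python
import hashlib
import os

# PBKDF2 (used by most password managers): iterated HMAC over the
# master password. Cost is tuned via the iteration count.
salt = os.urandom(16)
key = hashlib.pbkdf2_hmac(
    "sha256",                         # underlying hash
    b"correct horse battery staple",  # the master password
    salt,
    600_000,                          # iterations: higher = slower to brute-force
    dklen=32,                         # derive a 256-bit key
)
print(len(key))  # 32 bytes, usable as an encryption key

# bcrypt (Proton Pass's choice) is not in the standard library; with the
# third-party `bcrypt` package the equivalent call would be roughly:
#   bcrypt.hashpw(password, bcrypt.gensalt(rounds=12))
# bcrypt's design makes cheap GPU brute-forcing harder than PBKDF2
# at comparable cost settings.
```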

The password manager uses the same authentication protocol as Proton Mail, which lets the user prove to the server that they know the correct password without ever transmitting it in any form.
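Proton Mail's login is built on the Secure Remote Password (SRP) protocol. A full SRP implementation is beyond a short example, but the underlying idea – answering a random server challenge with a value derived from the password, rather than sending the password itself – can be sketched as a toy challenge-response (an illustration only, not Proton's actual protocol):

```python
import hashlib
import hmac
import os

# Toy challenge-response: the client proves it knows the password by
# answering a random challenge with HMAC(password-derived key, challenge).
# The password never crosses the wire. (Proton's real scheme, SRP, is
# stronger still: the server doesn't even hold a password-equivalent.)

def derive_key(password: bytes, salt: bytes) -> bytes:
    return hashlib.pbkdf2_hmac("sha256", password, salt, 100_000)

# --- enrollment: the server stores salt + derived key, never the password ---
salt = os.urandom(16)
server_key = derive_key(b"hunter2", salt)

# --- login: the server sends a fresh random challenge ---
challenge = os.urandom(32)

# the client answers using its own copy of the password
client_response = hmac.new(derive_key(b"hunter2", salt), challenge, "sha256").digest()

# the server verifies without ever seeing the password
ok = hmac.compare_digest(client_response, hmac.new(server_key, challenge, "sha256").digest())
print(ok)  # True
```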

Proton Pass should also be considerably more secure than LastPass, which suffered a breach last year in which it emerged that the URLs of saved logins were stored unencrypted.

“Proton Pass is unique in that it was designed from the ground up to have a strong focus on privacy and security. It therefore has a more complete encryption model than most other password managers”, Proton writes on its blog.

Initially, only users with Lifetime and Visionary subscriptions will have access to Proton Pass, but eventually it will be free for all subscribers.


The most PRIVATE ways to use AI

Cyber Security

AI is making life easier in ways we never thought possible – but what’s the cost to our privacy? From locally hosted models to privacy-focused platforms, there are now smarter ways to tap into AI’s potential without giving up control over personal data.

Published 14 November 2024
– By Naomi Brockwell

AI is transforming the world at an unprecedented pace, becoming an essential part of our daily lives – even if we don’t fully realize it. From productivity boosters to personalized assistance, chatbots like ChatGPT, Bard, Perplexity, and Claude are giving us abilities that once felt out of reach. AI chatbots help us generate content, write code, provide instant advice, and much more.

The double-edged sword

As amazing as these tools are, there are also privacy considerations we can’t ignore:

  • Data use for model training:
    Many chatbots use the data we provide for training, and this data often becomes deeply integrated into their systems, making it nearly impossible to remove. Even though companies like OpenAI and Google claim to anonymize data, it doesn’t take much – just a couple of unique details – for you to become easily identifiable. Once data is integrated into the model, there’s no simple way to remove it.
  • Data collection and storage:
    • Hacking and data breaches
    • Third-party data sharing
    • Government access and subpoenas

Companies store large datasets of user inputs, making these repositories tempting targets for hackers, data brokers, and law enforcement. These centralized databases are gold mines of personal information, available to anyone with the power or means to access them. Do we really want our interactions with AI tools to be stored forever, vulnerable to misuse?

Solutions for using AI privately

The good news is that embracing AI tools doesn’t have to mean sacrificing privacy.

Locally-hosted LLMs

The most private way to use an AI chatbot is to host it locally. Running models on your machine ensures that no data leaves your device, keeping your queries private and preventing them from being used for further training. In two weeks, we’ll release a tutorial on how to set this up – if it’s something you want to explore.

Privacy-focused platforms

Brave’s Leo:
Leo AI is integrated into the Brave browser, allowing users to interact with an AI chatbot without installing extra apps or extensions. Leo not only provides real-time answers and content generation but also prioritizes privacy at every step.

  • No logging: Brave does not store or retain user data.
  • Reverse proxy server: Queries are passed through a proxy that strips IP addresses, ensuring the service cannot trace interactions back to you – even when using external models like those from Anthropic.
  • Local LLM compatibility: Brave allows users to connect Leo with locally-hosted models, letting you process queries on your own device while still benefiting from Leo’s AI capabilities during browsing. No data ever leaves your machine, giving you full control over your interactions.
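Brave hasn't published that proxy's code here, so the sketch below only illustrates the general idea: a reverse proxy forwards the query but discards anything that identifies the sender. The header names and the `sanitize` helper are invented for illustration, not Brave's actual implementation.

```python
# Toy sketch of an anonymizing reverse proxy: forward the request to the
# model provider, but drop headers that could identify the user.
# Header names and structure are illustrative, not Brave's actual code.

IDENTIFYING_HEADERS = {"x-forwarded-for", "x-real-ip", "cookie", "authorization", "user-agent"}

def sanitize(headers: dict) -> dict:
    """Return a copy of the headers with identifying fields removed."""
    return {k: v for k, v in headers.items() if k.lower() not in IDENTIFYING_HEADERS}

incoming = {
    "Content-Type": "application/json",
    "X-Forwarded-For": "203.0.113.7",   # the user's IP: must not reach the model
    "Cookie": "session=abc123",
    "User-Agent": "Brave/1.64",
}

forwarded = sanitize(incoming)
print(forwarded)  # only {'Content-Type': 'application/json'} survives
```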

Venice.ai:
Venice.ai is an interface for using AI chatbots that emphasizes censorship resistance and decentralization. It provides users with access to uncensored AI models, allowing for real-time text, code, and image generation.

Think of Venice as the front-end user interface to accessing these tools – it acts as a passthrough service, ensuring no logs are kept on its servers and requiring no personal accounts or identifiers.

On the back end, Venice leverages GPU marketplaces like Akash to provide the computing power needed to run AI models. These marketplaces are like an Airbnb for GPUs: individuals rent out their hardware to process your queries. Keep in mind that because the backend providers control the machines hosting the models, they can see your prompts and make their own logging decisions. Your interactions remain anonymous through Venice’s proxy system, which strips your IP address (similar to Brave’s proxies), and no centralized service aggregates your prompts across sessions to build a profile on you. Still, if you want to use these models privately, be careful not to identify yourself within a session.

It’s worth noting that neither Brave nor Venice uses your prompts for training. As far as logging is concerned, however, both require an element of trust: you are trusting Brave’s no-logging policy, or Venice’s no-logging policy plus that of the compute providers on the back end. Privacy for cloud-hosted AI isn’t yet perfect, because truly private inference is an unsolved problem, but among cloud-based AI chatbots, Brave and Venice are two of the best options available.

Balancing privacy and performance

If absolute privacy is your top priority, hosting models locally is your best option. However, this comes with trade-offs – locally-hosted models may lack the processing power of those available on cloud platforms. Using Brave and Venice allows access to more advanced models while still providing privacy protections.

Privacy Best Practices for AI Chatbots

If you do want to take advantage of these more powerful models on third-party servers but logging is a big concern for you, you can still use these systems comfortably – you just need to employ some best practices.
  • Local Models: Ask sensitive questions freely since all data stays on your machine.
  • Brave-Hosted Models: Brave’s no-logging policies provide high privacy, so if you trust Brave, you can comfortably use their AI tools. Personally, I have high trust in them, but you’ll have to make your own decision.
  • Third-Party Models: When using models where you’re unsure who is processing your query, avoid sharing identifiers like names or addresses. If your IP is being stripped first, and your prompts across sessions aren’t being aggregated by a single entity, you can feel comfortable asking sensitive questions that don’t directly identify you.
  • Centralized Platforms (e.g., ChatGPT): Be cautious about what you share, as these platforms build detailed profiles that may be accessible or used in unexpected ways in the future.

The power of choice in the AI era

In a world where data is often harvested by default, it’s easy to feel powerless. However, tools like Brave and Venice give us alternatives that prioritize privacy. For those willing to take it a step further, hosting models locally offers the highest level of control over personal data – something we’ll dive into in our upcoming video tutorial in two weeks, made in collaboration with The Hated One.

In this era of AI-driven data collection, it’s more important than ever to be thoughtful about what we share online.

Privacy doesn’t mean rejecting technology – it just means making smarter choices.

 

Yours in privacy,

Naomi Brockwell

Naomi Brockwell is a privacy advocate and professional speaker, MC, interviewer, producer and podcaster, specialising in blockchain, cryptocurrency and economics. She runs the NBTV channel on YouTube.

Should you use fingerprint unlock?

Cyber Security

Securing our smartphones is essential, but should you use a PIN, password, or biometrics like fingerprint unlock?

Published 2 November 2024
– By Naomi Brockwell

Our smartphones are more than just communication devices; they are repositories of our most sensitive information, from personal photos to banking apps. Securing them is essential. But should you use a PIN, password, or biometrics like fingerprint unlock? There are pros and cons to each. Let’s go through them so that you can decide what is the best option for your threat model.

Fingerprint unlock works by capturing and storing biometric data securely within your device. Most modern smartphones, such as those by Apple and Google, store this data in an isolated part of the device, ensuring that it never leaves your phone. This level of privacy is more robust than many realize, with Apple using the Secure Enclave and Android utilizing the Trusted Execution Environment (TEE) to encrypt and protect your fingerprint data. However, there are still trade-offs to using fingerprint unlock.

Pros of fingerprint unlock

Convenience and Privacy: A 16-character alphanumeric password is going to be far more secure. However, most people unlock their phones around 100 times a day, and it’s just not sustainable for the average person to protect their device this way, especially given the average threat model. Fingerprint unlock, by contrast, is incredibly convenient, allowing you to access your phone quickly without the hassle of entering something long and complex. It’s also more private in public settings, reducing the risk of shoulder surfing – a common tactic where thieves observe your PIN and later steal your phone. While privacy screens can help mitigate this risk, PINs can still be guessed by observing general PIN patterns. Using a password instead of a PIN, or scrambling the PIN layout, are other mitigations, but each trades away convenience.

Security Level: From a security perspective, fingerprint unlock is roughly equivalent to using a 5-digit PIN. This conclusion comes from understanding the False Acceptance Rate (FAR) associated with fingerprint systems. For instance, Apple’s Touch ID (although no longer used in modern iPhones, it still provides a good gauge) boasts a FAR of 1 in 50,000, which means there’s a 1 in 50,000 chance that an unauthorized user could access your device using a similar fingerprint. Given that a 4-digit PIN has 10,000 possible combinations, and a 5-digit PIN has 100,000 possible combinations, the security offered by a fingerprint is in the same ballpark.
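The arithmetic behind that comparison is simple enough to check directly:

```python
# Comparing fingerprint False Acceptance Rate (FAR) to PIN guess probability.
far_touch_id = 1 / 50_000          # Apple's published Touch ID FAR

pin4_combinations = 10 ** 4        # 10,000 possible 4-digit PINs
pin5_combinations = 10 ** 5        # 100,000 possible 5-digit PINs

p_guess_pin4 = 1 / pin4_combinations   # 1 in 10,000 per random guess
p_guess_pin5 = 1 / pin5_combinations   # 1 in 100,000 per random guess

# Touch ID's FAR sits between the two PIN lengths:
print(p_guess_pin5 < far_touch_id < p_guess_pin4)  # True
```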


Threat model considerations

While fingerprint unlock is suitable for many users, it may not be the best option for everyone, depending on your threat model. Common concerns include:

  • Unlocking While Asleep: There’s a fear that someone could unlock your phone using your fingerprint while you’re asleep. This is a legitimate concern but is more of a targeted attack scenario than a common risk.
  • Fingerprint Theft: Another concern is that someone could steal your fingerprints to unlock your device. While it’s possible – either by copying fingerprints left on objects you’ve touched or by replicating them from photos – this again is more relevant to high-value targets.
  • Coercion by Law Enforcement: In some jurisdictions, law enforcement can compel you to unlock your phone using your fingerprint, whereas they might not be able to force you to reveal a PIN or password, thanks to Fifth Amendment protections in the United States. However, these legal precedents are not uniform and vary widely by location.

If your threat model is higher – say, you’re concerned about targeted attacks or coercion – a long, random password is your best bet for security. However, that level of security comes with a significant convenience trade-off: entering a 16-character password many times a day is frustrating and unsustainable. Most people are more at risk of having their phone snatched in a public place, and defending against targeted attacks might not justify making the phone harder to unlock in daily life. There are also hybrid approaches: turning your phone off while you sleep or during a border crossing reverts the phone to its password or PIN, so you can use fingerprint unlock day-to-day and fall back on stronger protections in higher-risk situations.

The role of brute-force protections

A critical factor in your decision should be your phone’s brute-force protection mechanisms. For some people, which unlock method they use matters less than which device they choose, because different devices offer wildly different brute-force protections. Devices like Google’s Pixel phones are equipped with the Titan M2 chip, which includes a Weaver token mechanism. This technology adds a time delay to successive PIN attempts, protecting the device from brute-force attacks. On such devices, a random 6-digit PIN is considered sufficient for robust security. This level of protection is superior to the brute-force protections found in many other phones, including some Samsung models, which have been shown to be more vulnerable to brute-force attacks.
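To see why the time delay matters more than the unlock method, compare worst-case brute-force times with and without throttling. The delay values below are invented for illustration; they are not the Titan M2's actual Weaver schedule.

```python
# Rough illustration of why hardware-enforced delays defeat brute force.
# The numbers are invented; real devices use escalating delay schedules.

def seconds_to_exhaust(pin_digits: int, delay_per_attempt: float) -> float:
    """Worst-case time to try every PIN at a fixed cost per attempt."""
    return (10 ** pin_digits) * delay_per_attempt

# Without throttling, a 6-digit PIN at 1,000 guesses per second falls fast:
unthrottled = seconds_to_exhaust(6, 0.001)
print(unthrottled / 60)  # ≈ 16.7 minutes for the whole keyspace

# A fixed 10-second hardware delay per attempt stretches the same search
# to months; real implementations escalate the delay after repeated
# failures, pushing it toward decades:
throttled = seconds_to_exhaust(6, 10)
print(throttled / 86400)  # ≈ 115.7 days
```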

Conclusion

In summary, whether to use fingerprint unlock or a PIN/password depends largely on your specific needs and threat model. For most people, fingerprint unlock offers an excellent balance of security and convenience, comparable to using a 5-digit PIN. However, if you’re at a higher risk for targeted attacks, consider using a device with strong brute-force protections or opting for a longer, random PIN or password. Ultimately, the best security choice is often not the most extreme method possible, but the one that you can consistently maintain, without compromising your sanity.

 

Yours in privacy,

Naomi Brockwell


Justified criticism of Signal’s desktop app

Cyber Security

Published 19 July 2024
– By Karl Emil Nikka

The messaging app Signal has been in the news. It all started when Swift developer Pedro José Pereira Vieito revealed that the ChatGPT app for macOS saved conversations in plain text on the user’s computer. If the user’s computer was infected with spyware, there was nothing to stop the spyware from reading historical ChatGPT conversations (see Vieito’s demonstration video on Threads).

On July 4, the security duo Mysk noticed that Signal’s macOS app suffered from a similar problem. Signal does encrypt the locally stored messages, but to no avail: the app saves the encryption key in plaintext right next to the encrypted database. Attachments such as images and audio clips are also stored unencrypted. As with the ChatGPT app, this means spyware can steal all the information.

The encrypted database and encryption key are stored next to each other. The folder is accessible to any app running on the Mac. How could such a blunder be approved by an open-source project reviewed by many experts?

Mysk on Mastodon (2024-07-07)
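The problem is easy to demonstrate in miniature. In the toy sketch below, the XOR "cipher" stands in for Signal's real database encryption and the file names are invented; the point is that any process able to read the folder gets both the ciphertext and the key.

```python
import hashlib
import os
import tempfile

# Toy demonstration: an "encrypted" database is worthless if the key sits
# in plaintext in the same folder. The XOR keystream below stands in for
# Signal's real encryption; file names are invented for illustration.

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Symmetric toy cipher: XOR against a hash-chained keystream."""
    stream = hashlib.sha256(key).digest()
    while len(stream) < len(data):
        stream += hashlib.sha256(stream).digest()
    return bytes(a ^ b for a, b in zip(data, stream))

appdir = tempfile.mkdtemp()
key = os.urandom(32)
message = b"meet me at midnight"

# The app writes both files side by side -- exactly the criticized layout:
with open(os.path.join(appdir, "db.encrypted"), "wb") as f:
    f.write(xor_cipher(message, key))
with open(os.path.join(appdir, "config.json"), "w") as f:
    f.write(key.hex())               # the encryption key, in plaintext

# Any other process with filesystem access can now decrypt everything:
stolen_key = bytes.fromhex(open(os.path.join(appdir, "config.json")).read())
stolen_db = open(os.path.join(appdir, "db.encrypted"), "rb").read()
print(xor_cipher(stolen_db, stolen_key))  # b'meet me at midnight'
```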

Mysk also discovered that access to the poorly protected data was all that was needed to clone an existing Signal installation. They cloned the data from their computer into a virtual machine and immediately started receiving messages in the cloned installation. Some messages arrived at the original computer while others arrived at the virtual machine, which suggests that Signal’s infrastructure treated the original installation and its clone as one and the same.

Messages were either delivered to the Mac or to the VM. The iPhone received all messages. All of the three sessions were live and valid.

Mysk on Mastodon (2024-07-05)

Worst of all, the cloned installation never showed up in the linked devices overview.

Signal didn’t warn me of the existence of the third session [that I cloned]. Moreover, Signal on the iPhone still shows one linked device. This is particularly dangerous because any malicious script can do the same to seize a session.

Mysk on Mastodon (2024-07-05)

No backdoor but poor design

Mysk beat the drum and urged their followers on Mastodon and X not to install the macOS app. The security duo wrote that the app was not secure. Around social media, the flaws were magnified and began to be called everything from “vulnerabilities” to “backdoors”.

The highlighted flaws are neither vulnerabilities nor backdoors. For the flaws to be exploited, the computer must already be infected, i.e. running spyware. If one party’s computer is infected, no app, whatever its design, can guarantee the confidentiality and integrity of the messages being transmitted.

That said, Signal’s macOS app is still an example of poor security design. macOS offers several security-enhancing features that Signal’s developers have (deliberately) ignored, including the ability to store the encryption key in the Keychain instead of in plaintext among the user files.

A macOS app will never be as secure as an iOS app, but the developers of a security-critical app like Signal should still take advantage of every security enhancement the operating system offers.

To make matters worse, the flaws and opportunities for improvement have been known for years. When the flaws were pointed out in 2018, a Signal representative responded that Signal Desktop did not aim to offer, nor was it ever claimed to offer, encryption of the locally stored data (see Signal forum post).

It remains unclear why Signal developers prioritized features such as stories over updating the desktop app to follow best practices. Fortunately, the storm of criticism paid off and the Signal developers have set out to fix the deficiencies pointed out (see Protect database encryption key with Electron safeStorage API). Signal’s CEO, Meredith Whittaker, has also responded to the criticism in a post on Mastodon.

Recommendations

There is no doubt that Signal’s mobile app is more secure than its desktop app for macOS, Windows and Linux. Investigative journalists, freedom fighters, dissidents and other vulnerable groups should therefore, depending on their threat model, consider avoiding the desktop app until the flaws are fixed. If these groups use the desktop app and their computers become infected, their messages may fall into the wrong hands (although a user whose computer is infected likely has bigger problems on their hands).

If someone can access your filesystem, you have bigger problems to worry about.

Jurre van Bergen (Amnesty International Tech) on X (2024-07-06)

As usual, the alternatives must also be weighed. Avoiding the desktop app should not lead users to switch to less secure chat apps that do not even fully encrypt conversations. Such apps risk both leaking data as it is being transmitted and leaking data in case one of the parties to the conversation gets their computer infected. As mentioned above, no app can completely protect against the latter.

However, one lesson everyone should learn is to always disconnect a linked computer if malware is detected on it. This prevents potential spies from continuing to exploit cloned installations.

As a user, you can view and change your linked devices by opening the mobile app, tapping your profile picture and selecting Linked Devices. Should your antivirus program detect spyware on your computer, remove the computer from the list of linked Signal devices and don’t add it back until it has been cleaned of malware.

 


This article is published under the CC BY 4.0 license, except for quotes and images where another photographer is indicated, from Nikka Systems.

127 Swedish government agencies hit by cyber attack

Cyber Security

Published 27 January 2024
– By Editorial Staff
Finansinspektionen and the University of Gothenburg - two of the authorities involved.

This weekend’s cyber attack on the Finnish IT services company Tietoevry affected at least 127 Swedish government agencies. All were reportedly connected to the personnel management system Primula, which was also affected by the attack.

There are concerns that personal data from the authorities may have fallen into the wrong hands in connection with the attack, and Tietoevry was also forced to shut down its Primula service during the weekend hack.

The service is used, among other things, to manage and administer personnel data for up to 127 Swedish authorities, including sensitive bodies such as the Defence Intelligence Court, the Swedish Psychological Defence Agency and the National Debt Office.

Mikael Östlund of the Swedish Psychological Defence Agency tells tax-funded Swedish Radio that he is concerned that sensitive information about the home addresses of employees and their relatives may now have been exposed.

But Daniel Ström, president of the Defence Intelligence Court (Försvarsunderrättelsedomstolen), says he is not so worried, because it is only “personnel administration information” and the systems would not be used “if we considered this information worthy of protection”.

A large number of municipalities and regions also use Primula to manage personal data, and in recent days there have been many reports of disruptions and services that have stopped working.