Signal’s Founder Turns His Attention to AI’s Privacy Problem

The founder of Signal has been quietly working on a fully end-to-end encrypted, open-source AI chatbot designed to keep users’ conversations secret.

In a series of blog posts, Moxie Marlinspike makes clear that while he is a fan of large language models, he’s uneasy about how little privacy most AI platforms currently provide.

Marlinspike argues that, like Signal, a chatbot’s interface should accurately reflect what’s happening under the hood. Signal looks like a private one-on-one conversation because it is one. Meanwhile, chatbots like ChatGPT and Claude feel like a safe space for intimate exchanges or a private journal, even though users’ conversations can be accessed by the company behind them and sometimes used for training.

In other words, if a chatbot feels like you’re having a private conversation, Marlinspike says it should actually work that way too. 

He says this is especially important because LLMs represent the first major tech medium that “actively invites confession.” As people chat with these systems, they end up sharing a lot about how their brain works, including thinking patterns and uncertainties. 

Marlinspike warns that this kind of info could easily be turned against users, with advertisers eventually exploiting insights about them to sell products or influence behavior.

His proposed solution is Confer, an AI chatbot that encrypts both prompts and responses so that only the user can access them. 

“Confer is designed to be a service where you can explore ideas without your own thoughts potentially conspiring against you someday; a service that breaks the feedback loop of your thoughts becoming targeted ads becoming thoughts; a service where you can learn about the world – without data brokers and future training runs learning about you instead,” wrote Marlinspike.

Signal was founded in 2014 around similar principles, and its open-source encrypted messaging protocol was eventually adopted by Meta’s WhatsApp just a few years later. So, it’s possible Meta and other tech giants could eventually adopt Confer’s technology as well.

How it works 

According to Marlinspike, Confer is designed so that users’ conversations are encrypted before they ever leave their devices, similar to how Signal works.

Prompts are encrypted on a user’s computer or phone and sent to Confer’s servers in that form, then decrypted only in a secure data environment to generate a response.
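Confer's actual wire format isn't published, but the flow described above can be sketched roughly as follows. This is a toy illustration using only Python's standard library, with an HMAC-based keystream standing in for a real cipher; an actual implementation would use a vetted AEAD such as AES-GCM or ChaCha20-Poly1305, and all function names here are hypothetical.

```python
import hmac
import hashlib
import os

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Expand key + nonce into a pseudorandom keystream using HMAC-SHA256
    # in counter mode (a toy stand-in for a real stream cipher).
    out = b""
    counter = 0
    while len(out) < length:
        out += hmac.new(key, nonce + counter.to_bytes(4, "big"),
                        hashlib.sha256).digest()
        counter += 1
    return out[:length]

def encrypt_prompt(key: bytes, prompt: str) -> dict:
    # Runs on the user's device, before anything touches the network.
    data = prompt.encode()
    nonce = os.urandom(16)
    ct = bytes(a ^ b for a, b in zip(data, keystream(key, nonce, len(data))))
    tag = hmac.new(key, nonce + ct, hashlib.sha256).digest()  # integrity tag
    return {"nonce": nonce, "ciphertext": ct, "tag": tag}

def decrypt_prompt(key: bytes, msg: dict) -> str:
    # In the described design, decryption happens only inside the
    # server-side trusted execution environment.
    expected = hmac.new(key, msg["nonce"] + msg["ciphertext"],
                        hashlib.sha256).digest()
    if not hmac.compare_digest(expected, msg["tag"]):
        raise ValueError("message was tampered with in transit")
    ks = keystream(key, msg["nonce"], len(msg["ciphertext"]))
    return bytes(a ^ b for a, b in zip(msg["ciphertext"], ks)).decode()
```

The point of the sketch is the ordering: the plaintext prompt exists only at the two endpoints, while Confer's servers relay ciphertext they cannot read.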

Confer does this by combining several security tools. Instead of traditional passwords, it uses passkeys, unlocked with Face ID, Touch ID, or a device PIN, and derives encryption keys from them on the user’s device.
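Deriving a symmetric key from a passkey can be sketched with HKDF (RFC 5869), assuming the passkey yields a high-entropy secret, for example via the WebAuthn PRF extension. The article does not say whether Confer uses exactly this construction; the salt, info labels, and secret below are placeholders.

```python
import hmac
import hashlib

def hkdf(ikm: bytes, salt: bytes, info: bytes, length: int = 32) -> bytes:
    # HKDF (RFC 5869): extract-then-expand with HMAC-SHA256.
    prk = hmac.new(salt, ikm, hashlib.sha256).digest()   # extract
    okm, block, counter = b"", b"", 1
    while len(okm) < length:                             # expand
        block = hmac.new(prk, block + info + bytes([counter]),
                         hashlib.sha256).digest()
        okm += block
        counter += 1
    return okm[:length]

# A 32-byte secret a passkey's PRF extension might return after the user
# authenticates with Face ID, Touch ID, or a device PIN (placeholder value).
passkey_secret = bytes(range(32))
encryption_key = hkdf(passkey_secret, salt=b"confer-salt",
                      info=b"prompt-encryption")
```

Because the derivation is deterministic, the same passkey secret reproduces the same encryption key on each unlock, so no key material ever needs to be stored on a server.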

When it comes time for the AI to respond, Confer uses what it calls confidential computing, where hardware-enforced isolation is used to run code in a Trusted Execution Environment (TEE).

“The host machine provides CPU, memory, and power, but cannot access the TEE’s memory or execution state,” Marlinspike explained.

The LLM’s inference, its “thinking,” runs inside this confidential virtual machine; the resulting response is encrypted there and sent back to the user.

The hardware also produces cryptographic proof, known as attestation, that allows a user’s device to verify that everything is running as it should.
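The attestation check can be sketched in two steps: the client verifies that the report is signed by hardware it trusts, and that the report’s measurement (a hash of the code the TEE is actually running) matches a published expected value. Real attestation schemes such as AMD SEV-SNP or Intel TDX use vendor certificate chains and ECDSA signatures; in this stdlib-only sketch an HMAC with a pre-shared key stands in for the vendor signature, and every name and value is hypothetical.

```python
import hmac
import hashlib

# Hash of the exact TEE image the client expects (hypothetical value).
EXPECTED_MEASUREMENT = hashlib.sha256(b"confer-tee-image-v1").digest()

def verify_attestation(report: dict, vendor_key: bytes) -> bool:
    # 1. The signature proves the report came from genuine isolated
    #    hardware. (HMAC stands in for a vendor certificate-chain check.)
    expected_sig = hmac.new(vendor_key,
                            report["measurement"] + report["nonce"],
                            hashlib.sha256).digest()
    if not hmac.compare_digest(expected_sig, report["signature"]):
        return False
    # 2. The measurement proves *which* code is running inside the TEE,
    #    so a modified server binary would fail this check.
    return hmac.compare_digest(report["measurement"], EXPECTED_MEASUREMENT)
```

Only after this check passes would the client send its encrypted prompt, which is what lets a user’s device, rather than the service operator, decide whether the environment is trustworthy.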

All of this is done to keep users’ data secure, and not, as Marlinspike puts it, sent to “a data lake specifically designed for extracting meaning and context.”

Source: Gizmodo
