There’s a widespread misconception that federal law protects your privacy. It doesn’t, at least not explicitly. Congress has managed to squander a decade’s worth of bipartisan agreement about the internet’s data problems. In the absence of legislation, one group of regulators recently stepped in to fill the void. It’s a ragtag group of government cowboys that calls itself the Federal Trade Commission.
Over the past year, the FTC picked up the few meager laws on the books that have anything to do with privacy and repackaged them as tools to address big data’s worst offenders. Through innovative legal arguments and landmark settlements, the FTC is rewriting the rules of the internet — just in time to usher in a platform shift as AI and other technologies spark a new era of the web.
The Federal Trade Commission Act only gives the agency the authority to regulate “unfair or deceptive” business practices. For years, privacy experts assumed that meant consumers were out of luck: as long as companies weren’t telling outright lies, they were free to do as they pleased with your data. The FTC reached a $5 billion privacy settlement with Facebook in 2019, but the case hinged on ways the company misled users — rather than allegations that the unpleasant ways Facebook used data were inherently unlawful.
But under the leadership of Lina Khan, the Biden-appointed FTC chairperson, the commission has taken up data misconduct with unprecedented vigor.
The FTC does have some rulemaking authority, but it’s a slow, arduous process. In the meantime, it is changing tech policy by stretching existing regulations to places no one believed they could go.
Most significant in this novel legal offensive has been a case against GoodRx, a prescription medication coupon service. Contrary to popular belief, the Health Insurance Portability and Accountability Act (HIPAA) generally doesn’t apply to anyone other than doctors, insurance companies, and their business associates. But after an investigation by this reporter found GoodRx shared users’ prescription data with Google, Facebook, and other companies that work in advertising, the FTC revived a long-dormant rule that requires health companies to disclose data breaches. The FTC argued GoodRx broke the Health Breach Notification Rule by failing to disclose its data sharing practices, setting a precedent that extends legal protections to medical data for the very first time.
The FTC has reached several other groundbreaking settlements in the past year, such as a case against Fortnite maker Epic Games. The Fortnite case marked the government’s first major intervention in the realm of “dark patterns,” a term for intentionally confusing website and app designs that trick consumers. Epic Games agreed to a half-billion-dollar fine. Other recent landmark cases saw the FTC redoubling kids’ privacy protections and cracking the whip on Amazon for significant privacy violations with its Alexa smart speakers and Ring smart doorbells.
Ronald Reagan once said the most frightening words in the English language are, “I’m from the government, and I’m here to help.” For anyone who makes their money spying on Americans, that may be true when it comes to the FTC.
Gizmodo sat down with Samuel Levine, the Director of the FTC’s Bureau of Consumer Protection, for an extended interview on how the FTC envisions its groundbreaking attack on privacy problems, its plans for the future, and its effort to build a new regulatory environment that protects consumers without stifling a rapidly shifting tech landscape.
This interview has been edited for clarity and consistency.
Thomas Germain: Sam, why don’t we start with a broad overview of what’s changing. Since the dawn of the internet, it’s felt like companies could do almost whatever they want as long as they could get you to click “I agree” on a privacy policy.
Samuel Levine: We’re done preaching this fiction that the markets can self-correct, or that consumers can protect themselves by reading privacy policies. For the last two decades we’ve had a regime where companies felt like they could put anything in their privacy agreements and get away with it if consumers said yes.
Big picture, the shift we’ve made as an agency is stating plainly what I think many people already knew, but hasn’t really been said by anyone in government: the notice-and-choice regime is not working. It might have made sense two decades ago, but it does not make sense today. It’s unreasonable to put the burden on consumers to read hundreds of thousands of pages of privacy policies, let alone to understand them.
We’ve worked through at least half a dozen cases that include data minimization, outright prohibitions on sharing sensitive data, and other substantive protections that people didn’t think were possible two or three years ago. We’re also considering market-wide rules on commercial surveillance and data security.
TG: This privacy-policy, notice-and-choice regime has been the status quo for a long time. In the absence of more input from Congress in terms of a federal privacy law, what’s the alternative? What does the FTC expect from companies?
SL: First, we want Congress to pass privacy legislation. We’re doing everything we can, but nothing we do is a substitute for comprehensive federal legislation. That remains our position. However, we still expect companies to accurately disclose how they’re handling people’s data. And if they fail to do so, we’re going to hold them accountable.
That said, what we’ve tried to do is remind the marketplace through our enforcement actions that “Deception” is not our only authority. We also have authority to prohibit and take action against “unfair” practices which are defined in our statute as practices that cause injury, that are not reasonably avoidable by consumers, and that don’t have countervailing benefits to consumers or competition. If a company’s data practices harm people, we’re prepared to take action, even if those practices are accurately disclosed. In other words, we’re not just looking at whether companies are telling the truth about how they’re using people’s data, we’re thinking about whether companies are using people’s data in a way that is likely to harm us.
TG: What exactly does the word “harm” mean here? That’s an ongoing debate with privacy problems. People say, “Sure, maybe you’re creeped out, but you’re not losing money or anything. What’s the big deal?”
SL: It depends on the context, but to be clear, we’re not looking only at financial injury. And our statute is not only “harm” but also “likely to harm.” That’s an important distinction. I don’t want to comment on pending litigation, but for example we have practices by data brokers that can lead to stigma, discrimination, an increased risk of stalking, things of that nature. These are real risks, and we’ve taken the position that those harms are cognizable under the FTC Act, even if there is no monetary injury.
As a society we are long past the point where we buy into the idea that not losing money means you’re not going to be harmed. There are all sorts of ways — and they’ve been well documented by you and many other journalists, as well as in our cases — that people are harmed by reckless data practices in a manner that can’t always be quantified in dollars and cents.
TG: If that’s the definition of harm, you could apply that logic to almost the entire data broker industry. You could imagine the FTC almost wiping the data broker business off the map entirely.
SL: That’s not the goal. The goal is to curb practices that we believe are breaking the law. One of the things you see in that industry is there are companies out there that are taking no steps to filter out sensitive data, no steps to ensure that only responsible parties can purchase sensitive data, and no steps to ensure that this data isn’t being used in ways that could harm people. You’re right that these problems are widespread. But for purposes of enforcement action, we’re looking squarely at individual companies, individual cases, and individual ways that consumers can be harmed.
TG: I’ve got a hypothetical for you, which is probably a bad word for someone who works in government. Some experts I’ve spoken to say we’re moving in a direction where AI and predictive analytics become so effective that we can do things like ad targeting, for example, while barely collecting any data about you whatsoever. Companies are getting better at saying “we know what type of person you are, so the specifics of your behavior don’t matter.” That could leave us in a place where this problem has nothing to do with “privacy,” and it’s just an exercise in power. All the existing laws we have are pretty much about consent, rather than banning harmful practices outright. What do regulators do if that becomes a reality?
SL: I think it’s an excellent observation. I guess I would make two points, one looking back and one looking forward. Looking back, we are now living through the consequences of many years of unfettered data harvesting. And it’s true, so much has been collected, in so many ways, by so many companies, across so many devices, and in so many forms. So much so that many of these companies, especially the largest firms, may no longer need to collect additional data in order to target people… which, by the way, could raise some competition concerns as well. It tends to advantage incumbent firms.
The reality we find ourselves in now is directly attributable to that history of unfettered data harvesting.