The Secret to Protecting Consumers in the Digital Age
Privacy is more important than ever. It’s also a nearly meaningless term for making coherent policy. How we live in our digital fishbowls is remarkably discordant with the posture taken by strict privacy adherents. The White House’s new consumer data privacy recommendation offers a chance, then, to examine the new technologies in store, what consumers and companies really see as being at stake, and a new framework for protecting what we hold dear in our digital lives.
Consider IBM’s recent announcement that it’s pairing up with Apple, Medtronic, and Johnson & Johnson to use its Watson artificial intelligence system to give owners of fitness trackers and smartphones fresh insights on their personal health data. Users want lifesaving insights and already offer up varying degrees of information to these companies, so by mashing up these data stores they increase their chances for serendipitous insights. This agreement, according to IBM senior vice president John Kelly, “presents an unprecedented opportunity to transform the ways in which we manage our health.”
Data fuels the information economy, offering consumers benefits as well as greater control over their information. Yet health data is incredibly personal. The last thing you or I want is to be kept out of the loop about how our data are being used. Even more so, we want to avoid direct harm. And laced throughout is a sense of fear: data has become a four-letter word in the popular mind.
In response, the White House is calling on companies to develop codes of conduct for handling consumer information and charging the Federal Trade Commission (FTC) with overseeing them. Among the standards called for by the Administration are rigorous notice and consent efforts for how data are collected, used, and shared. If passed into law, what the White House is proposing would preempt state laws and frame the debate on privacy for years to come.
On the plus side, these principles seem sensible. They call for clear “privacy and security practices”; who doesn’t want industry best practices and clarity for consumers? And as Adam Thierer of the Mercatus Center observed, they preempt a “crazy-quilt of overlapping digital privacy and security laws.”
Yet the devil’s in the details. After calling for simple consumer notices, the White House demands seven very specific components. And while asking firms to specify how data will be collected, used, and retained, the Administration fails to consider how anyone is supposed to know all of these uses in advance. Unless, that is, companies were to adopt a policy of data minimization, which is the equivalent of Frodo’s vision for the One Ring: to guard and destroy it, lest they fall prey to its power. This reflects a pattern seen throughout the White House’s privacy recommendations: offering sensible ideas that in practice require adopting a far more worrisome set of policy frameworks.
In this new data regime, every device and service plugged into the Internet falls prey to regulation. And what compliance regime must be adopted? Only the same one that brought us interminable consent forms choked in legalese. The same forms that would require taking a month off of work every year to read all the way through. As Fred Cate of Indiana University observes, “Nowadays individuals are often presented with long and complex privacy notices routinely written by lawyers for lawyers, and are then requested to either ‘consent’ or abandon the use of the desired service.” The result is merely the illusion of privacy and control.
So let’s play a game for a second. Suppose someone were to hit delete on the White House’s plan and start over. What would be in a new privacy plan? It would likely begin with some key priorities, namely: protecting consumers against harm, encouraging innovation, and clarifying roles.
The currency of the data market is trust, not privacy. Consumers need to know that engaging in transactions won’t make them susceptible to harm or abuse. Responsibility falls on the users of the data to give consumers appropriate controls. As with insurance, the goal is to have a single party cover liabilities for reasonably foreseeable harms, and to mitigate the risks that come with the actual use of data.
Companies are trustees of consumer data, as Ben Wittes and Wells Bennett have explained, which brings with it obligations to serve as good stewards. These considerations are fluid and dynamic, as is the market, but a few points seem clear: providing user safety, disclosing uses, offering consumers control, and keeping their promises. These obligations offer more specific protections than a blanket protection of privacy, which means different things to different people, and are easier to enforce. (They may also be better overseen by data ethicists than lawyers.)
Such an approach would be paired with more consumer education (rather than saddling consumers with all of the responsibilities of endless consent), clear obligations on companies (rather than leaving those obligations to be assumed by regulators), and a specific role for regulators (offering finely tailored responses to clear harms as they arise). That is a simpler, more scalable response to a complicated and fast-changing marketplace.
Last, but not least, there needs to be a coherent engagement with the costs and benefits of our big data era. We’ve become expert at quantifying risks, which are often concentrated, but struggle to weave together the immense and scattered benefits accruing to individual, corporate, and national interests. Policymakers need tools to make full cost-benefit analyses of the all-important questions around privacy and big data.
Many of us are bound to struggle over the coming years to balance the costs and benefits of the quantified lives we’ve willingly chosen. The moment we see privacy as the starting point and means of debate rather than its goal, we can begin to focus instead on how to embrace big data’s benefits while yelling “stop” at its concrete harms.