Why the Law Often Doesn't Recognize Privacy and Data Security Harms

In my previous post on privacy/security harms, I explained how the law is struggling to deal with privacy and data security harms. In this post, I will explore why.

The Collective Harm Problem

One of the challenges with data harms is that they are often created by the aggregated actions of many dispersed actors over a long period of time. They are akin to a form of pollution: each particular infraction might not, in and of itself, cause much harm, but collectively, the infractions do.

In a recent article, Privacy Self-Management and the Consent Dilemma, 126 Harvard Law Review 1880 (2013), I likened many privacy harms to bee stings. One bee sting might not do a lot of damage, but thousands of stings can be lethal.

In the movie Office Space, three friends create a virus to deduct a fraction of a cent from every financial transaction made from their employer's bank account, with the proceeds being deposited into their own account. The deductions would be so small that nobody would notice them, but over time, they would result in a huge windfall to the schemers. That's the power of adding up a lot of small things.
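The arithmetic behind the scheme is simple to sketch. The figures below (the size of each skim and the transaction volume) are purely hypothetical assumptions for illustration, not numbers from the movie:

```python
# Hypothetical illustration of the "salami slicing" scheme: each deduction
# is too small to notice, but they accumulate across many transactions.
SKIM_PER_TRANSACTION = 0.003    # three-tenths of a cent (assumed figure)
TRANSACTIONS_PER_DAY = 250_000  # assumed volume for a large employer
DAYS = 365

total = SKIM_PER_TRANSACTION * TRANSACTIONS_PER_DAY * DAYS
print(f"Unnoticed per transaction: ${SKIM_PER_TRANSACTION:.3f}")
print(f"Accumulated over a year:   ${total:,.2f}")
```

Under these assumed numbers, the skim accumulates to roughly a quarter of a million dollars in a year, which is the same dynamic as collective privacy harm: negligible per instance, substantial in aggregate.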

The problem is that our legal system struggles when it comes to redressing harms created to one person by a multitude of wrongdoers. A few actors can readily be sued under joint and several liability, but suing thousands is much harder. The law has better mechanisms for when many people are harmed by one wrongdoer, such as class actions, but even here the law has difficulties, as only occasionally do class members get much of a benefit out of these cases.

The Multiplier Problem

The flip side of collective harm is what I call the “multiplier problem,” which affects the companies that cause privacy and data security problems. A company might lose personal data, and these days, even a small company can have data on tens of millions of people. Judges are reluctant to recognize harm because it might mean bankrupting a company just to give each person a very tiny amount of compensation.

Today, organizations have data on so many people that when there’s a leak, millions could be affected, and even a small amount of damages for each person might add up to insanely high liability.
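To see why judges balk, it helps to run the multiplier arithmetic. The breach size and per-person award below are assumed round numbers, not figures from any actual case:

```python
# Hypothetical arithmetic behind the "multiplier problem": even trivial
# per-person damages become ruinous at the scale of a typical breach.
AFFECTED_PEOPLE = 20_000_000   # assumed breach size (tens of millions)
DAMAGES_PER_PERSON = 10.00     # assumed "tiny" award per person

total_liability = AFFECTED_PEOPLE * DAMAGES_PER_PERSON
print(f"Per-person award:  ${DAMAGES_PER_PERSON:,.2f}")
print(f"Total liability:   ${total_liability:,.0f}")  # $200,000,000
```

Ten dollars per person is barely compensation at all, yet across twenty million people it yields a nine-figure judgment that could bankrupt many defendants.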

Generally, we make those who cause wide-scale harm pay for it. If a company builds a dam and it bursts and floods a town, that company must pay. But with a data leak, courts are saying that companies should be off the hook. In essence, they get to use data on millions of people without having to worry about the harm they might cause. This seems quite unfair.

It takes a big entity to build a dam, but a person in a garage can create an app that gathers data on vast numbers of people. Do we want to put a company out of business for a data breach that causes people only a minor harm? When each case is viewed in isolation, it seems quite harsh to annihilate a company for causing tiny harms to many people. Courts say, in the words of the song my 3-year-old son will not stop singing: "Let it go." But that still leaves the collective harm problem. If we let it go all the time, then we have death by a thousand bee stings (or cuts, whichever you prefer).

The Harm of Leaked or Disclosed Data Depends Upon Context

People often make broad statements that the disclosure of certain data will not be harmful because it is innocuous, but such statements are inaccurate because so much depends upon context.

If you're on a list of people who prefer Coke to Pepsi, and a company sells that list to another company, are you really harmed? Most people wouldn't view a preference for Coke versus Pepsi as mattering all that much. Suppose the other company starts sending you unsolicited emails based on this information. You don't like getting these emails, so you unsubscribe from the list. Are you really harmed by this?

But suppose you’re the CEO of Pepsi and the data that you like Coke is leaked to the media. This causes you great embarrassment, and you are forced to resign as CEO. That might really sting (though I’m certain you would have negotiated a great severance package).

Another example: For many people, their home address is innocuous information. But if you’re an abuse victim trying to hide from a dangerous ex-spouse who is stalking you, then the privacy of your home address might be a matter of life or death.

Moreover, the harmfulness of information depends upon the practices of others. Consider the Social Security number (SSN). As I discussed in a previous post, the reason why SSNs are so harmful if disclosed is because organizations use them to authenticate identity, treating them as akin to passwords. It is this misuse of SSNs by organizations that makes SSNs harmful. If SSNs were never misused in this way, leaking or disclosing them wouldn't cause people harm.

The Uncertain Future

Problems of Proof

Another difficulty with harm is that the harm from privacy and data security violations may occur long after the violation. If data was leaked, an identity theft might occur years later, and a concrete injury might not materialize until after the statute of limitations has run.

Moreover, it is very difficult to trace a particular identity theft or fraud to any one particular data breach. This is because people’s information might be compromised in multiple breaches and in many different ways.

A big complicating factor is that very few identity theft cases result in much of an investigation, trial, or conviction. The facts never get developed sufficiently to figure out where the thief got the data. For example, in one estimate, fewer than 1 in 700 instances of identity theft result in a conviction.

Why are identity theft cases so neglected? Identity theft can occur outside of the locality where a victim lives, and local police aren’t going to fly to some remote island in the Pacific where the identity thief might be living. Police might be less inclined to go after an identity thief if the thief’s victims are not in the police’s jurisdiction. Cases can take a lot of resources, and police have other crimes they want to focus on more.

Without the thief being caught and fessing up about how he or she got the data, it will likely be very hard to link up identity theft or fraud to any one particular data breach.

The Aggregation Effect

With privacy, the full consequences depend not upon isolated pieces of data but upon the aggregation of data and how it is used. This might occur years in the future, and thus it is hard to measure the harm today.

Suppose at Time 1 you visit a website and it gathers some personal data in violation of its privacy policy. You are upset that it gathered data it shouldn't have, but nothing bad has happened to you yet. At Time 2, ten years from now, the data that was gathered is combined with a different set of data, and the result of that combination is that you're denied a loan or placed on the No Fly List. The harm at Time 1 is different from the harm at Time 2. If we knew about the use of the data at Time 1, then we could more appropriately assess the harm from the collection of the data. Without this knowledge at Time 1, it is hard to assess the harm.

Harm is Hard to Handle

Privacy harms are cumulative and collective, making them very difficult to pin down and link to any one particular wrongdoer. They are understandably very hard for our existing legal system to handle.


The author thanks SafeGov for its support.
