‘Wearables’ In Court: How Your Electronic Data Becomes Evidence
Over the past two decades, a large body of case law has developed concerning the admissibility of computer data–or, as it’s called in legal parlance, “electronically stored information”–as evidence. Records of e-mails and text messages have become a driving force behind many court decisions in the United States. But American courts have not yet had to deal with one form of data that is growing increasingly common: information collected by ‘wearable’ technology. A recent case in the Canadian courts has stimulated public discussion around the topic–and raised more questions about electronic surveillance and common law jurisprudence.
A young woman in Calgary, Alberta was working as a personal trainer when she was injured in a car accident; she filed a personal injury lawsuit, and bore the burden of proving that she deserved compensation. Typically, a physician is asked to perform a physical examination and testify in court–but Richard Hu, a Calgary-based surgeon who was often called as an expert witness in these cases, devised a novel legal tactic. Hu is the founder and CEO of Vivametrica, a firm that aggregates information collected from thousands of activity trackers to do population-level behavioral analysis. He approached the lawyers representing the personal trainer, and suggested that they ask their client to wear a Fitbit for several months, and compare the data to the benchmarks set by Vivametrica’s algorithms. If her recorded activity fell below the ‘average’ for her demographic, her injury claim would be justified.
It was a brilliant marketing move that generated a lot of press coverage for Hu’s company–but it was also a cunning legal maneuver. Whereas in the past the defendant’s attorney might have hired a private investigator to follow the plaintiff and collect evidence contradicting her claim (or ‘follow’ their social media presence, as in one medical malpractice suit that was settled quickly after the patient uploaded a photo of themselves at a bull-riding event), the plaintiff was now placing herself under continuous surveillance. “Prior to this, all we really had to put forth to the insurance companies was what our client was saying to her doctors,” one of the attorneys representing the personal trainer told Maclean’s. “If we have actual evidence to show that her activity levels are lower for somebody her age, then that backs up what she’s saying. It’s all about evidence, at the end of the day.”
Vivametrica and other “population health management” firms, like Welltok and Staywell, are part of a massive (and growing) health surveillance industry. It is now widely understood that public health care costs are rising rapidly around the world–at the same time, the United States alone will face a shortage of nearly 100,000 physicians by 2025. But aside from the macro-level problems of public health, there is a lot of money at stake: venture capitalists poured seven billion dollars into health technology last year, as the mantra “we need algorithms, not doctors” begins to take hold in Silicon Valley.
In a previous post, we discussed in some detail the complex relationship between insurance companies and health technology manufacturers–one of the major issues we investigated was the incorporation of fitness trackers into ‘corporate wellness’ programs, to monitor employees’ health and determine insurance rates. A significant question emerged: Who defines what positive (or ‘healthy’) behavior is? What’s at stake when “black box” hardware and “trade secret” algorithms become our functional ‘representatives’ in the eyes of courts and corporations? It’s an abstract problem, and we’re only beginning to scratch the surface here.
The Calgary lawsuit is an example of ‘wearable’-generated data being used to support a claimant in an injury case–but a related lawsuit in Nova Scotia hints at some hidden costs and dangers of our technological ‘progress.’ In December 2005, Peter Laushway, a Canadian businessman who sold health products on commission over the internet, suffered serious injuries in a motor vehicle accident. Laushway filed an insurance claim for lost income: he argued that the accident prevented him from completing sedentary tasks — like sitting for long periods of time in front of his computer — that were necessary to do his job. The insurance company demanded that he prove his claim by turning over his computer hard drive to a forensic expert for analysis, and their request was approved by the chambers judge.
Laushway filed an appeal, arguing that the request was a breach of his privacy; the appeals court in Nova Scotia was tasked with determining how to balance Laushway’s ‘reasonable expectation’ of privacy with his duty to provide the court with relevant information about his ability to work. Eight years later, the court finally reached a decision: “There was a clear, direct link between the hours he said he spent at his computer, and his income as a salesman. The information was relevant, and the respondents should be entitled to access that information.” So long as it was restricted to metadata relevant to the facts of the case — how much time Laushway spent on his computer, rather than the actual contents of his hard drive — the insurance company was allowed to see Laushway’s data.
The parallels between the Calgary and Nova Scotia cases are clear — they both involve attempts to prove to an insurance company that one’s health and ability to earn income have been diminished. But whereas the woman in Calgary volunteered to wear an activity tracker with the specific intention of collecting evidence to support her claim, Laushway’s data was taken from him without his consent, under the effective coercion of a court order. Laushway’s case raises the possibility of a court — or any government agency — issuing a subpoena compelling a ‘wearable’ manufacturer to release a user’s data, without that user’s knowledge or permission. Furthermore, it illustrates the scope of the problem of passively-collected data: ‘wearables’ are just one pole of the “Internet of Things.” What happens when all your stuff is connected to the internet–and your car (or your shirt, or shoes, or television, let alone your phone) is snitching on you?
Although we’ve only discussed civil litigation so far, the scope of the problem far exceeds insurance claims and personal injury lawsuits. The ‘wearables’ cases illustrate the fuzzy boundaries between corporate surveillance (data collection from consumer electronics) and the more general regime of government surveillance. The privacy policies of ‘wearable’ devices all anticipate the possibility that their information will be requested by courts; for example, Fitbit’s terms of service state that the company will surrender a user’s data if “disclosure is reasonably necessary to comply with law, regulation, valid legal process (e.g. subpoenas or warrants served on us), or governmental or regulatory request.” These kinds of phrases are commonplace in nearly every terms of service you have agreed to–and it’s often up to the internet companies’ attorneys to decide what constitutes ‘reasonably necessary’ compliance.
In the United States, the right to privacy of electronic data is limited by the so-called “third-party doctrine.” In 1976, Michael Lee Smith robbed a woman in Baltimore, and began making threatening phone calls to her home. Later, the police spotted a man matching the robber’s profile driving near the scene of the crime; they took down his license plate number and learned Smith’s identity. Without a warrant, the police asked the phone company to install a ‘pen register’ to record the numbers Smith was dialing; once they discovered Smith was calling the victim, they got a warrant to search his home and arrested him for the robbery. In 1979, the Supreme Court ruled that the police did not need a warrant to install the pen register: “This Court consistently has held that a person has no legitimate expectation of privacy in information he voluntarily turns over to third parties.”
The Electronic Communications Privacy Act (ECPA) regulates how companies must respond to government requests for electronic data. The ECPA was passed in 1986 and, like much of the statutory law around electronic privacy, it is poorly equipped to handle the realities of today’s internet. A clear example of the ECPA’s shortcomings is the subpoena process. In most jurisdictions, there is no requirement for a judge to review a subpoena before it is issued — even the SEC or the IRS can make these requests, and they often do. Under the ECPA, any “communications” held by a third party for more than 180 days are considered “abandoned” by the original owner. The Justice Department has argued that an email or text message, once opened by its recipient, can be obtained with a subpoena alone–these arguments have been rejected by some federal courts, but the issue remains unresolved in other parts of the country.
To a certain extent, it’s up to each company’s discretion how to respond to requests under the ECPA; for example, Google’s policy is to require a search warrant for any requests pertaining to the actual content of a user’s data. For subpoenas targeted at Gmail accounts, Google will restrict the surrendered data to subscriber information (e.g., name, associated email addresses, phone number, etc.) and a list of IP addresses indicating where a user has logged in (with their associated timestamps). With a court order (which requires proof that the requested information is relevant to a criminal investigation), Google will provide more detailed metadata–like a list of a user’s emails, with all content stripped except their IP addresses, timestamps, and “to” and “from” fields. Only once served with a warrant (where a judge has established probable cause) will Google turn over a user’s search history, Gmail messages, documents, photos, and so on.
We only know about Google’s practices because their legal team produces a detailed “Transparency Report” every year, something the vast majority of internet companies don’t (or can’t) do. And in spite of its considerable legal resources, Google doesn’t always succeed in defying the courts: last year, after Google refused to hand over a man’s emails to his (former) employer without the man’s explicit consent, a California judge ruled that courts can order a litigant to consent, allowing the court to obtain those emails using a subpoena alone. In another important case (still ongoing), Microsoft is mounting its own challenge to the Justice Department’s attempts to obtain users’ emails without warrants. Unfortunately, in many cases, the mere threat of “contempt of court” sanctions is enough to secure compliance from internet companies–even if a subpoena has been filed improperly or illegally.
And we still haven’t even mentioned the Foreign Intelligence Surveillance Courts. The Foreign Intelligence Surveillance Act was passed in 1978, in response to revelations that the executive branch had been spying on political and activist groups. It created a court to oversee the surveillance activities of executive agencies; today, the court is effectively a ‘rubber stamp’ for those agencies. If you have been profiled as a suspect in a national security investigation — for example, by encrypting your emails, searching for “suspicious stuff,” attending a protest, or practicing Islam — your electronic information might be obtained from an internet company, cloned, and stored in a government data center somewhere, under the same legal precedents outlined above. And you may never learn who’s been looking at your information–because FISA requests are secret, and compliance is secured through ‘National Security Letters’ that prohibit the internet companies from talking.
There are mounting legal challenges to the U.S. government’s dragnet surveillance programs, with mixed results. In one case, a federal judge ruled that collection of telephone metadata en masse was constitutional according to the “third-party doctrine”; in another case, a different judge reached the opposite conclusion. Thanks to Edward Snowden, the attitudes of big internet companies toward government surveillance appear to be changing (if only because their cooperation with the government has been really bad PR). Google and other major internet companies have put their resources behind the Digital Due Process Coalition, which seeks updates to the ECPA — including a clear search warrant requirement for all private communications, documents, and location data, and some protections against certain ‘bulk’ information requests. Fortunately, there have also been some renewed Congressional efforts to revise large portions of the law.
There is a serious need for transparency around your data — what’s being collected, how it’s collected, and how it’s analyzed. With advances in technology, and the massive popularity of television dramas like CSI, jurors have come to place an inordinate amount of trust in the testimony of ‘forensic analysts.’ But the government’s techniques of auditing and analyzing evidence are no less questionable than its methods of obtaining it — earlier this year, the FBI acknowledged that nearly every examiner in its forensic unit had “overstated” forensic matches in ways that favored prosecutors, in almost every trial over a two-decade period. That means hundreds of potentially innocent people have been put in jail during that time (though in the United States that’s nothing new). Current sensor technology is notoriously unreliable, though it is improving quickly — who will see the data, and who will decide how it is interpreted? How will we ensure that electronic evidence is authentic, and that methods of analysis are reliable?
Fight for the Future
We are not far from a future in which a constant stream of your biometric information is being recorded and uploaded to the internet: Apple wants its watch to monitor diabetics’ blood sugar levels throughout the day, Google plans to release medical-grade consumer devices that track things like skin temperature and light exposure, future “smart homes” may passively analyze your breathing and heart rate — all this, on top of the enormous amount of information your phone’s sensors have already gathered about you. And although these information streams are intimately tied to your ‘health,’ data collected by consumer electronics are not considered ‘healthcare information’ under the Health Insurance Portability and Accountability Act, and therefore don’t receive the same protections as your medical records. It’s not difficult to imagine why an insurance agent, loan officer, criminal prosecutor, or potential employer (to name a few) would be interested in this information.
Image by Jack Ohman/Sherbit