How Smart Devices Could Violate Your Privacy
It was a Sunday morning in November 2015 when police arrived at the home of the defendant, James Bates. Responding officers discovered his colleague’s body in the backyard, floating face up in a hot tub with a black eye and a swollen lip. After a witness recalled hearing music streaming through the defendant’s Amazon Echo smart speaker on the night of the murder, prosecutors drafted a search warrant requesting that Amazon turn over any recordings or other data collected during the commission of the crime. At that time, the prosecution did not know whether the speaker had, in fact, recorded anything.
The Amazon Echo is a wireless speaker that operates as a digital assistant by recording voice commands and sending them to Amazon’s data center. The Echo device is always listening, but only activates and begins recording when it hears its “wake word”; the Echo answers to the name “Alexa.” Once Amazon computers receive the streamed data, they interpret and process the information and transmit it back to the local speaker, enabling a real-time response.
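The listen-locally, process-remotely loop described above can be sketched in a few lines of Python. This is a simplified illustration, not Amazon’s actual software: the names (`WAKE_WORD`, `process_in_cloud`, the `"<end>"` marker) are invented for the example, and words stand in for audio frames.

```python
# Illustrative sketch of a wake-word gating loop, as the article describes it.
# All names here are hypothetical; real devices process raw audio, not words.

WAKE_WORD = "alexa"

def process_in_cloud(utterance: str) -> str:
    """Stand-in for the remote server that interprets a command."""
    return f"Processed: {utterance}"

def listen(stream):
    """The device 'hears' every word, but forwards nothing to the cloud
    until the wake word is detected; '<end>' marks the end of a command."""
    responses = []
    awake = False
    buffer = []
    for word in stream:
        if not awake:
            # Local matching only: nothing leaves the device yet.
            if word.lower() == WAKE_WORD:
                awake = True
                buffer = []
        elif word == "<end>":
            # Command complete: only now is the buffered speech sent out.
            responses.append(process_in_cloud(" ".join(buffer)))
            awake = False
        else:
            buffer.append(word)
    return responses
```

Running `listen(["hello", "alexa", "play", "music", "<end>"])` forwards only “play music” to the stand-in server; the word “hello,” spoken before the wake word, never leaves the loop. That gap between what the device hears and what it transmits is precisely what the Bates warrant put at issue.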
When the Echo was released in late 2014, the broader category of virtual assistants – outside of smartphones and touch screen electronics – had not yet taken off. Since then, other digital devices with voice recognition programs have entered the market, such as Google Home, Apple’s HomePod, and Microsoft’s Cortana.
“The benefit of embedded wireless devices is that we can make objects ‘smart’ – give them logic, sensing, and computational capabilities,” says Philip Levis, an associate professor of computer science and electrical engineering at Stanford University. “Making objects smart lets them act autonomously on our behalf – they can collect, process, and act on data to help us and improve efficiency.”
A virtual lifestyle offers easy access to information and makes day-to-day tasks seem seamless. These electronics function through sensors that collect and transfer large amounts of consumer information from the device to external computers owned by companies like Google and Amazon. It is these outside servers, legally considered third parties, that provide the computing power to enable real-time, personalized responses.
Where smart technologies are concerned, the expectation of privacy extends only from the consumer to the machine. Once the machine communicates with an outside server – even one controlled by the product’s manufacturer – that expectation of privacy is lost. Currently, law enforcement can obtain a search warrant compelling a third party to turn over data recorded by a smart device if the company can control or access the information.
“Where you have a smart device inside of the home, there is a strong incentive for a company to shift the data to the cloud,” says Dr. Daniel Boneh, a computer scientist who leads the applied cryptography group in Stanford University’s computer science department. “Data trails generate real-time insights and analytics, which let companies better their services.”
But sending data trails to third-party platforms does more than tailor smart device answers and improve speech recognition – it gives those platforms, like Amazon, access to the underlying data. As more of our lives move online, consumers are beginning to question encryption, data collection, and who should be able to use that information.
Professor Joel Reidenberg, founder of Fordham University School of Law’s Center on Information Law and Policy, explains that, as far as the law is concerned, consumers are effectively choosing to bring surveillance devices into their homes. “Seeking a warrant for video surveillance taken in proximity to a murder wouldn’t be considered a fishing expedition. This is no different.”
In the Bates case, Amazon satisfied a portion of the warrant, providing only the defendant’s account information and purchase history. Amazon declined to turn over additional information from its server, including any audio recordings, claiming that the remainder of the request was over-broad. On February 17th, 2017, Amazon filed a motion to void the search warrant, arguing that, “Given the important First Amendment and privacy implications at stake, the warrant should be quashed unless the Court finds that the State has met its heightened burden for compelled production of such materials.”
Kathleen Zellner, defense counsel for James Bates, subsequently filed a motion stating that her client would acquiesce to the prosecution’s request and voluntarily hand over the sought recordings. The scheduled discovery hearing on Amazon’s motion was canceled, as it was no longer necessary.
There are certainly benefits – and perhaps necessities – to the legal avenues available to law enforcement for obtaining information. If material evidence exists that could help prove a case, of course law enforcement should want it. As Benton County Prosecutor Nathan Smith said in a statement released by his office, “As with any case, our obligation is to investigate all of the available evidence, whether the evidence proves useful or not.”
Amazon wrote in its motion papers that the state must be required to make a “heightened showing of relevance and need” for any recordings that are not available from other sources, and demonstrate that there is a compelling nexus between the criminal investigation and the requested data.
But is that enough?
“People don’t fully process that when you’re using smart devices to query some external entity, the information doesn’t stay in the home. And I think that’s a problem,” says Andrew Ferguson, a national expert on predictive policing and the Fourth Amendment. “I don’t think anyone who buys an Echo system is considering the Fourth Amendment, although perhaps they should be.”
With the growing prevalence of smart devices, where third parties gather and sort information on cloud-based servers, it is in the type of overbroad request made in the Bates case – to obtain and use any potentially recorded audio – that the larger, and unresolved, problem lies.
The Supreme Court has yet to consider a case that specifically addresses whether, in an era of modern technology where we regularly choose to give personal data to third parties, a person should have an expectation of privacy in that information. As the law stands, once information is voluntarily disclosed to a third party, that person does not. One case currently pending at the Supreme Court may tee up the issue of the Third Party Doctrine in the digital age, but until the Court takes on such a case, this premise holds true.
Arkansas law enforcement may have hoped that Bates’ smart speaker recorded the murder. So far, the data Amazon turned over does not appear to have strengthened the prosecution’s case.
As for the rest of the Bates case? Court records suggest that other smart devices implicated James Bates in the crime. Utility reports from a smart water meter showed a dramatic increase in use during the early morning hours. This could support a theory that, after committing the murder, he hosed down the patio and washed away any remaining blood. The case was last adjourned for a continuance of the discovery hearing and is next calendared for July 27th, 2017.
With so much being recorded, our smart homes will soon be able to paint fully detailed pictures of our private lives – and while the expectation of privacy is nowhere greater than in the home, privacy is not an enumerated right.
“It’s difficult to focus on the expectation of privacy with a technology that, almost by definition, is exposing private information. The doctrine kind of falters in the face of these new self-revealing technologies,” says Ferguson.
It’s interesting to try to make new technology and an old doctrine fit, but perhaps the correct approach to this new frontier of law is acknowledging that they don’t.
“Even in the home, we might think we’re having a private conversation, but in the long run, we have no idea what microphones are trained on us,” says Boneh. “Technologies are being used for different purposes than originally intended.”
Reidenberg has a similar opinion: “We have a real problem: there’s no such thing as an assurance of privacy. You have no idea what devices are out there – a cell phone recording or a third party listening.”
It seems the one thing technologists and lawyers alike agree on is that the “right” to privacy could be overcome by technology very soon. The danger is that the new standard will become: You have the right to remain silent, but your smart home does not.