Law and Autonomous Vehicles

By Michael Angiulo, Friedman | Rubin, PLLP

This article was originally published on Dec. 6, 2021, in Mich. St. L. Rev.: MSLR Forum. www.michiganstatelawreview.org/vol-2021-2022/2021/12/3/law-and-autonomous-vehicles

INTRODUCTION

The law is not ready for self-driving cars. While most of the anticipated legislation is concerned with questions of safety or privacy, the introduction and proliferation of autonomous cars will present novel legal questions in negligence and product liability cases, especially challenging the framework for agency. Although these are important issues for civil cases, the implications for criminal justice are the most urgent. Constitutional cases relating to automotive stops establish the procedures and frameworks that protect our Fourth and Fifth Amendment rights.

Legal analogies can only be stretched so far before they break. The Court recognized this breaking point in Carpenter v. United States when it refused to extend the third-party exposure doctrine to cell site location information (CSLI) in the context of privacy. While the result was thoughtfully applied to a single scenario, the unwillingness to “embarrass the future” prevented the Court from creating useful, forward-looking rules of general applicability. In the absence of comprehensive legislation, we should expect to see a ruling attempting to analogize a runaway horse-drawn carriage to a scenario involving a driverless car. That ruling, at best, will be as disruptive as Carpenter, but it will affect a broader range of fundamental rights.

Cars today already have some “autonomous” features like cruise control, and they are controlled by many microprocessors and software. None of this progress has caused significant legal issues. The move to fully autonomous vehicles, though, will cause discontinuities in both how cars work and how they are used. These changes will stretch existing legal analogies and doctrines to their breaking points, yielding unpredictable application of criminal procedure, disproportionate impacts on both affluent and disabled operators, and unwanted tradeoffs between fundamental rights and convenience.

WHY AUTONOMOUS VEHICLES ARE DIFFERENT

Cars of the future will not have steering wheels. They will be built on software platforms that capture information from a broad array of optical, radar, and ultrasonic sensors. Someone, typically a single operator, will command the vehicle to proceed to a destination. The software will use artificial intelligence, including unsupervised learning algorithms, to plan routes, navigate traffic, and respond to unforeseeable conditions. Usually, the car will operate perfectly—following speed limits and traffic rules with inhuman precision. At other times, it will operate unpredictably, at least as far as the occupants are concerned, choosing routes optimized for safety or efficiency, without explanation. Every moment of every trip, taken by every similarly designed car, will be captured in detail, and will be combined into training data to improve future performance.
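To make that architecture concrete in software terms, the following is a minimal sketch, assuming a hypothetical, toy rule-based planner and illustrative field names; it does not reflect any manufacturer’s actual system, and it stands in for the learned models described above. It shows only the basic loop in which sensor frames arrive, a decision is made, and both the inputs and the decision are logged for later training and, potentially, for discovery.

```python
# Hypothetical sketch only: illustrative names, toy logic, no real vendor API.
from dataclasses import dataclass, field
from typing import List


@dataclass
class SensorFrame:
    timestamp: float            # seconds since trip start
    camera_objects: List[str]   # e.g., ["pedestrian", "stop sign"]
    radar_range_m: float        # distance to nearest obstacle, in meters
    speed_limit_mph: int


@dataclass
class Decision:
    timestamp: float
    target_speed_mph: float
    maneuver: str               # e.g., "maintain", "brake"


@dataclass
class TripLog:
    frames: List[SensorFrame] = field(default_factory=list)
    decisions: List[Decision] = field(default_factory=list)


def plan(frame: SensorFrame) -> Decision:
    """Toy stand-in for the planner: obey the speed limit with machine
    precision, and brake for close obstacles or pedestrians."""
    if frame.radar_range_m < 10 or "pedestrian" in frame.camera_objects:
        return Decision(frame.timestamp, 0.0, "brake")
    return Decision(frame.timestamp, float(frame.speed_limit_mph), "maintain")


def drive(frames: List[SensorFrame]) -> TripLog:
    """Process each sensor frame and record both the inputs and the resulting
    decision; in the scenario described above, the complete log would later be
    uploaded and combined into training data."""
    log = TripLog()
    for frame in frames:
        log.frames.append(frame)
        log.decisions.append(plan(frame))
    return log


if __name__ == "__main__":
    trip = drive([
        SensorFrame(0.0, [], 120.0, 35),
        SensorFrame(1.0, ["pedestrian"], 8.0, 35),
    ])
    for d in trip.decisions:
        print(d)
```

The point of the sketch is the logging, not the driving: every input and every decision is retained by design, which is the feature the remainder of this article treats as both a privacy risk and a source of evidence.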

These changes in how cars work will lead to changes in how cars are used. A vehicle that can operate autonomously does not need to wait in the parking lot while you sit at work all day. It can run errands or be shared with other users. Fleets of vehicles can be used in aggregate to reduce traffic and parking congestion while reducing the cost of ownership for individual drivers. Autonomous vehicles can drive through the night, allowing passengers to sleep, even during charging stops. While law enforcement has faced pushback over the use of drones for persistent surveillance, it is unlikely that the presence of unoccupied police vehicles, especially when unmarked, will even be noticed.

CHALLENGES WITH APPLYING PRECEDENT

These changes in how cars work and how they are used undermine the principles and logic that justified many controlling cases, especially those on which police rely when making pretextual traffic stops, which disproportionately affect Black male drivers.1

Furthermore, these issues will have a disproportionate impact on different classes of people, whether they are affluent people able to be early adopters of expensive technology or disabled people who will be the first to rely on these vehicles as a mobility-enhancing necessity.

A. Predictable Behavior

Police can generally find reasonable articulable suspicion to stop a vehicle by simply following it in traffic for long enough. In Whren v. United States, the Court explained that being followed by the police for fifteen minutes may feel like a seizure, but it is not—fifteen minutes is usually enough to establish the necessary suspicion for even a pretextual stop.2 However, what if an autonomous vehicle is incapable of making a mistake? An expensive vehicle capable of following traffic rules with precision can effectively create a shield from the most common police tactic for justifying a stop. This will inevitably lead to an increase in socioeconomic and racial disparities in stops.

In response, police might find ways to exploit the predictable behavior of vehicles to expose drivers to stops that would have otherwise been unlawful. For example, a vehicle could respond to police lights by pulling over and remaining there until the lights are turned off. In Illinois v. Caballes, the Court held that a traffic stop could not be unreasonably prolonged to wait for a drug-sniffing dog.3 Imagine that the officer verbally told the occupants they were free to go, but left his police vehicle in position with the lights on, so that the autonomous vehicle would simply stay put indefinitely. Has the officer extended the stop? Would it matter if the Court was unwilling to consider the subjective intent of the police? A disabled occupant could be subject to a search that would have been unreasonable in the absence of the technology. In a contrasting example, the Court in United States v. Drayton has said that even the most minor gesture in response to an officer’s request can constitute consent to a search. Imagine an officer approaches a vehicle and asks the passenger, “may I open the trunk?” If the trunk opens automatically in response to a voice command, is that consent?

Furthermore, in Navarette v. California, the Court held that information regarding a vehicle’s specific description, place, and time can be treated as reliable eyewitness knowledge.4 In that case, even with “unimpeachable” driving, the police had reasonable articulable suspicion to make a stop. If these new cars are predictable, the risk of revenge tips, as Justice Scalia warned in his dissent, will likely increase.

B. Unpredictable Behavior

Autonomous vehicles will make countless momentary decisions that cannot be explained without deep forensic analysis. Some of these decisions may include simple tasks like traffic routing, but some will be completely new.

In Illinois v. Wardlow, the Court recognized that fleeing in a “high-crime” area affects the analysis of reasonable articulable suspicion.5 If a car chooses such a route, should it matter? Anyone who has taken a traffic-optimized route using a product like Waze has found themselves driving in an unusual place for the first time. Having that fact serve as a component supporting reasonable articulable suspicion is problematic.

In addition, autonomous vehicles can negotiate with one another to yield the right of way or can choose to follow each other extremely closely, creating a highly efficient virtual train that minimizes aerodynamic drag. The propinquity argument established in Maryland v. Pringle was applied to a group of individuals in a car that contained drugs.6 Should a set of vehicles, obviously travelling in unison, be considered proof of a common enterprise? What if there were three, and the vehicle in the middle was unoccupied but full of drugs?

C. Data Retention

Autonomous vehicles will use arrays of sensors, both inside the vehicle and out, to inform the controlling software of every variable. These sensors will include cameras that capture high-resolution visual images as well as radar and lidar sensors that can detect and measure movement. All of this data will be processed on board the vehicle, and much of it will be transmitted back to the manufacturer. The data captured, processed, stored, and transmitted by an autonomous vehicle is many thousands of times more comprehensive than what phones can capture today. Some of the sensors may even fail the Kyllo v. United States test for not being in “general public use,” at least during the early adopter stage.7

In Riley v. California, the Court held that police need a warrant to search a cell phone, unless exigent circumstances apply.8 Certainly, the data stored in an autonomous vehicle would be considered extensive and personal, as it likely captures not only travel information, but also conversations that happen within the vehicle.

But the sheer quantity of the data means that the system will be constantly overwriting and deleting contents. Will the impending loss of data create an automatic exigency? The Court did not address this directly, as cell phones can be turned off or stored in transmission-resistant bags while obtaining a warrant, whereas cars cannot. Furthermore, it is common police practice to download the contents of the event recorder (“black box”) in an accident investigation. Is this practice still constitutional, given the new quantity and quality of captured data? If law enforcement is justified in gathering this data from accident vehicles, should that extend to nearby, but uninvolved, autonomous vehicles that could provide the testimony of a “super eyewitness”? The Court in California v. Byers held that compelling a hit-and-run defendant to stay at the scene was not a violation of the privilege against compelled self-incriminating testimony because the need to regulate traffic safety must be balanced against constitutional protections.9 Perhaps that same logic could be extended to compel these new vehicles, as “super witnesses,” to testify. Or the state may argue that the safety risks posed by autonomous vehicles render their operation a “closely regulated” industry as defined in New York v. Burger.10 In any case, given how comprehensive the data collection will be, there will always be a reasonable basis to believe that evidence of almost any traffic arrest will be in the car, thus supporting the Scalia prong of Arizona v. Gant.11
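For readers who want the overwriting problem made concrete, the following is a minimal sketch, assuming a hypothetical fixed-capacity event recorder rather than any real manufacturer’s design. It shows how the oldest data is silently evicted as new frames arrive, without any action by the occupant or by police, which is the mechanical source of the “impending loss of data.”

```python
# Hypothetical sketch only: a toy fixed-capacity recorder, not a real design.
from collections import deque


class EventRecorder:
    def __init__(self, capacity_frames: int):
        # A deque with maxlen drops the oldest entry automatically when full.
        self.buffer = deque(maxlen=capacity_frames)

    def record(self, frame: dict) -> None:
        """Append a frame; once at capacity, the oldest frame is overwritten,
        i.e., potential evidence is lost with no one taking any action."""
        self.buffer.append(frame)

    def snapshot(self) -> list:
        """What a forensic download would capture at this instant."""
        return list(self.buffer)


if __name__ == "__main__":
    recorder = EventRecorder(capacity_frames=3)
    for t in range(5):
        recorder.record({"t": t, "speed_mph": 35 + t})
    # Frames 0 and 1 have already been overwritten by the time of download.
    print(recorder.snapshot())
```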

It is likely that this extensive privacy risk will first be exposed, if not tested, in a border crossing case. Under United States v. Flores-Montano, if the police can disassemble a gas tank without reasonable suspicion, they will likely be able to inspect the contents of the computing subsystems that expose every trip every passenger has ever taken in the car since it was built, and potentially even trips that other cars have taken which have been included in the vehicular training data.12 The safety risks involved in vehicular data may make its inspection more “tethered to the government’s interests,” and the contents are less likely to run afoul of the First Amendment protections articulated by the First Circuit in Alasaad v. Mayorkas.13

D. Fleet Uses

Because the cars will be able to reposition themselves without drivers, there will be new, and much more efficient, ways to own and use them. These include shared pools of autonomous vehicles, including use as public transit.

Under Chandler v. Miller, the Court explained that where the risk to public safety is substantial and real, blanket suspicionless searches calibrated to the risk may rank as reasonable if done for safety purposes.14 Lower courts have found that suspicionless searches in airports, on subways, and on ferries can be reasonable. None of these cases held that more than a single person needed to be in a public transit vehicle for a search to be reasonable. If the government operated a fleet of autonomous vehicles as a component of a mass transit program, would that mean that, under Chandler, every occupant in every vehicle, even if alone, could be stopped and searched? Would it matter if there was a credible threat that someone in a vehicle was carrying a bomb?

In the private sector, it is likely that these vehicles will either be shared as a pool or will be temporarily sub-leased by individuals to defray ownership costs. Under United States v. Matlock, where one party has joint access or control for most purposes, he can consent to a search for evidence against another.15 While that holding was applied to homes, could it extend to shared autonomous vehicles? Perhaps not, but two lines of cases may intersect and cause a new problem. Under California v. Carney, the test for whether a vehicle is a home considers whether the vehicle is readily mobile, on public streets, and being used as a home.16 But a new class of recreational vehicles, designed for long-term sleeping and able to stay in unattended motion, may change the analysis. Will the owner be considered merely like the hotel clerk in Stoner v. California,17 or can he give consent to a search while a joint user is renting? While it may seem like a stretch to imagine a subscription to a shared fleet of autonomous homes, one only needs to think back a decade to realize what a stretch it would have been to consider the evolution of cell phones to where they are today.

CONFUSION IN COURT

In the absence of legislation, courts will have no choice but to stretch these existing analogies to fit new fact patterns. In United States v. Pritchard, the Seventh Circuit explained that a magistrate’s determination of probable cause should be “given considerable weight.” However, when dealing with technology, it may be unwarranted to rely on the common sense or judicial experience of a magistrate to determine what is reasonable. Just this year, the USPTO wrestled with the question of how to treat Artificial Intelligence (AI) in the cases of the DABUS Patents, where an applicant attempted to file a patent for an invention created by AI. Although the Agency held that AI had no “legal personality” enabling it to qualify as an inventor, these issues are only just being argued for the first time. Especially with respect to cases with criminal consequences, it is unlikely that the lowest level of our judiciary is the place to establish findings that deserve any deference at all.

In addition to magistrates who, as a class, do not tend to be on the leading edge of technology adoption, police officers will be excused for arrests where they can find alternative justifications after the fact, even when no law was violated, as in Devenpeck v. Alford.18 And under Heien v. North Carolina, police can be excused for any reasonable mistake of law.19 Given that each state will regulate autonomous vehicles differently, these mistakes will be common, if not inevitable, even though many of these cases should not be turning on the vagaries of state laws in the first place.20

Even less predictable are legal tests that rely on community standards, such as the reasonable expectation of privacy test in Katz v. United States, under which a court would still have to grapple with whether there is an “expectation that society is prepared to consider reasonable.”21 Where society at large has not experienced and does not understand the technology, this becomes challenging, if not impossible.

Autonomous cars will cause problems for cases even when they are not at the center of the controversy. Take, for example, the case where a sobriety roadblock is established and the police are going to stop every fifth car. Under Michigan Department of State Police v. Sitz22, this program is permissible, as the police are not engaging in a random suspicionless stop, which is prohibited under Delaware v. Prouse23. But what if, for example, the fifth car in line were autonomous, with only a sleeping passenger in the back? Assuming that the occupant cannot be guilty of a DUI, should the police stop it anyway? Or should they skip it and stop the sixth driver in line? Either decision may be problematic, and circuits will inevitably split. Or consider the use of a new stingray that allows communication with, and potentially even control of, a vehicle. Will that be a meaningful interference with the possessory interest in the car, as was the case in United States v. Jones24, or would it matter that the interception was just of metadata being shared with a third party?

CONCLUSION

Just as our constitutional rights should not turn on the vagaries of state laws or private contracts, such as the terms of use for the vehicle, they should not passively drift in reaction to the advancement of technology. In the absence of comprehensive legislation, the Court has found it necessary to declare limits to the erosion of constitutional protections when common behavior changes. Today, we often trade our privacy rights for convenience when we use mobile devices and internet services, and the Court has responded with cases like Carpenter.

Autonomous vehicles will create the same issue, where safety and convenience are offered only in trade for infringements of our constitutional rights.

The worst way to adapt the law to new technology scenarios is to wait for the law to break down. Judge-made law is too slow, too imprecise, and too difficult to apply to rapidly evolving fact patterns. The better way is through comprehensive legislation. A lot of effort is being devoted to regulatory frameworks for autonomous vehicles, but those efforts prioritize safety issues over these second-order effects on criminal procedure. Legislation like Title II or the Stored Communications Act (SCA) offers good examples that should be directly extended to cover the kind of data collected, processed, and transmitted by these cars. But it will not be enough. I think the patent office got it wrong, and we should do something more transformative. We need a legal framework that treats Artificial Intelligence as a person, capable of forming independent intent. While that sounds radical, it is less of a stretch of legal fiction than treating a corporation as a person, especially in the context of insulating the rights of citizens from encroachment by technology.


Michael Angiulo has led an impressive career. Mr. Angiulo is a mechanical engineer and former Microsoft Corporate Vice President who led the Microsoft Surface and Xbox hardware teams. After twenty-five years in product development, he earned his J.D. with High Honors from the University of Washington and joined Friedman Rubin PLLP as Of Counsel. Mr. Angiulo’s practice at Friedman Rubin PLLP currently focuses on litigating accidents involving complex mechanical, software, and systems failures. Mr. Angiulo is also a Senior Executive Advisor with Envorso, helping companies develop the software necessary to create the next generation of autonomous vehicles.

1 See generally Kelsey Shoub et al., Race, Place, and Context: The Persistence of Race Effects in Traffic Stop Outcomes in the Face of Situational, Demographic, and Political Controls, 5 J. Race, Ethnicity & Pol. 481 (2020).

2 See Whren v. United States, 517 U.S. 806, 810–13 (1996).

3 See Illinois v. Caballes, 543 U.S. 405, 409–10 (2005).

4 See Navarette v. California, 572 U.S. 393, 398–401 (2014).

5 See Illinois v. Wardlow, 528 U.S. 119, 124 (2000).

6 See Maryland v. Pringle, 540 U.S. 366, 372 (2003).

7 See Kyllo v. United States, 533 U.S. 27, 34 (2001).

8 See Riley v. California, 573 U.S. 373, 387–88 (2014).

9 See California v. Byers, 402 U.S. 424, 428–31 (1971).

10 See generally New York v. Burger, 482 U.S. 691 (1987).

11 See Arizona v. Gant, 556 U.S. 332, 352–54 (2009) (Scalia, J., concurring).

12 See generally United States v. Flores-Montano, 541 U.S. 149 (2004).

13 See generally Alasaad v. Mayorkas, 988 F.3d 8 (1st Cir. 2021).

14 See Chandler v. Miller, 520 U.S. 305, 323 (1997).

15 See United States v. Matlock, 415 U.S. 164, 169–71, 179 n.14 (1974).

16 See California v. Carney, 471 U.S. 386, 392–95 (1985).

17 See generally Stoner v. California, 376 U.S. 483 (1964).

18 See generally Devenpeck v. Alford, 543 U.S. 146 (2004).

19 See Heien v. North Carolina, 574 U.S. 54, 57 (2014).

20 See, e.g., Virginia v. Moore, 553 U.S. 164 (2008).

21 Katz v. United States, 389 U.S. 347, 361 (1967) (Harlan, J., concurring).

22 See generally Mich. Dep’t of State Police v. Sitz, 496 U.S. 444 (1990).

23 See generally Delaware v. Prouse, 440 U.S. 648 (1979).

24 See generally United States v. Jones, 565 U.S. 400 (2012).