How Many Times Has Your AI System Broken the Law?

By Alan Brill, Senior Managing Director, Cyber Risk Practice, and Institute Fellow, Kroll

Developing an artificial intelligence (AI) system is an intensive process. The first step is figuring out what the system should do and how it should be done. It’s important to determine how the software can evolve based on both the training data sets provided and what the system’s experience “teaches” it.

In many cases, the company’s legal department is involved in drawing up contracts involving the system—such as for software licensing or professional services, like development specialists or an outsourced development effort. Legal involvement is vital, because there are important elements—like ownership of the final product—that need to be correct right from the start.

But after that, it often seems like the lawyers largely go away until there is a problem. That may not be the best idea. A company’s lawyers (or external counsel) bring a different point of view and will raise important questions that otherwise would not be asked until a problem with an obvious legal issue arises.

Systems exist in the real world, and the real world has laws.

If a company were to ask the question, “Should our new AI system follow the laws that are out there in the real world?” most people would answer yes.

But what if that question was never asked or never considered, or the developers had their own definition of what a law required? Would the system then not follow the law exactly, but instead do what the developers think is “close enough”?

Consider an issue earlier this year involving some electric vehicles running a test version of what was called a self-driving mode, but which the manufacturer said still required an operator to oversee the operation of the vehicle.

The law is clear—when approaching a stop sign, come to a full stop, check for traffic coming from another direction and proceed past the sign once it is safe to do so. Anyone who has ever been stopped or ticketed by the police might be told that “you didn’t actually stop,” or that you made a “rolling stop.” The legal requirement should be obvious: a stop sign means that you must come to a complete stop. For example, California Vehicle Code Section 22450(a) is pretty clear: “The driver of any vehicle approaching a stop sign at the entrance to, or within, an intersection shall stop at a limit line, if marked, otherwise before entering the crosswalk on the near side of the intersection.” There are no exceptions for cars where the “driver” is arguably an AI program.

In a now-modified version of the software, the development team had programmed the car so that, under certain circumstances, it was not necessary to come to a full stop when approaching a stop sign. Instead, the car could roll through the intersection. The manufacturer said certain conditions had to be met before the car would “decide” not to come to a full stop. According to the National Highway Traffic Safety Administration’s (NHTSA) Safety Recall Report 22V-037, those conditions included: the stop sign was at a 4-way intersection; the speed limit on the road was no greater than 30 miles per hour; the vehicle was not moving faster than 5.6 miles per hour; the system did not perceive vehicles, pedestrians or bicycles near the intersection; and the system determined that it had sufficient visibility before reaching the intersection. If all those conditions were met, the car could proceed without stopping, but no faster than 5.6 miles per hour.

It should also be noted that, following two meetings with NHTSA, the company involved voluntarily recalled the rolling-stop functionality through an over-the-air (OTA) software update.
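
To make that decision logic concrete, here is a minimal, hypothetical sketch in Python of the kind of condition check the recall report describes, alongside the behavior after the recall. The data structure, names and way the thresholds are encoded are illustrative assumptions made for this article, not the manufacturer’s actual code.

```python
# Hypothetical illustration only; not the manufacturer's code.
from dataclasses import dataclass

MAX_ROLLING_SPEED_MPH = 5.6   # rolling-through speed ceiling cited in the recall report
MAX_POSTED_LIMIT_MPH = 30     # posted speed-limit ceiling cited in the recall report


@dataclass
class IntersectionState:
    is_all_way_stop: bool        # the stop sign is 4-way
    posted_limit_mph: float      # posted speed limit on the road
    vehicle_speed_mph: float     # current vehicle speed
    hazards_detected: bool       # vehicles, pedestrians or bicycles perceived nearby
    sufficient_visibility: bool  # system judges it can see the intersection well enough


def rolling_stop_allowed(state: IntersectionState) -> bool:
    """Pre-recall logic: skip the full stop only if every condition holds."""
    return (
        state.is_all_way_stop
        and state.posted_limit_mph <= MAX_POSTED_LIMIT_MPH
        and state.vehicle_speed_mph <= MAX_ROLLING_SPEED_MPH
        and not state.hazards_detected
        and state.sufficient_visibility
    )


def stop_sign_behavior_after_recall(state: IntersectionState) -> str:
    """Post-recall (and legally compliant) logic: a stop sign always means a full stop."""
    return "FULL_STOP"
```

The point of the sketch is that the legal question is settled at design time: either the code contains a branch that permits skipping the stop, or it does not.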

The reality is that a sizable portion of drivers do not come to a complete stop, regardless of the conditions. I don’t believe there are many drivers who can say, “Yes, I always come to a complete stop at all stop signs, even in the middle of the night, when I can see that there are no cars, people or any other hazards.” Rolling stops happen in the real world—but do we want AI systems mimicking this human behavior of deciding which laws to follow and which to ignore? While no accidents were associated with the rolling-stop feature, what if one had occurred? Would the argument that most humans do not come to a full and complete stop, so self-driving vehicles should be allowed the same discretion, hold up in court? Would a jury find it persuasive? What if someone was injured?

In short, if there is a law, the AI system should be programmed to follow it, even if the developers believe that many people don’t comply. For example, if the speed limit on a road is 65, should a car’s AI system be able to use its sensors to determine that a speed of 90 is safe?

Location, Location, Location

Another consideration that AI systems for self-driving vehicles must address is location. In Florida, for example, one can lawfully make a right turn at a red traffic light after coming to a complete stop (unless there’s a “no turns on red” sign) if it is safe to do so. In New York City, on the other hand, right turns on red are prohibited unless there is a sign at a specific intersection allowing “right on red after stop.” Does an autonomous car need to be programmed for both laws and apply whichever one is appropriate, perhaps based on a GPS reading? Or do you simply say that the New York City law is an anomaly and not program the car to follow it?
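
The location question can also be sketched in hedged, illustrative code. The jurisdiction labels, the lookup table and the idea of deriving the jurisdiction from a GPS or map lookup are assumptions made for the example, not a description of any real vehicle’s software.

```python
# Hypothetical illustration only: jurisdiction-aware right-on-red rule selection.
# In a real system, the jurisdiction label would come from a GPS/map lookup.

RIGHT_ON_RED_DEFAULT = {
    "FL": True,    # Florida: permitted after a complete stop, unless signage prohibits it
    "NYC": False,  # New York City: prohibited unless signage explicitly permits it
}


def may_turn_right_on_red(jurisdiction: str,
                          sign_prohibits: bool,
                          sign_permits: bool) -> bool:
    """Posted signage overrides the local default; unknown jurisdictions fail safe."""
    if sign_prohibits:
        return False
    if sign_permits:
        return True
    # Fail safe: if the local rule is unknown, do not assume the turn is legal.
    return RIGHT_ON_RED_DEFAULT.get(jurisdiction, False)
```

Whether that table holds two entries or two thousand, someone has to decide that it must exist and keep it current, and that is exactly the kind of question legal counsel should be asked at the start of the project.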

While this is just one example, it illustrates that developing an AI system requires understanding the laws that apply to that system, taking into account the locations in which it will be used. For U.S. companies with operations in the European Union, making sure that the system complies with applicable EU laws and regulations is important, but those laws and regulations may be ones a U.S.-based development team is simply not aware of.

No system operates purely in “cyberspace”; every system has real-world consequences in real-world jurisdictions. Making sure that the development team is aware of all of the laws and regulations the system must comply with is therefore a basic requirement of any AI development project. Without guidance from competent legal counsel, that may not happen, and the consequences in terms of civil liability, reputational damage and even potential criminal liability can be substantial.
