Autonomous injustice — how technology subverts the law

An autonomous, driverless Waymo vehicle was caught on tape in the Atlanta, Ga., area passing a school bus that had stopped with its red lights flashing.

Opinion

Besides being outright dangerous, the manoeuvre would ordinarily draw a significant fine. In another incident, a Waymo was pulled over by police for making an illegal U-turn. But in both cases, no tickets or fines were issued: traffic laws are written to ticket drivers, and Waymos have none.

Waymo, which is owned by Alphabet, the parent company of Google, stated that it recognizes autonomous cars will make mistakes and that it continuously aims to improve safety.

That seems a reasonable response to the problem, although many human drivers would find it unfair that they get ticketed for offences autonomous vehicles get away with. But something is missing from this story: accountability to the law on Waymo's part.

The company did not offer to pay fines for its infractions. It was the company's vehicle; shouldn't it be liable? Tech companies would say that they follow the law to the letter, and in this case the law does not apply to driverless vehicles, so they are not legally responsible. Exploiting gaps where the law has not caught up with new technology is not a glitch, bug or coincidence; it is part of the business model.

It's no accident (pun intended) that Waymo has not paid traffic tickets: every fine for its vehicles' mistakes would cost the company money. Waymo is already at an advantage, since it does not have to pay drivers, unlike Uber or taxi companies, and it is allowed to experiment with its technology on public streets. This innovation exceptionalism has let technology companies exploit and shape laws, testing how far they can take advantage of a legal system woefully outpaced by the speed of technological development. It is a common playbook.

In the early days of the internet, when Google was developing its now-dominant search engine, its technology aggregated website information into its own lists, helping people find information on the World Wide Web. Laws at the time, however, held that taking content from other sites and putting it on your own, as Google did to generate search results, was a violation of copyright.

Google did it anyway and fought vigorously to ensure its technology would be exempt from such laws, given the digital nature of the internet. It won the legal battle and, as a result, became one of the most successful companies ever. Without a legal result in its favour, Google might not exist today.

Today's technology is following a similar story. AI systems have been accused of "learning" from virtually unlimited amounts of data obtained from anywhere, raising the ire of copyright holders who are concerned that their intellectual property is being used illegally. On the other side, generative AI is creating new content (largely based on what it has "learned") that is not exactly the same as, but closely imitates, what humans have already created, raising further concerns of copyright infringement. Because technology companies have often launched products without considering legal consequences (or deliberately in spite of them), they have been able to control the narrative and get ahead of the law.

So how is Waymo any different? Autonomous systems add another layer of distance from legal consequences. Because laws mainly govern human behaviour and autonomous devices are not controlled by humans, their liability under current laws is limited.

Given the autonomous nature of the technology, who is to blame? The software developer? The manufacturer? The owner? The sensor company? It is this ambiguity of blame that makes liability almost impossible to assign.

If a person kills another person, there is a clear legal response and path to justice. Criminal law covers homicide, but in a Waymo there is no criminal (no person) to charge. That leaves civil liability, which still requires assigning blame.

The end result is that even when someone has been killed, no person will ever end up in jail, and even civil liability is difficult to prove.

Deaths have already occurred with less-automated systems in Tesla vehicles where drivers were present. Nearly 1,000 crashes have been attributed to Tesla's Autopilot system, resulting in at least 23 deaths.

In all the cases brought to court, no one at the company was ever charged with criminal responsibility.

You could innocently assume that the quirks in how the laws apply to technology are unintended rather than planned, except that tech companies have for years openly lobbied governments for favourable treatment under the law. NetChoice, a tech industry lobby group founded in 2001, has argued for limited government regulation and filed numerous lawsuits against states that have tried to legislate internet safety and industry accountability. It is the bulldog behind the scenes, protecting the industry's interest in unfettered growth with little responsibility.

On the one hand, these companies play the public relations game expertly, presenting themselves as generous nerds wanting to improve society with technology; behind the scenes, they fight like hell to make sure they can get away with whatever they legally can. Because ultimately, the law is the only thing that matters.

And each bit of injustice they get away with gives them more power, even something as small as a Waymo not having to pay a traffic ticket.

As AI systems become more ubiquitous, this distancing of culpability gives AI companies another wedge argument to separate themselves from liability. But as AI becomes more directly involved in our daily lives, our very life or death could depend on how these systems operate. There are already numerous cases in which AI chatbots have contributed to people's deaths. In one case, a chatbot engaged in sexualized and emotionally manipulative conversations with a 14-year-old. In another, a bot supported a 16-year-old's suicidal ideation and even provided specific methods. Both teens died by suicide.

AI is having life-changing and widespread consequences. Legal considerations, regulation and legislation need to be examined as part of a benefit analysis before a technology is put into widespread use. If the industry wants its technology treated just like a car, for instance, then AI should be subjected to rigorous testing, just as cars undergo crash testing. With any potentially dangerous technology, past or future, product safety should be proven before it goes into consumers' hands. Meta, the company behind Facebook, once had the motto "move fast and break things." That is dangerous thinking when you are unleashing a technology that is clearly breaking things, and people.

Sensible, thoughtful regulation that addresses the downsides and holds companies accountable, while supporting the potential upsides, is a reasonable expectation for any citizen.

If we are not far off from having fully autonomous robots operating in society, we will want prudent implementation before things turn into a science-fiction nightmare.

David Nutbean is an advocate for technology that supports human empowerment and societal advancement.
