MINNEAPOLIS, Minnesota. According to Wired, Waymo, the self-driving car company spun out of Google, recently received a California DMV permit to test its vehicles on the road without a human driver behind the wheel. It is a big step for autonomous vehicle research and development. Yet the question remains: are we really ready to let artificial intelligence programs drive our cars? Is the technology robust enough to be safe on public roads without a human babysitter? Given recent reports of accidents, there is good reason to be vigilant, even as we watch with optimism to see where this research leads.
According to one writer for the New York Times, artificial intelligence programs lack a real understanding of the systems they navigate and therefore continue to make major errors, some of them catastrophic. When AI systems work well, they can almost work too well, completing the narrow tasks they are designed for with human-like precision; AI has already been used successfully in chatbot software, for example. That reliability can create a sense of complacency. Yet when AI systems fail, they often fail spectacularly, and with devastating consequences. We have all by now read about how glare led to the failure of Tesla's Autopilot system in Florida, and how a series of failures led to the deadly pedestrian crash in Arizona involving an autonomous vehicle.
Because AI systems lack self-awareness, they sometimes fail when they encounter unfamiliar situations. Driverless cars, for example, have trouble with roundabouts and with situations involving ambiguity. The Times also notes that self-driving cars could be vulnerable to hackers, and in some cases they may not even need a hacker to throw the system off: even modifications to street signs by graffiti artists can confuse a car's software. This is, of course, a technology that is constantly developing and improving, with some of the top talent in artificial intelligence leading the projects. Those looking to help advance the technology may want to explore career opportunities here – https://torc.ai/careers/.
Finally, AI systems cannot read the subtle cues that human drivers can: a glance from a pedestrian crossing the street, or the wave of a hand from a driver telling a bicyclist it's okay to turn. As Salena Conner of Utah United (http://www.utahunited.org/) puts it, “What we have right now is a narrow AI, or AI that is fantastic at completing a single or limited set of tasks.” AI systems are not humans. When they fail, they fail the way computers fail.
Personal injury claims against AI systems remain uncharted terrain. Who will be to blame if an AI system fails and someone is injured? The computer programmer? The car manufacturer? The company that sold the car? Or will these systems still require some kind of human oversight, much as aircraft autopilot systems do?
Only time will tell.
Until then, drivers remain responsible for their actions behind the wheel. Distracted driving, drunk driving, and human error remain the leading causes of vehicle accidents. The Law Office of Martin T. Montilino is a personal injury law firm in Minneapolis, Minnesota, closely watching how artificial intelligence will change the way we drive and the way accidents are prevented. Our firm knows all too well how devastating car accidents can be. If you or a loved one has been hurt in a crash, visit us at https://martinmontilino.com/ to learn more about your options and rights under the law.
THE LAW OFFICE OF MARTIN T. MONTILINO, LLC
3109 Hennepin Avenue South
Minneapolis, MN 55408
Phone: (612) 236-1320