It’s no secret that the auto industry is slowly moving toward self-driving vehicles. From the moment Tesla (and others) introduced this technology, though, a debate began: Are self-driving cars, powered by computers and sensors, as safe as or safer than cars driven by humans? For years, Tesla and Google touted that throughout all of their testing, not one autonomous vehicle had been in an accident that was that car’s fault (i.e., any collisions were the fault of the other drivers). Proponents pointed to the safety of computer-driven vehicles, arguing that the system does not suffer from the same issues as human drivers, namely fatigue, distraction, and poor judgment. Suddenly, however, those years of studies and successes were turned on their head when the driver of a Tesla Model S electric sedan was killed in an accident while the car was in self-driving mode.
While federal regulators have opened a formal investigation, preliminary reports suggest the crash occurred when a tractor trailer made a left-hand turn in front of the Tesla and the self-driving car failed to apply the brakes. Tesla released a statement saying, “Neither Autopilot nor the driver noticed the white side of the tractor-trailer against a brightly lit sky, so the brake was not applied.” Does this mean that even a computer system can suffer the same “human error” that we do?
Technology is great, and it advances our lives in ways we never previously imagined. Perhaps the time has come, though, to take a step back and determine whether this technology has truly reached the point where it is safe for everyone on the road. The hope is that self-driving cars will reduce collisions that result in death and catastrophic injury, but this unfortunate incident, while only a single bad outcome, must raise some red flags.