The tragic death of the driver of a Tesla Model S electric sedan, reported on June 30, is a sharp reminder that although autonomous vehicles are ultimately expected to be safer than their non-autonomous counterparts, they will never be completely safe.
While this accident resulted from the car failing to apply the brakes before a crash, we should also consider that an autonomous vehicle may one day cause a death after making a "correct" decision – one it was intentionally programmed to make. In such a case, the term "accident" may not even apply.
If such a scenario does take place, what impact will it have on autonomous cars and their public image? What impact will it have on legislators?
In May, Wikistrat ran a simulation in which more than 50 experts were challenged to respond to a scenario in which an autonomous vehicle is involved in a fatal incident caused not by externalities or an internal malfunction, but because of the car behaving exactly as intended – i.e., making the “correct” decision in a particular situation.
The purpose of the exercise was to stress-test the risks and potential issues associated with the chosen scenario, understand how they might spiral, and forecast how they might ultimately unfold. The exercise identified issues, as well as blind and soft spots, to which future autonomous vehicle manufacturers should be paying attention today.
In the simulation, the analysts were divided into four teams across two mirrored groups – one group playing an unidentified corporation that produces autonomous vehicles, the other playing the California State Legislature. We assigned two teams to each actor to see whether different decisions would lead to different results – which they did.
Click here or on the cover image to download the report and learn more.
About the author
Wikistrat Research and Projects Team Leader