AI Safety Improvement Process Framework

In the short history of autonomous vehicle accidents, most of the problems have involved either the AI itself or the systems that interact with it. One example is Google's collision with a bus: the AI perception correctly detected the bus, but downstream functions applied an assumed deceleration rate more typical of cars to the predicted bus trajectory. Another example is Uber's decision to reduce braking commands and disable Automatic Emergency Braking on the assumption that this functionality was already duplicated by the AI elements and the safety driver, an assumption that was later violated when a filter was added to discard suspected false positives.

This project will develop and evaluate a hazard analysis method that engineers can use to identify and prevent such unsafe engineering decisions, including flaws in design, requirements, and assumptions. The scope covers not only the AI process itself but also the machines and operators that make decisions across the full lifecycle: initial AI setup, training, real-time deployment, individual and crowd-sourced learning updates, disposition, regulation, and infrastructure.

Key Personnel

John Thomas