Recently I dreamed that I had fallen asleep in the middle of the night in the back of a recreational vehicle (RV) that was cruising down an empty highway. I subsequently woke up to find the RV owner moving to the back and joining me in my slumber. I objected as no one was driving the RV. The owner, however, explained that we would be fine with the RV on auto-drive. I grudgingly accepted the explanation and went back to sleep. Would you have gone back to sleep?
It was, however, a two-act dream. In Act 2, I awoke to find that it was evening and the RV had entered a village crowded with pre-Christmas pedestrians crossing the road at random places, while giant snowflakes fell. The driver was asleep. I thought about how the sensors in my own non-auto-drive car frequently malfunction in snow, the somewhat slick road, the braking distance of an RV, and the potential that we would kill children. There was no time to wake the owner. I had to decide whether I, an inexperienced RV driver with a half-asleep brain, should take control of the vehicle – if only to get it pulled over – or whether I should trust the “intelligent system” to get us through the village without incident and onward to our destination. I deferred my decision by truly waking up. What would you do in such a situation?
You and I will ultimately need to make decisions to trust or override artificial intelligence (AI), just as the pilots of Boeing 737 Max planes already have. Two Boeing 737 Max flights ended tragically for their pilots and passengers.
I am a data lover and believe that AI has improved, and will continue to improve, our lives and safety, including via autonomous vehicles. But this dream left the fully-awake me with public health and policy questions:
- Will AI have the situational awareness to know its limits?
- Will AI take into account the cost of a bad decision such as the particular horror of killing a child?
- Will humans be awake when it is necessary to override AI?
- Will humans have the skills and confidence to override AI?
- Will AI let humans override AI? (It appears that the pilots in the Ethiopian Airlines Boeing 737 Max crash tried and could not override the automated system.)
- Whom will we hold responsible when AI fails?