Friday 4 May 2018

What Would Captain Kirk Do?


“Suppose you are in a self-driving car going across a narrow bridge, and a school bus full of children hurtles out of control towards you. There is no room for the vehicles to pass each other. Should the self-driving car take the decision to drive off the bridge and kill you in order to save the children?”
-- Steven Poole, Rethink

Like Captain Kirk, I don’t believe in the no-win situation, so the hypothetical scenario presented here by Steven Poole (and first posed by Professor Gary Marcus in Moral Machines) only looks tricky. However, by asking ourselves “What would Captain Kirk do?” we can see that there are several possible solutions that don’t involve us or the bus ending up dead. Captain Kirk, after all, has some experience in defeating renegade computers and artificial intelligences.

Perhaps the most obvious way out is to program your self-driving car to not self-drive across narrow bridges, thus avoiding this situation in the first place. If a self-driving car cannot be trusted to go over a narrow bridge without the distinct possibility of it plunging over the side in an unnecessary act of self-driving-sacrifice, then I suggest it should avoid narrow bridges altogether.
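
If you wanted to spell that policy out for the car’s route planner, it might look something like the minimal sketch below. Everything here (the RoadSegment class, the narrow-bridge flag, the road names) is my own invention for illustration, not anything from Poole’s book or a real navigation system.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class RoadSegment:
        name: str
        length_km: float
        is_narrow_bridge: bool = False

    def safest_route(candidates):
        """Pick the shortest candidate route that avoids narrow bridges entirely."""
        safe = [route for route in candidates
                if not any(seg.is_narrow_bridge for seg in route)]
        if not safe:
            raise RuntimeError("No bridge-free route found; wait for a human driver.")
        return min(safe, key=lambda route: sum(seg.length_km for seg in route))

    # Two candidate routes: one over the narrow bridge, one the long way round.
    over_bridge = [RoadSegment("High Street", 1.0),
                   RoadSegment("Old Narrow Bridge", 0.1, is_narrow_bridge=True)]
    long_way = [RoadSegment("High Street", 1.0),
                RoadSegment("Ring Road", 4.5)]

    print([seg.name for seg in safest_route([over_bridge, long_way])])
    # -> ['High Street', 'Ring Road']

Note that the long way round wins even though it is several kilometres longer; dullness is a feature here, not a bug.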

It is also worth considering some of the unspoken assumptions behind the scenario. Why should we think that the only two options are to drive off the bridge or to collide with the bus? Why doesn’t the self-driving car simply reverse out of the way of the oncoming bus? Are we supposed to conclude that your self-driving car is going too fast to allow this? As a cat, I have only limited experience of crossing narrow bridges in cars, but I think that normally you cross them very slowly and carefully, precisely because of the possibility of something coming the other way.
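
To put that in the self-driving car’s own terms: the dilemma only exists while the menu of actions has exactly two items on it. The moment “reverse slowly” is allowed as an option, the least harmful choice is no longer a choice between two crashes. Here is a toy sketch of that idea; the harm numbers are entirely made up by me and stand in for whatever a real system would estimate.

    # Each action gets a (made-up) expected-harm score for the occupant
    # and for the bus full of children.
    ACTIONS = {
        "drive_off_bridge": {"occupant": 1.0, "bus": 0.0},
        "collide_with_bus": {"occupant": 0.5, "bus": 1.0},
        "reverse_slowly":   {"occupant": 0.0, "bus": 0.0},
    }

    def least_harmful(actions):
        """Choose the action with the smallest total expected harm."""
        return min(actions, key=lambda name: sum(actions[name].values()))

    print(least_harmful(ACTIONS))  # -> reverse_slowly

Restrict the dictionary to its first two entries and you get the philosopher’s dilemma back; the trick was in the restriction, not in the ethics.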

And finally, if we examine the exact wording of the problem as Steven Poole presents it, we see that he asks “Should the self-driving car take the decision to drive off the bridge *and* kill you in order to save the children?” (my emphasis). This makes it sound like these are two separate steps, not causally related. Why can the self-driving car not do the first part without the second – drive off the bridge but not then proceed to kill you? Is killing you an unavoidable consequence of your self-driving car going off the bridge, or is it a calculated action on the part of the self-driving car to prevent you from suing the manufacturers for ending up wet in a ditch with a no-longer-self-driving car?

So what would Captain Kirk do? Well, judging by his past performance, I think he would hack into the murderous self-driving car’s memory banks and reprogram it to reverse out of the way of the bus, after which the mad genius who programmed it to try to kill Captain Kirk after first driving him off a bridge would say “No, stop, I created you!” before getting run over by his own creation. Meanwhile Captain Kirk would have gotten on board the school bus and wrestled it back under control, saving the day. And then he, Mr Spock and Dr McCoy would all beam back to the Enterprise for a final scene of comic misunderstanding at Mr Spock’s expense.
