Where Can I Take You? - Autonomous Cars

I will admit it: I do not drive, and I have no license. I had a permit once, drove my parents' pickup across the country, and decided that being in control of a two-ton death machine was not for me. With a fleet of tiny two-seat cars around town, with which I can do so much less damage to my surroundings, I'm tempted to ditch my bicycle and get behind the steering wheel again. The only thing holding me back is the firm belief that if I can hold out just a couple more years, the whole problem will be settled for me, and I can climb in behind the non-existent steering wheel of a fully autonomous vehicle.
The question that has given me the most pause in my love and longing for self-driving cars came up a couple of weeks ago, when I was perusing info on the new Tesla update ( https://www.youtube.com/watch?v=3yCAZWdqX_Y ): "If my self-driving car gets into a position where someone is going to die no matter what it does, who is it going to decide to save?" I love that, even though, yes, this is 100% a programming problem and not really an AI problem as such, we are still in the position of having to give our machines at least some extension of our ethics.
What was interesting to me was that the person posing the question wanted the car to save its passengers, even if that meant the bus full of schoolchildren goes pitching into the river. Programming these ethical dilemmas into the AI removes any trace of the in-the-moment what-would-you-do question and necessitates a clear look at our choices. I would prefer the car that saves the schoolchildren (my doubt in my ability to make that choice in a car-of-today is what's stopping me from driving now), but I can see the appeal for those who would choose the opposite. I just find it so interesting that WHAT we are telling our techno-servants to do IN THIS TIME is laying bare our own prejudices and desires so plainly, as their own metaphors.
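To make the "it's a programming problem" point concrete, here is a deliberately naive sketch, in Python, of what committing to such a policy in code might look like. Everything here is hypothetical (the function, the policy names, the toy numbers); no manufacturer's actual software is being described. The point is simply that the ethical choice has to be made by a programmer, in advance, and the car just executes it:

```python
# Hypothetical sketch: the ethical "policy" is chosen long before any
# crash happens -- the car only executes whatever rule we committed to.

def choose_maneuver(options, policy="minimize_total_deaths"):
    """Pick a maneuver from dicts shaped like
    {"name": ..., "passenger_deaths": ..., "bystander_deaths": ...}."""
    if policy == "protect_passengers":
        # Save the occupants first, worry about bystanders second.
        key = lambda o: (o["passenger_deaths"], o["bystander_deaths"])
    elif policy == "minimize_total_deaths":
        # Pure utilitarian count: fewest deaths overall.
        key = lambda o: o["passenger_deaths"] + o["bystander_deaths"]
    else:
        raise ValueError(f"unknown policy: {policy}")
    return min(options, key=key)

# Toy version of the blog's dilemma: swerve (and risk the passenger)
# or plow into the school bus.
options = [
    {"name": "swerve", "passenger_deaths": 1, "bystander_deaths": 0},
    {"name": "hit_bus", "passenger_deaths": 0, "bystander_deaths": 20},
]

print(choose_maneuver(options)["name"])                               # swerve
print(choose_maneuver(options, policy="protect_passengers")["name"])  # hit_bus
```

The two policies disagree on the very scenario in the post, which is exactly the discomfort: whichever line of code ships is someone's ethics, written down.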

Comments

  1. Nice blog! It’s interesting to think about how autonomous cars should react to situations where an accident is inevitable. They could definitely be programmed to always act a certain way when this happens, and they’d probably be programmed to put their own passengers’ lives above anything else. Getting into the ethics of an autonomous car’s “decision” obviously doesn’t work since, at present, it isn’t possible to give AI morals. The AI in an autonomous car would still be very human-like in some ways, such as how it would sense objects around it and act in response to them, but it wouldn’t be able to make decisions based on what it thought was the right thing to do, as that would require a moral compass. However, if AIs become more able to make decisions to achieve the best possible outcome, such as an autonomous car acting in a way that would save four people at the cost of three, this could in a way start to mimic the ethics of humans and increase the similarities between people and machines. Advancement of AI beyond this very black-and-white form of morals, though, would drastically blur the line between humans and robots, as feeling compassion and making decisions based on morals are abilities that play a huge part in defining humans.

  2. Interesting blog! But let's say the car can be programmed to save the school bus: how would the car know whether there are children on the bus? And if you were driving the car yourself, human reflexes would not let you put yourself in harm's way; your reaction would do whatever it takes to save you. If you program the car the other way, aren't you just signing your own death sentence?

  4. I totally respect that you don't drive; I hate driving and try to avoid it at all costs. Americans are extremely dependent on their automobiles: chances are, if someone grew up in a suburb, they've probably never even been on a bus. Dependence on anything can be a cause for concern, and an autonomous vehicle would definitely be something people come to rely on a little too much.
    You talked about programming ethics into a machine, and I loved your comments. How are we supposed to program into machines what is right and wrong when we ourselves are fallible? Humans don't always know wrong from right, so how can we depend on extensions of humans to understand this?

