Driverless cars can be taught ethics – report


A new study suggests that the moral decisions humans make while driving are not as complex or context-dependent as previously thought.

Futurism.com reports that research from the Institute of Cognitive Science at the University of Osnabrück, published in Frontiers in Behavioral Neuroscience, has found that these decisions follow a fairly simple value-of-life-based model, which means programming autonomous vehicles to make ethical decisions should be relatively easy.

For the study, 105 participants were put in a virtual reality (VR) scenario during which they drove around suburbia on a foggy day. They then encountered unavoidable dilemmas that forced them to choose between hitting people, animals, and inanimate objects with their virtual car.

The report says the previous assumption was that these types of moral decisions were highly contextual and therefore beyond computational modeling. “But we found quite the opposite,” Leon Sütfeld, first author of the study, told Science Daily. “Human behaviour in dilemma situations can be modeled by a rather simple value-of-life-based model that is attributed by the participant to every human, animal, or inanimate object.”
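To make the idea concrete, a value-of-life approach can be thought of as assigning each potential collision target a numeric value and choosing the manoeuvre that loses the least. The sketch below is purely illustrative and is not the study's actual model; the object categories, values, and function names are assumptions chosen for demonstration.

```python
# Illustrative sketch of a simple value-of-life decision rule.
# The categories and numeric values are assumptions for demonstration
# only; they are not taken from the Osnabrück study.

from typing import Dict, List

# Hypothetical values attributed to each kind of obstacle.
VALUE_OF_LIFE: Dict[str, float] = {
    "adult": 1.0,
    "child": 1.2,
    "dog": 0.3,
    "traffic_cone": 0.01,
}

def cost_of_path(obstacles: List[str]) -> float:
    """Total value lost if the car takes a path that hits these obstacles."""
    return sum(VALUE_OF_LIFE.get(obstacle, 0.0) for obstacle in obstacles)

def choose_path(paths: Dict[str, List[str]]) -> str:
    """Pick the path whose collisions carry the lowest total value."""
    return min(paths, key=lambda name: cost_of_path(paths[name]))

if __name__ == "__main__":
    # Unavoidable dilemma: every available path hits something.
    dilemma = {
        "left_lane": ["dog"],
        "right_lane": ["adult"],
        "straight": ["traffic_cone", "traffic_cone"],
    }
    print(choose_path(dilemma))  # -> "straight"
```

The point of such a sketch is only that the decision rule is computationally simple: once values are attributed to the objects involved, choosing between outcomes reduces to a comparison of totals.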

“A lot of virtual ink has been spilt online concerning the benefits of driverless cars,” says the report. “Elon Musk is in the vanguard, stating emphatically that those who do not support the technology are ‘killing people’. His view is that the technology can be smarter, more impartial, and better at driving than humans, and thus able to save lives.

“Currently, however, the cars are large pieces of hardware supported by rudimentary driverless technology. The question of how many lives they could save is contingent upon how we choose to program them, and that’s where the results of this study come into play. If we expect driverless cars to be better than humans, why would we program them like human drivers?

“Just how safe driverless vehicles will be in the future is dependent on how we choose to program them, and while that task won’t be easy, knowing how we would react in various situations should help us along the way.”

 