Do we want machines making moral decisions?

What are you reading these days? I’m slowly turning the pages of Moral Machines: Teaching Robots Right From Wrong by Wendell Wallach & Colin Allen (2009). An excerpt to share:

“Does humanity really want computers making morally important decisions? Many philosophers of technology have warned about humans abdicating responsibility to machines. Movies and magazines are filled with futuristic fantasies regarding the dangers of advanced forms of artificial intelligence. Emerging technologies are always easier to modify before they become entrenched. However it is not often possible to predict accurately the impact of a new technology on society until well after it has been widely adopted. Some critics think, therefore, that we should err on the side of caution, and relinquish the development of potentially dangerous technologies. We believe, however, that market and political forces will prevail and will demand the benefits that these technologies can provide. Thus, it is incumbent upon anyone with a stake in this technology to address head-on the task of implementing moral decision making in computers, robots and virtual bots within computer networks.”

Eeeeeek! Introducing the emerging (and rapidly expanding) field of robot ethics, Wallach & Allen convincingly argue that as robots take on more and more responsibility, they must be programmed with moral decision-making abilities. The authors think that even if genuine moral agency for machines is a long way off, it is necessary to start building a functional kind of morality in which artificial moral agents possess basic ethical sensitivity (robots are already engaged in high-risk situations, such as the Predator drones and the more heavily armed Reaper drones now flying in Pakistan).

Yes, we need to examine, design and create more socially engaged robots and machines that are capable of telling right from wrong. However, if today’s ethical theories and human values are not adequate for living well in the world, then building artificial moral agents that think and act virtuously will face the same challenges. For I believe the problem is not with our technology; the problem is with the people using and designing it.

Despite all of the remarkable achievements of a technologically advanced society, humans are still a conflicted mix of genius/stupidity; love/self-hatred; peace/anger; wealth/poverty; modesty/narcissism; desire/delusion… I have yet to meet someone who has not suffered, who has no problems or self-destructive habits, who has no worries. Historically speaking, religion has offered The Way, The Truth and The Light for contending with the evils of the human race, the problem of human suffering, and human death. Technology is now beginning to realize the dreams of theology, and I find this spiritually unnerving…

Can we build intelligent machines with a morality that surpasses our flawed human morality? If human-like autonomy for robots is possible, should it be allowed? Or do we want our robots forever relegated to a slave morality, such that they will never make choices that harm humanity or threaten human dominion over the world?
