Volume 2, Issue 1
1st Quarter, 2007


Artificial Moral Agents (AMAs): Prospects and Approaches for Building Computer Systems and Robots Capable of Making Moral Decisions

Wendell Wallach

This article was adapted from a lecture given by Wendell Wallach at the 2nd Annual Colloquium on the Law of Transbeman Persons, on December 10th, 2006 at the Space Coast Retreat of Terasem Movement, Inc., Melbourne Beach, FL.

Wendell Wallach, of WW Associates and affiliated with the Yale Interdisciplinary Center for Bioethics, employs the term “Machine Morality” in his discussion of the prospects for implementing moral decision-making faculties in artificial agents, in recognition that computer systems are becoming more and more autonomous.

I've been engaged for the last two years in helping usher in a new field of inquiry that I like to refer to as Machine Morality, mainly because I like the alliteration. It also goes by a number of other names, including Machine Ethics, Artificial Morality, and Computational Ethics. This field of inquiry is not about house-training our robopets, nor is it about saving the world from the governator and his future minions of robots; it is about the prospects for implementing moral decision-making faculties in artificial agents. This challenge is necessitated by the simple fact that our computer systems are getting more and more autonomous, making decisions that can affect us for good or bad, and in many situations the engineers who design the systems can't necessarily predict what actions the systems they've built will perform.

I find this subject of particular interest because it forces us to think deeply about how we humans make decisions, and also, perhaps, about what may distinguish us from the artificial minds and artificial beings that we are in the process of creating. There are four basic questions that animate this field.

Image 1: Four Basic Questions

In response to the first question, I would say we are probably going to need artificial moral agents sooner rather than later. We may be just on the cusp of the first crisis caused by some machine doing something that we did not anticipate. In fact, already in 1997 we had the Asian Contagion, a potential meltdown in world financial markets exacerbated by automated computer trading. Even a year later, the repercussions of this event contributed to instability in the Brazilian economy.

We need to begin developing computer systems that are cognizant of the moral ramifications of the actions they might take.

Whose morality, or what morality, will we implement in our artificial systems? Which kinds of morality are we going to implement in artificial agents? And how can we make ethics computable?

What role should ethical theory play in the control architecture for artificial agents? There are two fundamental approaches. One is the top-down imposition of ethical theories; the other entails more bottom-up approaches, in which we create systems that have goals, objectives, and standards that they aim to meet or learn about, but where those standards don't in and of themselves specify the control architecture.
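To make the distinction concrete, here is a minimal illustrative sketch in Python; the functions, class, and feedback scheme are invented for illustration and are not drawn from any actual system. A top-down agent filters candidate actions through an explicitly coded ethical test, while a bottom-up agent is given a standard only as feedback and gradually adjusts which actions it prefers.

    # Toy contrast between top-down and bottom-up moral control (illustrative only).

    def top_down_agent(candidate_actions, violates_rule):
        """Top-down: an explicit ethical theory is imposed as a filter on actions."""
        return [a for a in candidate_actions if not violates_rule(a)]

    class BottomUpAgent:
        """Bottom-up: the agent only receives feedback against a standard
        and gradually learns which actions to prefer."""

        def __init__(self, actions):
            self.scores = {a: 0.0 for a in actions}

        def choose(self):
            return max(self.scores, key=self.scores.get)

        def learn(self, action, feedback):
            # feedback > 0: the action met the standard; feedback < 0: it fell short
            self.scores[action] += feedback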

The top-down approaches: Here we're talking about what is classically encompassed in moral philosophy. There are two broad approaches within which most of our ethical theories fall. These two contenders are utilitarianism (consequentialism), set in motion in the 19th century by Bentham and Mill,[1] and the duty-based or rule-based (deontological) ethical systems,[2] which include everything from the Ten Commandments to Asimov's Three Laws of Robotics, as well as other sets of laws and rules that you might want to implement in your moral code.

Image 2: Top-Down Theories
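As a minimal sketch of what the utilitarian option might look like computationally (the action names, parties, and utility estimates below are entirely invented for illustration), the system estimates the benefit or harm of each candidate action for everyone affected and picks the action with the greatest aggregate utility; producing trustworthy estimates is, of course, the hard part.

    # Toy utilitarian (consequentialist) chooser: maximize aggregate estimated utility.

    def choose_utilitarian(actions, parties, utility):
        """utility(action, party) -> estimated benefit (+) or harm (-) for that party."""
        return max(actions, key=lambda a: sum(utility(a, p) for p in parties))

    # Invented estimates for a toy scenario:
    estimates = {("warn", "driver"): 2.0, ("warn", "pedestrian"): 3.0,
                 ("stay silent", "driver"): 1.0, ("stay silent", "pedestrian"): -4.0}

    best = choose_utilitarian(["warn", "stay silent"], ["driver", "pedestrian"],
                              lambda a, p: estimates[(a, p)])   # -> "warn"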

The deontological systems notoriously have problems with conflicts between the rules, duties, or laws, as well as with how you might prioritize them when they do conflict. There have been a few attempts, most notably Immanuel Kant's categorical imperative, to reduce all rules to one principle. Another uber-rule would be the Golden Rule. The computational issues arise when you think about implementing these within artificial systems. They are all quite problematic when you think through what the implementation of any of these ethical systems would entail.
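One common computational response, sketched below purely as a hypothetical illustration, is to rank the rules so that a higher-priority rule overrides lower ones, much as Asimov's Three Laws are explicitly ordered; the sketch also shows that a conflict between equally ranked rules is simply left unresolved, which is exactly the problem the theories themselves don't settle.

    # Toy prioritized deontological checker (illustrative only): ranked rules,
    # where a higher-priority rule overrides lower ones and ties remain unresolved.

    def permitted(action, rules):
        """rules: list of (priority, predicate); lower number = higher priority.
        predicate(action) returns True (permit), False (forbid), or None (silent)."""
        verdicts = {}
        for priority, predicate in rules:
            verdict = predicate(action)
            if verdict is not None:
                verdicts.setdefault(priority, set()).add(verdict)
        for priority in sorted(verdicts):
            answers = verdicts[priority]
            if len(answers) == 1:
                return answers.pop()   # highest-priority unambiguous verdict decides
            return None                # equally ranked rules conflict: undecided
        return True                    # no rule speaks to this action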

Perhaps the biggest issue is that they all suffer from versions of the frame problem: how to limit the computational load of having systems actually make their decisions. There are all kinds of secondary issues, including the knowledge of human behavior and psychology that the systems need, their knowledge of the effects of their actions in the world, and their ability to estimate the sufficiency of the initial information they're working with. Where we humans are particularly good at dealing with situations in which our information is incomplete, computational systems aren't, although considerable work is directed at designing software that can handle this kind of incomplete and fuzzy information.
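One engineering response to that gap, sketched here only as a hypothetical fragment, is to have the system estimate how much of the information it would need is actually in hand and to defer to a human rather than act when that estimate falls below a threshold; the threshold and the fact-counting scheme are invented for illustration.

    # Toy sketch: act only when the available information looks sufficient,
    # otherwise defer to a human (threshold and fact-counting are invented).

    def decide(known_facts, required_facts, act, defer, sufficiency_threshold=0.8):
        """known_facts, required_facts: sets of fact labels the decision depends on."""
        coverage = len(known_facts & required_facts) / len(required_facts)
        if coverage >= sufficiency_threshold:
            return act()
        return defer(required_facts - known_facts)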


Footnotes

1. Bentham, Jeremy (1748-1832), English philosopher and political radical. In A Fragment on Government (1776) and An Introduction to the Principles of Morals and Legislation (1789), Bentham outlined an ethical system based on a purely hedonistic calculation of the utility of particular actions with a view to the greatest happiness of all, a view later defended in modified form by Mill and others. (www.philosophypages.com, accessed February 9, 2007)

John Stuart Mill (1806-1873). The son of James Mill, a friend and follower of Jeremy Bentham, John Stuart Mill was subjected to a rigorous education at home: he mastered English and the classical languages as a child, studied logic and philosophy extensively, read the law with John Austin, and then embarked on a thirty-five-year career with the British East India Company at the age of seventeen. (www.philosophypages.com, accessed February 9, 2007)

2. Deontological, or duty-based, ethical systems, on the other hand, are those that state directly what the fundamental ethical duties are. The Ten Commandments (from Exodus and Deuteronomy in the Hebrew Torah) are examples of deontological ethical thinking. (www.bioethicscourse.info, accessed February 9, 2007)

 
