Ethics and Robotics with Dr. Anderson

Dr. Michael Anderson, from the University of Hartford, researches the principles and practices that inform the ethical behaviour of autonomous systems. He is using our mobile manipulator TIAGo to conduct research in machine ethics and to develop and test some of his ethical theories.

Discover his work in this video from PAL Robotics:

Learning to trust Artificial Intelligence

“Artificial Intelligence might serve as a backup of human intelligence when Earth comes to its ultimate demise,” Dr. Anderson argues.

Technology might help us preserve something that has taken billions of years of natural design to achieve and may even turn out to be unique in the universe.

To garner trust in Artificial Intelligence, and thus permit its continued development, ethical values must be incorporated into artificially intelligent agents.

Incorporating ethical principles in limited domains

“We build principles, ethical principles that drive a robot’s behaviour.”

Dr. Anderson says that incorporating such principles in fully autonomous robots functioning in an unconstrained world presents a very complicated problem, and he suggests that we begin our efforts in simpler domains.

Currently, robots are being built to operate in limited domains, such as caring for the elderly or taking inventory in stores. These contexts involve a limited set of actions and ethical duties, making it easier for the robot to make decisions.

The role of machine learning

“When ethicists discuss certain specific cases, there’s often agreement. We use those cases to learn the principles underneath that agreement.”

Dr. Anderson gives an example: “Imagine that a robot has to charge, and at the same time someone asks it to play ball, or deliver a medicine that would prevent a great deal of harm to a person.

“Every action that the robot takes satisfies or violates a collection of ethical duties. The robot would need to consider: what are the duties involved?”

Dr. Anderson and his team apply machine learning to the cases in which ethicists agree, analyse how the duties are balanced in those cases, and then use the resulting abstractions to decide the ethically correct action to take in each new situation.
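To make the idea concrete, here is a minimal sketch of this style of learning. The duty names, scores and cases below are invented for illustration, and the simple perceptron-style learner is an assumption of ours, not Dr. Anderson’s actual system: each action is scored against each duty, the agreed-upon cases are used to fit weights that balance the duties, and the learned weights then rank the actions in a new situation.

```python
# Illustrative sketch only, in the spirit of the approach described
# above; duty names, scores and cases are invented, and this is not
# Dr. Anderson's actual system.

DUTIES = ["prevent_harm", "honour_requests", "maintain_readiness"]

# Score each action against each duty: positive values satisfy the
# duty, negative values violate it (the scale is an assumption).
SCORES = {
    "deliver_medicine": [+2, +1, -1],  # prevents serious harm
    "play_ball":        [ 0, +1, -1],  # honours a request
    "recharge":         [ 0, -1, +2],  # keeps the robot available
}

# Cases on which ethicists agree: (preferred action, rejected action).
AGREED_CASES = [
    ("deliver_medicine", "recharge"),
    ("deliver_medicine", "play_ball"),
    ("recharge", "play_ball"),  # e.g. when nothing urgent is pending
]

# Perceptron-style learning: nudge the duty weights until every
# agreed case is ranked the way the ethicists ranked it.
weights = [0.0] * len(DUTIES)
for _ in range(100):
    for better, worse in AGREED_CASES:
        diff = [b - w for b, w in zip(SCORES[better], SCORES[worse])]
        if sum(wt * d for wt, d in zip(weights, diff)) <= 0:
            weights = [wt + d for wt, d in zip(weights, diff)]

# The learned weights now rank the actions in a new situation.
def choose(actions):
    return max(actions,
               key=lambda a: sum(wt * s for wt, s in zip(weights, SCORES[a])))

print(choose(["deliver_medicine", "play_ball", "recharge"]))
# -> deliver_medicine
```

In this toy run the learner settles on weights that strongly favour preventing harm, so the robot delivers the medicine before recharging or playing ball, matching the agreed cases it was trained on.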

The prima facie duty approach to ethics

Dr. Anderson’s method is far removed from Asimov’s “Three Laws of Robotics”.

“Although Asimov’s Laws themselves may hold some validity, they are represented as a hierarchy in which the first law always takes priority. Furthermore, it is not clear that the duties represented by these laws are complete.

“Instead, we use what is called the prima facie duty approach to ethics, where no single duty always overrides the others, since there are situations in which another duty might take precedence. Any number of duties can be weighed against one another.”
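The contrast with a strict hierarchy can be shown in a few lines. Again, the duty names, scores and weights below are invented assumptions offered only as an illustration: under a lexicographic hierarchy the top duty decides however small its margin, whereas prima facie weighing lets a normally weaker duty prevail when enough is at stake.

```python
# Illustrative contrast only; duty names, scores and weights are
# invented. Each action is scored against two duties, in the order
# (avoid_harm, honour_requests).
ACTIONS = {
    "ignore_request": (+1, -5),  # marginally safer, but a gross refusal
    "grant_request":  ( 0, +5),  # slightly riskier, honours the request
}

# Strict hierarchy (Asimov-style): the first duty decides outright,
# however small its margin; later duties only break exact ties.
def hierarchical_choice(acts):
    return max(acts, key=lambda a: acts[a])  # lexicographic tuple order

# Prima facie weighing: every duty counts and none always dominates.
WEIGHTS = (2.0, 1.0)  # relative importance, e.g. learned as above
def prima_facie_choice(acts):
    return max(acts,
               key=lambda a: sum(w * s for w, s in zip(WEIGHTS, acts[a])))

print(hierarchical_choice(ACTIONS))  # -> ignore_request
print(prima_facie_choice(ACTIONS))   # -> grant_request
```

The same duties and scores yield different choices: the hierarchy clings to its top duty even when its margin is trivial, while the prima facie weighing lets the larger stake of the lower-ranked duty win.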

Setting ethical limits

Where should the ethical limits lie for robots? Dr. Anderson asserts that robots should only be deployed in situations where there is agreement on the ethics involved.

“You should not put a robot in a situation in which the ethics are not yet clear. If we don’t understand the ethics involved, a robot should not be there – the ethics should come first.”

If you liked this interview, don’t miss the others: with Raquel Ros on social robotics, with Enrico Mingo Hoffman on complex humanoid robots in real scenarios, and with Séverin Lemaignan on social robots that foster stronger bonds.

More details of Dr. Anderson’s research can be found on our blog.

Don’t forget to drop us a line with any questions you might have!