09 March 2007

Robot Ethics?!

In his ground-breaking 1950 Mind article, “Computing Machinery and Intelligence,” Alan Turing made the following famous prediction:

I believe that in about fifty years’ time, it will be possible to programme computers ... to make them play the imitation game so well that an average interrogator will not have more than 70 percent chance of making the right identification after five minutes of questioning (p. 442).

(Aside: a nice article, entitled “Turing Test: 50 Years Later,” can be found here [pdf].)

When I taught the “imitation game” -- more commonly known today as the Turing Test -- in my intro seminar last week, we all smirked a bit at Turing’s over-confidence. Not only has no machine passed the test to date, but nothing has even come close. In fact, even the Loebner Prize -- an annual competition in which machines attempt to pass the Turing Test -- is awarded to the machine that fails least badly (i.e., the one diagnosed as non-human least quickly).
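(For the curious: Turing’s criterion is concrete enough to state in a few lines of code. The sketch below is a minimal Python illustration with invented judging data -- the function name and the verdict lists are my own, not records from any actual competition. It just checks whether the average interrogator makes the right identification no more than 70 percent of the time, which is all “passing” amounts to on Turing’s formulation.)

    # Turing's 1950 criterion: a machine "passes" the imitation game if
    # interrogators correctly identify it as a machine no more than 70
    # percent of the time after five minutes of questioning.

    def passes_turing_test(verdicts):
        """verdicts: list of booleans, one per five-minute conversation;
        True means the judge correctly identified the machine."""
        correct_rate = sum(verdicts) / len(verdicts)
        return correct_rate <= 0.70  # Turing's predicted threshold

    # Invented data for illustration: judges spot this machine 90% of
    # the time, so it fails -- much like the Loebner entrants.
    print(passes_turing_test([True] * 9 + [False] * 1))   # False

    # A machine that kept judges close to chance would pass.
    print(passes_turing_test([True] * 6 + [False] * 4))   # True

On this scoring, the Loebner entrants all land well above the 70 percent line, which is exactly why the prize can only reward the machine that fails least badly.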

So today’s news from the BBC comes as something of a surprise.

The Ethical Dilemmas of Robotics
BBC News – Technology
Friday, 9 March 2007

Scientists are already beginning to think seriously about the new ethical problems posed by current developments in robotics.

This week, experts in South Korea said they were drawing up an ethical code to prevent humans abusing robots, and vice versa. And, a group of leading roboticists called the European Robotics Network (Euron) has even started lobbying governments for legislation.

As these robots become more intelligent, it will become harder to decide who is responsible if they injure someone. Is the designer to blame, or the user, or the robot itself?

More intelligent? The moral of the Loebner Prize is that they’re not intelligent in the first place ... at all!

Not yet anyway ...
