If you turn on a sentient machine and it misbehaves, who is held accountable? The problem of setting appropriate boundaries for smart machines is not one for some distant post-singularity generation. The law firm K&L Gates is taking the ethical and legal complications of artificial intelligence very seriously: it has provided a $10 million research grant to Carnegie Mellon University to study the subject.
Highly automated industries are ground zero for these issues and will likely be the source of important precedents and standards going forward. For example, some of our society’s tech progeny have already literally taken to the streets in the form of driverless vehicles, and we don’t yet know who is liable when one of them is involved in an accident. But accountability for robotic drivers is a relatively simple situation compared to the ethical quagmire we’ll be stuck in when our first truly brilliant robot is given a task and says, “No.”
At that point, it will not even be clear whether the machine is property or a person.
K&L Gates’ Chairman Peter Kalis says, “I hear people seriously maintaining that artificially intelligent robots ought to replace judges. When we get to that point, it’s a matter of profound constitutional and social consequence for any country, any nation which prizes the rule of law.”
Which makes a $10 million pilot study look a little inadequate. Fortunately, that is not the only significant effort on this front. Tech industry leaders including Google and Microsoft have formed the Partnership on AI, with a mandate to “advance the understanding of AI technologies including machine perception, learning, and automated reasoning.”
Leadership in this arena is not just the task of academia and corporate giants. As our machines get smarter and more deeply embedded in every aspect of our lives, the legal and ethical boundaries we set for the actions of these devices are a subject of conversation for every level of society.
See NPR’s All Tech Considered here for the full story.