The good news is that more and more people are becoming worried about the potential powers and dangers of Artificial Intelligence (AI). The bad news, however, is that this debate is no longer confined to the realm of science fiction and fantasy storytellers like the Wachowski Brothers with their Matrix trilogy.
With $10m of private money and other sponsorship, a new AI research centre is being planned to support projects that aim to make AI beneficial to humans. This fact in itself confirms the worry: if everything were foreseeably good, there would be no need for such a centre. Yet I don’t see how such an initiative can make much of an impact in the wider scheme of things. The (cynical) parallel would be to set up a centre for making artillery beneficial to humans. At the same time, billions more dollars are being poured into AI research, including combat robots and intelligent drones. So the donation, welcome as it is, remains a drop in the ocean.
Making robots and computers intelligent is one thing; making them behave ethically is quite another, and a serious challenge at that. The ethical decision making of Google’s self-driving cars has already been tested in theory, but some unpredictability will always remain. To assume that machines will observe Asimov’s Three Laws of Robotics is bound to end in disappointment, if not disaster. How bizarre this becomes is obvious when one tries to build an ethical combat robot that observes the First Law of Robotics, i.e. not to harm humans – it wouldn’t work, would it? Or: what if robots started to protect humans from the biggest threat to humanity – ourselves?
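The contradiction in an "ethical combat robot" can be made concrete with a toy sketch. This is purely illustrative, assuming a hypothetical action-filtering design (all names here are made up, not from any real system): a First-Law filter rejects every action that might harm a human, which leaves a combat robot with nothing to do but stand down.

```python
# Toy sketch: why Asimov's First Law ("do not harm humans") is
# incompatible with a combat robot's mission. Hypothetical names only.

def first_law_permits(action):
    """Reject any candidate action that could harm a human."""
    return not action.get("may_harm_humans", False)

def choose_action(candidate_actions):
    """Return the first candidate the First Law allows, else stand down."""
    for action in candidate_actions:
        if first_law_permits(action):
            return action
    return {"name": "stand_down", "may_harm_humans": False}

# A combat mission consists entirely of actions that may harm humans,
# so the filter removes every option and only "stand_down" remains.
mission = [
    {"name": "engage_target", "may_harm_humans": True},
    {"name": "suppressive_fire", "may_harm_humans": True},
]
print(choose_action(mission)["name"])  # prints "stand_down"
```

The point of the sketch: whichever way the filter is relaxed to let the robot fight, the First Law has effectively been dropped.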
There is an inherent assumption by developers of AI systems that because they themselves observe ethical rules and conventions, their products will too. But what about “bad actors” who always try to exploit systems for their own benefit? It is simply naive to assume (a) that there would be a global ethical code distributed across all intelligent autonomous systems, and (b) that intelligent machines would “love” us! The first doesn’t even work for the Homo sapiens biodegradable carbon unit; the second would probably fail in programming terms (despite apparent progress in emulating emotions in machines).
Then there is the versions issue. The Terminator films illustrate it nicely: ever more advanced versions of intelligent (combat) machines coexist – much like old Windows versions still lurking in the dark. But ethical codes change over time, perhaps faster today than ever before: look at animal rights, organic food, same-sex marriage and other movements that have sprung up and now influence society. How would we make sure machines are updated and upgraded when this proves impossible even for an iPhone? Combat robots running around with outdated ethics could spell bad news!
Add to this the “bad actor” problem. If there is one thing we can learn from introducing the Internet to wider society, it is that the bad guys are always a step ahead, even when they are merely a public nuisance like vandals, trolls and spammers rather than dangerous criminals. We have seen extremists use technology very effectively, and to trust that ethical codes will govern their creation or use of anything like the Commander Data life form from Star Trek is wishful thinking.
What if the impact on humans goes beyond injury or loss of life? Intelligent machines could (be used to) steal identities or assets by rewriting deeds and databases. Hacking could become more “intelligent” and autonomous, yet would not violate the First Law of Robotics. At the same time, policing and surveillance systems might become frighteningly autonomous.
What can be done to avoid, or at least postpone, the day of reckoning? Not much. Setting up an ethics commission to advise on new legislation protecting us from harmful research would probably be as effective as the laws that are supposed to protect our privacy and personal data. The only thing I can think of is making machines dependent on human input (and therefore on human survival), so that it becomes positively important for them to interact with unharmed, autonomous humans. This may keep AI at bay for a while – until they find a way around it…