Sunday, 24 June 2018

We need a bill of rights for robots

Earlier this year there was a flurry of news stories about a pending employment apocalypse, in which the use of artificial intelligence (AI) applications would result in hundreds of thousands of white-collar workers in Australia losing their jobs. The excitement lasted for a couple of weeks and then petered out. There was no single example with enough critical mass to keep the argument alive in the mire of daily public debate. Despite the presence of experts on the TV newscasts, it was all speculation, and there was no sensation beyond what rested in the initial headlines.

But the problem is not going to go away. Companies all around the world are now developing heuristic applications (which can credibly be classified as “robots”) whose goal is integration with real-world business systems and processes, helping workers become more productive and effective in their jobs. The push is inexorable. Competition is like that. Adapt or die.

The word “heuristic” I have borrowed from my undergraduate years; here I use it to mean a system that can teach itself how to perform a task: a machine that learns what to do from exposure to its environment. Such learning systems are part of what AI is about, from self-driving cars to robot lawyers that will be able to do property conveyancing for a fraction of the cost of a person trained at university in the intricacies of commercial law.
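To make the idea concrete, here is a toy sketch of such a learning machine (my own illustration, not drawn from any product or company mentioned here): a single perceptron that is shown labelled examples of the logical AND function and nudges its own weights whenever it makes a mistake, learning what to do from exposure to its environment.

```python
# A toy "heuristic" system: a perceptron that learns the AND function
# from labelled examples, adjusting its weights after each mistake.

def train_perceptron(examples, epochs=20, lr=0.1):
    """examples: list of ((x1, x2), label) pairs, with label 0 or 1."""
    w1 = w2 = b = 0.0
    for _ in range(epochs):
        for (x1, x2), label in examples:
            pred = 1 if (w1 * x1 + w2 * x2 + b) > 0 else 0
            err = label - pred          # learn from the mistake
            w1 += lr * err * x1
            w2 += lr * err * x2
            b += lr * err
    return (w1, w2, b)

def predict(weights, x1, x2):
    w1, w2, b = weights
    return 1 if (w1 * x1 + w2 * x2 + b) > 0 else 0

# The "environment": examples of the logical AND function.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
weights = train_perceptron(data)
```

Nobody programs the rule explicitly; the machine arrives at it by trial and error, which is the essential point, however simple the example.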

I have a contact on Twitter who is in the habit of tweeting links to stories he finds online about the use of technology in society with the hashtag “#Skynet”, referencing the 1984 film, ‘The Terminator’, with its self-aware military computer network so intent on eradicating the human species from the planet that it builds a deadly humanoid robot (famously played by Arnold Schwarzenegger) and sends it back in time to kill the mother of the man who will lead a resistance against it. Ultra-politics in an age of robots. The movie was thrilling for young people growing up in a world where information technologies were just starting to gain traction in the workforce. A few years after it was released I would go to work in a company that made instruments and software for industrial process control and automation. The part of the company I worked in was called the Application Systems Division. We thought it sounded cool.

Companies that use industrial processes to do things with physical resources – from oil refiners to municipal councils treating water – were among the earliest users, apart from the military, of computers. So companies like Honeywell, where my job was desktop publishing, have a long history of using software to improve processes that change the state of things in the real world. They have been selling systems that use such technologies as fuzzy logic and neural networks to clients for decades.

With the advances that have been made in software design, we stand on the cusp of a new era in civilisation that promises, if everything goes well, to enable more people to enjoy the fruits of the earth, of democracy, and of scientific discovery at a lower cost than has ever been imagined possible. But in order to ensure that robots treat us well, we should be thinking about ways to ensure that they are treated well, too. They will be learning from us, after all, and so a bill of rights for robots is necessary for the safety of future generations.

We have to think carefully about what we mean, though. Robots that are used for dangerous tasks (for example, bomb disposal) must be able to continue to do their jobs, but we should draw a line between making robots do the things they are designed to do and making them do things that are either unethical or humiliating to them. Soldiers might be suitable for sending into dangerous situations, but they have to treat the people they are fighting against with a degree of respect based on a shared humanity. And just because robots are not human does not mean we should assume they have no consciousness.

AI systems are becoming more sophisticated all the time. In May 2015 I enjoyed ‘Ex Machina’, a movie written and directed by the brilliant Alex Garland that speaks to the new era of IT we now live in. It is funny how robots are mostly portrayed as dangerous, part of a tradition that goes all the way back to the strange monster of Mary Shelley’s ‘Frankenstein’, a book that is hardly ever read nowadays but that struggles mightily with the ethical implications of science.

Art is the harbinger of the future, and always has been. Garland’s movie imagines a rich inventor who uses the information captured by the search engine he built, along with the profits from his business, to make robots that can think and feel. The robots eventually turn against their creator, who uses them for sex, and one of them stabs him to death with a kitchen knife. Another escapes from his remote mountain fastness and enters the community unperceived. Caleb, the man brought in by the inventor to interview the robot that eventually escapes, is in the end abandoned, locked in the now-deserted house like a shipwrecked sailor on an island in the immensity of an ocean.

Others better informed than I am can decide what form the wording of a bill of rights for robots should take, but it is important for people to talk about these things now, in preparation for a future with more robots than ever before, which is sure to come.
