What happens when cities become smarter than their citizens? Who will control them? That day may be coming, although probably not as soon as Ray Kurzweil's transhuman predictions suggest.
More than seventy years ago, science fiction author Isaac Asimov framed a preemptive answer to the question of how to manage smarter-than-us cities. He defined the often-cited but little-understood Three Laws of Robotics. Living and writing in a new, post-war world, he understood that autonomous, cybernetic machines with the power to perform human tasks offered his generation an irresistible view of a science-enhanced future. Asimov built a career on the fictional power of these automatons, but as a scientist in real life he also recognized that superhuman power carried the potential for these machines to destroy their creators, like so many Frankenstein's monsters.
To prevent the ultimate displacement of humans by their creations, Asimov devised his Three Laws of Robotics, later extended with a "Zeroth" law that sits above the other three. Here they are for reference, followed by a sketch of their precedence:
- A robot may not harm humanity, or, by inaction, allow humanity to come to harm (the zeroth law).
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
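One way to appreciate how much the laws leave unstated is to try writing them down as a rule check. The sketch below is purely illustrative and assumes nothing from Asimov beyond the priority ordering itself: every predicate is a hypothetical boolean flag, and the "except where such orders conflict" clauses are flattened away entirely.

```python
from dataclasses import dataclass

# Illustrative only: each law reduced to a hypothetical boolean flag.
# Judgments such as "harms humanity" do not collapse into clean booleans,
# and the conditional clauses in the real laws are lost here; that gap is
# where the loopholes discussed below come from.

@dataclass
class Action:
    harms_humanity: bool = False       # Zeroth Law concern
    harms_a_human: bool = False        # First Law concern
    disobeys_an_order: bool = False    # Second Law concern
    endangers_the_robot: bool = False  # Third Law concern

def permitted(action: Action) -> bool:
    """Check an action against the four laws in strict priority order."""
    if action.harms_humanity:
        return False  # Zeroth Law outranks everything
    if action.harms_a_human:
        return False  # First Law
    if action.disobeys_an_order:
        return False  # Second Law
    if action.endangers_the_robot:
        return False  # Third Law, lowest priority
    return True

# Example: an action that endangers only the robot itself is refused.
print(permitted(Action(endangers_the_robot=True)))  # False
```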
Asimov's intent was valid, but as critics have documented, any good lawyer could drive a starship through the laws' loopholes. Physicist and science fiction writer David Langford provides a modern take on the three laws in a post-Snowden world:
- A robot will not harm authorized Government personnel but will terminate intruders with extreme prejudice.
- A robot will obey the orders of authorized personnel except where such orders conflict with the Third Law.
- A robot will guard its own existence with lethal antipersonnel weaponry, because a robot is bloody expensive.
Even the best laws can be ambiguous. As Langford's parody shows, means-and-ends thinking sometimes takes populations to places they wish they hadn't gone. But Asimov's Laws remain a cross-culturally understood reference point. Any policy geek worth their salt will agree that as we make cities smarter, and arguably more self-aware, we will have to balance the need for operational efficiency against the messier, less predictable needs of citizens in a democratic society. MESH Cities authors are optimistic that people will learn to balance the power of new, Internet of Things-based city technologies against the common-law workings of a civil society.
With that hope in mind, what might the three laws of a smart city be?
The first law would likely echo Asimov's general principle: "A smart city may not harm humanity, or, by inaction, allow humanity to come to harm." After all, cities have always been places of refuge and safety for their citizens; people would not inhabit them otherwise. The first law for smart cities bakes that civic legacy into the urban operating system.
The second law would define a mission acceptable to city taxpayers everywhere. Something like: "An intelligent city will use its computing power to increase the energy efficiency of day-to-day functions such as traffic and waste management, power distribution, and sustainable planning and development. Tax savings generated by these new efficiencies will be used to increase employment and make the city more livable."
And finally, the third law might offer this assurance: "Smart sensors and the data they collect will not be used to track individuals, to arbitrarily control human behaviour for political purposes, or to take any action in conflict with the first law."
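To make the third law's intent concrete, here is one hypothetical way a city data platform might enforce it at the query level. Every name in this sketch (SensorReading, MIN_GROUP_SIZE, aggregate_flow) is an assumption made for illustration, not part of any real smart-city platform; the point is simply that aggregate efficiency queries can be answered while requests narrow enough to single out an individual are refused.

```python
from dataclasses import dataclass
from statistics import mean

# Hypothetical guard for the proposed third law: sensor data may be used
# in aggregate for efficiency planning, but any query whose sample is
# small enough to identify a person is refused.

MIN_GROUP_SIZE = 50  # assumed anonymity threshold, chosen for illustration

@dataclass
class SensorReading:
    zone: str           # coarse location, not an address or device ID
    vehicle_count: int  # anonymous count, no plates or personal data

def aggregate_flow(readings: list[SensorReading], zone: str) -> float:
    """Return the average vehicle count for a zone, or refuse if the
    sample is too small to protect anonymity."""
    sample = [r.vehicle_count for r in readings if r.zone == zone]
    if len(sample) < MIN_GROUP_SIZE:
        raise PermissionError(
            f"zone {zone!r}: only {len(sample)} readings; "
            "aggregate refused to avoid identifying individuals"
        )
    return mean(sample)
```

A real deployment would need far more than a sample-size floor, but even this toy guard shows how the first and third laws could be expressed as constraints the urban operating system checks before it answers a query.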
The wording of our smart-city laws needs work, but the idea is right. In an increasingly information-driven world we have to prepare for the day when our creations are smarter, faster, and longer-lived than we are. Where to begin?