Are these relevant today?
The best part of summer is getting time to read. This year I took the opportunity to catch up on a classic, “I, Robot” by Isaac Asimov. More than 75 years after his laws of robotics were first written, I couldn’t help wondering whether they are still relevant today, what other laws might apply given what we know now, and whether universal laws like these are even possible.
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws. (More from Wikipedia)
So to my original question: are these relevant today? I think it’s safe to say the first law is too simplistic to be actionable. It made sense when a robot was a physical machine that interacted with humans in some way, but robots today are far more complex, and are both physical and digital. What if an action a robot took led to the harm of a human without causing it directly? This is an increasingly likely scenario given the military’s existing use of drones and robotics. There is also the element of psychological and emotional harm. What about people whose jobs are made redundant through automation and who have no other immediately employable skill? Does that count as harm? It’s a known inevitability today, because we as humans choose automation over human employees for various reasons. Who, then, causes the harm? You can’t really blame the human or the robot alone. We’ve moved beyond the first law, and it’s no longer reasonable to expect it to hold.
The second law is also too simplistic for where we are today. Given the advances in AI, shouldn’t we want robots to know better, based on more data and insight, and to disagree with our orders when there is a better option? There should be protocols built in for how they make decisions and what they prioritize, but we are well past the point where robots simply take orders. The recent success of an AI beating top doctors at diagnosing brain tumors is just one of many examples that validate this. We need to allow them to overrule us when it’s in the best interest of the robot’s function. (More on that below)
Coming to the third law, I think we need to reopen the already heated debate about rights for non-human beings. At what point of awareness and consciousness does a being have the right to exist, to protect itself, and to be protected? With the Turing test and similar measures, the benchmark has been how accurately a machine portrays believable human behavior and intelligence. Robots aren’t human, however, so why are we the benchmark? Animals have rights because they are aware and have thoughts and feelings. How can we design rights that protect beings with those abilities without a human-level comparison? What is the absolute proof of awareness and sentience, and how can we design policy to protect those beings’ right to exist? I don’t have an answer, but I’m glad people smarter than I am are out there spending a lot of time and effort trying to figure this out.
What other laws may be relevant?
The first reaction on seeing a need for laws or regulation of new technology has often been to control it and slow it down. A good example is the 3 km/h speed limit imposed in UK cities when automobiles were introduced, which also required a man with a flag and lantern to walk in front of the car. But faster cars were built, and the law no longer made sense. The objective, missed at first, wasn’t to slow cars down but to ensure ways of operating that reduce the risk to people and property.
I’m not sure laws here are possible, at least not in the sense Asimov conceived them. Instead, I propose a thought formula tailored to the function of the robot:
A = value of the robot’s self
B = value of other beings
C = value of property and environment
D = value of the robot’s function

(A × B × C) / D = decision and action
The value of the robot’s function has to be a defining factor in the rules and protocols it’s designed to follow. For example, a demolition robot cannot protect property and still succeed in its function. A military robot designed to target threats (beings or property) cannot protect beings and/or property and succeed in its function. It has to be a big-picture evaluation, considering all the variables it’s possible to understand within the scope of the function.
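To make the formula concrete, here is a minimal sketch of how it might drive a decision loop. Everything here is assumed for illustration: the 0-to-1 scoring, the threshold, and the reading that a lower score (low risk relative to the function’s value) permits action. A real system would derive A, B, C, and D from far richer models.

```python
from dataclasses import dataclass

@dataclass
class Context:
    """Hypothetical inputs to the thought formula, each scored 0.0 to 1.0."""
    self_value: float      # A: value of the robot's self
    beings_value: float    # B: value of other beings put at risk
    property_value: float  # C: value of property and environment put at risk
    function_value: float  # D: value of completing the function

def decision_score(ctx: Context) -> float:
    """(A x B x C) / D -- read here as: lower means the risks are small
    relative to the function's value, so the robot may act."""
    if ctx.function_value == 0:
        return float("inf")  # a worthless function never justifies any risk
    return (ctx.self_value * ctx.beings_value * ctx.property_value) / ctx.function_value

# The demolition robot from the example above: property inside its work zone
# is deliberately assigned near-zero value, so destroying it doesn't block
# the function.
demolition = Context(self_value=0.5, beings_value=1.0,
                     property_value=0.05, function_value=0.9)

ACT_THRESHOLD = 0.2  # hypothetical cutoff, tuned per function
verdict = "proceed" if decision_score(demolition) < ACT_THRESHOLD else "hold"
print(verdict, round(decision_score(demolition), 3))  # proceed 0.028
```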
A perspective we need to add here is that when we think about rules and laws for robotics, we’re holding robots to a higher standard than we hold ourselves. The insurance, energy, transportation, and other industries all have a calculated financial value for the death of a human. Here’s an old but interesting blog just to give you an idea. This means businesses will do something they know may kill someone if the financial benefit is believed to be worth the risk of having to pay out the financial value of the lives lost. This speaks to factors A, B, C, and D: calculations of these values already exist, and we use them in business today. So I think transparency is more reasonable to expect than laws, from both robots and ourselves.
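As a sketch of the kind of calculation this paragraph describes, here is the expected-liability arithmetic in miniature. Every figure is invented for illustration; no real case or company is implied.

```python
# Illustrative only: industries compare the cost of a safety measure against
# the expected liability, where
#   expected liability = expected fatalities x assigned value of a life.

value_of_life = 9_000_000     # hypothetical assigned value, in dollars
p_fatality = 1 / 1_000_000    # hypothetical fatality risk per unit shipped
units = 5_000_000             # hypothetical units in the field

expected_liability = p_fatality * units * value_of_life   # $45M
recall_cost = 60_000_000                                  # $60M

# The uncomfortable logic: if the fix costs more than the expected payout,
# the business case says ship anyway.
print("fix it" if recall_cost < expected_liability else "accept the risk")
```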
At the same time, this formula doesn’t have to be a sad indicator. Sometimes it adds up to something better. When Uber’s self-driving car killed a woman earlier this year, there were questions about whether self-driving cars should go ahead as a technology at all. However, if you add up all of the hours self-driving cars have accumulated so far and compare the safety statistics to human driving, it’s undeniably better to have them than not. And they are only just being developed; they will learn and improve constantly, whereas human drivers stay at a fairly constant skill level and can get lazy or complacent over time. We cannot assume the robots we build will be as prone to error as we are. They may be prone to other errors, and understanding and designing for those should be the priority.
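That comparison reduces to a simple rate calculation, sketched below. The human baseline is roughly in line with published US figures; the fleet mileage and incident count are placeholders, since the real totals change constantly.

```python
# Sketch of the rate comparison the argument relies on: fatalities per
# 100 million miles driven. The human baseline (~1.2) is close to published
# US figures; the autonomous-fleet inputs are hypothetical placeholders.
HUNDRED_MILLION_MILES = 100_000_000

def fatality_rate(fatalities: int, miles: float) -> float:
    """Fatalities per 100 million miles driven."""
    return fatalities / (miles / HUNDRED_MILLION_MILES)

human_rate = 1.2                                           # assumed baseline
av_rate = fatality_rate(fatalities=1, miles=500_000_000)   # placeholder data

print(f"human: {human_rate:.2f}, autonomous: {av_rate:.2f} per 100M miles")
# With few fleet miles, a single incident dominates the autonomous rate,
# so the comparison only firms up as miles accumulate.
```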
Are universal laws possible?
No, I doubt it, and I don’t think that comes as a surprise to anyone. Universal laws relating to technology development haven’t worked in the past and are inconsistent at best when it comes to operating existing technology. The important thing is to keep a global dialogue going around the ethics of using robots and, as they get closer to awareness and greater intelligence, around the rights they should be awarded as beings and the consequences they should face, depending on where the technology goes, where we take it, and ultimately where it takes itself.