Artificial Intelligence: Striking the right balance between caution and innovation

Friday, January 30, 2015

Sometime in the very near future, self-driving cars – currently being tested by the likes of Google – will be on our roads.

I am a dreamer. And, barring the cost of the vehicle when it lands in Nairobi or Kigali, I can already see myself alone in the back-left seat of the car on my way home from work, reading a good e-book or catching up with the news on my tablet – the tablet on which I will have programmed the route the vehicle is to take, autonomously, to beat the jam. What could possibly go wrong?

Already, this car will not only be “aware” of where it is on the road in relation to its environment through space-based satellites – the Global Positioning System (GPS) – but will also be in constant communication with other vehicles to avoid collisions. Its sensors will be on constant look-out, ready to slam on the brakes to avoid hitting a reckless pedestrian or cyclist on the road. Their efficiency is such that you could even doze in your seat and still find yourself parked right at your doorstep.
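For the curious, the heart of that emergency-braking behaviour can be reduced to a simple sense-and-react rule. The sketch below is purely illustrative – the function names, threshold and numbers are invented for this example, not drawn from any real vehicle’s software:

```python
# A minimal, hypothetical sketch of an emergency-braking decision.
# Every name and number here is invented for illustration.

SAFE_MARGIN_M = 2.0  # extra buffer beyond the computed stopping distance


def stopping_distance(speed_mps: float, deceleration_mps2: float = 6.0) -> float:
    """Distance needed to stop from the current speed: v^2 / (2a)."""
    return speed_mps ** 2 / (2 * deceleration_mps2)


def should_brake(obstacle_distance_m: float, speed_mps: float) -> bool:
    """Brake if the obstacle sits within stopping distance plus a margin."""
    return obstacle_distance_m <= stopping_distance(speed_mps) + SAFE_MARGIN_M


# At roughly 50 km/h (about 13.9 m/s), an obstacle 15 m ahead is inside
# the ~18 m the car needs to stop, so it must brake:
print(should_brake(obstacle_distance_m=15.0, speed_mps=13.9))  # True
```

A real system would run such checks many times a second, fusing data from radar, lidar and cameras; the point is simply that somewhere in the software, a threshold decides when the machine acts.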

And, just like today’s highly computerized cars, they will refuse to start, let alone move, if they sense anything wrong with the vehicle.

But let’s say you are on the road and something happens – a freak accident in which your car dents the bumper of another vehicle.

Let’s assume, in this case, your car is clearly in the wrong. Remember that you are only a passenger in your own vehicle, assured of the efficacy of its high-tech sensors.

Who should the incorruptible Rwanda Traffic Police blame for the accident: the owner, for his faith in the technology? The dealer who sold the vehicle, the manufacturer, or the developer of the software guiding it? All of them?

This is one of the ethical conundrums facing artificial intelligence (AI) captured in this year’s Global Risks Report by the World Economic Forum.

AI scientists, such as those developing the autonomous car, are building systems that can solve reasoning tasks, learn from data, make decisions and plans, serve as personal assistants (e.g., Apple’s Siri), play games, manipulate objects, respond to queries expressed in human languages, translate between languages (e.g., Google Translate), and much more.

The report observes that autonomous vehicles and other cases of human-robot interaction demand legal solutions fit for the novel combination of automatic decision-making with a capacity for physical harm.

Autonomous vehicles will encounter situations where they must weigh the risks of injury to passengers against the risks to pedestrians; what will the legal redress be for parties who believe the vehicle decided wrongly?
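To see why that question is so thorny, consider what such a weighing might look like stripped to its barest form. The sketch below is entirely hypothetical – the probabilities, severities and manoeuvres are invented for illustration, not taken from any real system:

```python
# Illustrative only: a crude expected-harm comparison between two manoeuvres.
# All values here are hypothetical.

def expected_harm(injury_probability: float, severity: float) -> float:
    """Expected harm: chance of injury multiplied by its severity (0 to 1)."""
    return injury_probability * severity


# Manoeuvre A: swerve off the road, risking the passengers.
passenger_harm = expected_harm(injury_probability=0.3, severity=0.4)   # 0.12

# Manoeuvre B: brake hard in lane, risking the pedestrian.
pedestrian_harm = expected_harm(injury_probability=0.1, severity=0.9)  # 0.09

# The software must choose, and someone must answer for that choice.
choice = "swerve" if passenger_harm < pedestrian_harm else "brake in lane"
print(choice)  # brake in lane
```

Whoever sets those numbers – the manufacturer, the programmer, perhaps a regulator – is in effect deciding, long before any accident, whose safety counts for more.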

Take another scenario, also in the report: Several nations are working towards the development of lethal autonomous weapons systems that can assess information, choose targets and open fire without human intervention.

Such developments raise new challenges for international law and the protection of noncombatants. Who will be accountable if such weapons violate international law? The Geneva Conventions are unclear. It is also not clear when human intervention occurs: before deployment, or during it?

Humans will be involved in programming autonomous weapons; the question is whether human control of the weapon ceases at the moment of deployment.

This presents a moral and legal dilemma, specifically, when – and where – to strike a balance between precaution and innovation.

"The timing issue is that decisions need to be taken today for technologies that have a highly uncertain future path, the consequences of which will be visible only in the long term.

Regulate too heavily at an early stage and a technology may thus fail to develop; adopt a laissez-faire approach for too long, and rapid developments may have irrevocable consequences.

Different kinds of regulatory oversight may be needed at different stages: when the scientific research is being conducted, when the technology is being developed, and when the technology is being applied.”

The regulatory framework needs to be tackled soon, not only globally but also nationally. Otherwise, dreamer that I am, it may be a while before I get to ride in that autonomous car.

The writer is a commentator on local and regional issues.