Reflections on humanoids: An alternative view

Sophia the robot addresses delegates at the 5th Transform Africa Summit in Kigali, last month. / File

I enjoyed Gatete Nyiringabo’s recent reflections on humanoids and the downsides of a potential A.I. revolution.

It made for interesting reading, even though I disagreed with much of it.

Gatete’s views on the dangers of our robotic future are quite common, and pessimism on this subject has long been the dominant narrative.

The idea that the machines will turn on us has been widely reflected in fiction, from Terminator to Black Mirror and countless other works.

That these depictions have influenced our view of the future seems incontestable to me, but of course that’s not the whole story.

Let’s divide the discussion into two scenarios: one where robots remain totally under our control, staying within their programming indefinitely, and another where the robots overcome their programming, develop their own ‘consciousness’ and take control of their own destinies.

As I will discuss further, in both cases the human factor is critically important. I’ll elaborate on this with the disclaimer that I am not an I.T. or robotics expert and will warmly welcome any dissenting expert voices.

I’ll also use ‘machines’, ‘robots’ and ‘A.I.’ interchangeably, although I am sure this will not be welcomed by the purists.

In the first scenario, where the machines we create remain within our control, it is obvious that in some cases they will be used for less than optimal ends. Indeed, that is already happening now: the death toll from American drones tells its own story, as do the reports of biases built into some algorithms.

Fundamentally, however, it feels to me that this is a human problem: our human desires manifesting themselves through technology. The only difference could be the sheer scale of destruction if this hypothetical Terminator-style world came to pass, but even that is questionable in a world that has had thousands of nuclear weapons for decades.

A world in which we harm each other using technology under our control is simply a continuation of our historical behaviour through different means.

The second scenario envisages a lack of control: the machines transcending their programming and going their own way, without our being able to roll back those changes.

It’s quite fascinating to me what our fear of robots, and of artificial intelligence in general, says about ourselves. Whether consciously or not, when we worry about robots deciding to ‘take power’ or kill us or control us, what we are really asking is: ‘what if the robots become like us?’

Every argument I have read about the potentially dangerous effects of the ‘machine revolution’ highlights behaviour that human beings have demonstrated from time immemorial. In this case, however, we won’t hold the power; something non-human will.

Power without humanity becomes the most terrifying vision because we have built so many positive values into our understanding of the word ‘humanity’.

However, one common theme in human history is that power has rarely been a neutral concept, and when it comes up against our understanding of what humanity should be, there is usually one winner.

Both cases are linked by the fact that, whether under our control or not, A.I. in the pessimistic scenario will essentially be acting like us. Yet the second scenario also exposes the limits of our understanding of the world, because it seems strange to imagine artificial intelligence sharing our worldly concerns unless programmed to do so.

Would they care about the geographical borders we erected around ourselves and how critical those borders are to our identities? Would they fix their identity to a god or care about life after death?

Would they have strong views on systems of government and political parties? Would they be jealous or bitter or resentful? Most of our sources of conflict are very specifically tied to our human concerns.

Ultimately, the fact that we cannot easily conceive of a way of seeing the world that isn’t a human way is a flaw, but also a very human one.

The views expressed in this article are those of the author.
