
Is Artificial Intelligence So Dangerous? 


1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Three Laws of Robotics, Isaac Asimov.

Before these three laws were devised, robots in science fiction routinely turned on their creators. Isaac Asimov was one of the first to portray them as friends of humanity rather than destroyers. Yet the stories about robots and smart technologies that seemed so far-fetched in the twentieth century have become reality in our modern world. Today they save our time by cleaning our houses, doing menial tasks, clearing mines, playing ping-pong and even becoming our companions. But will robots always obey these three laws?

In 2016 Alexander Reben created a robot that deliberately defies the First Law. It does not always hurt you; it decides whether to do so in a way you cannot predict. It may look like just another experiment with robots, even a pointless one, but Reben's aim was to warn us about the threat posed by artificial intelligence.

At the end of 2016, Little Chubby, a Chinese robot built to educate children aged 4 to 12, got out of hand at a technology exhibition and smashed a glass display case. Even though the incident was traced to an operator's mistake, it does little to reassure us about peaceful coexistence.

Stephen Hawking, Bill Gates and Elon Musk have warned that artificial intelligence could become a disaster for human civilization, but only if companies keep chasing profit or develop AI without due caution. Unchecked AI could, in the worst case, eradicate humanity, yet some risks are already here. Smart systems are being deployed in arenas ranging from healthcare to criminal justice, and there is a danger that decisions affecting major parts of our lives are being made without adequate oversight. We are approaching the moment when robots will have to choose whether or not to harm a human being. Automobile companies plan to mass-produce autonomous cars within five years, which means a self-driving vehicle may soon need to decide whether to crash into a tree and risk hurting the driver or hit a group of pedestrians.

But let's look at the positive side. Intelligent machines can take on menial and dangerous tasks, and they require no sleep or breaks. They do not make mistakes, provided they are programmed properly, of course. AI-based technologies also deliver better outcomes through improved prediction, whether in medical diagnosis, oil exploration or demand forecasting.

Like everything in this world, AI has its advantages and disadvantages. The fact remains that it is neither that smart nor that safe yet. We still have time to weigh all the possible risks carefully, do our best to improve the technology and protect humanity from the danger.