Will AI Serve Humanity Or Harm It?
The recent back-and-forth between Elon Musk and Mark Zuckerberg on the subject of Artificial Intelligence got me thinking. What does Elon Musk understand about AI that Mark Zuckerberg doesn’t?
Advancements in AI
Advancements in machine learning and AI have been coming at an accelerating pace, and there seems to be no end to the possibilities. From this come serious questions: Will AI serve humanity or eventually harm it? Is AI something to be feared?
It seems that we are only beginning to scratch the surface of the benefits AI can deliver to humanity. One school of thought holds that if we give robots the ability to be humanlike, to think and to learn on their own, humanity could benefit. On the other hand, we need to keep in mind that humans are flawed. Some have turned into despicable, murderous characters; whatever they learned and experienced brought them to a place where they demonstrate no respect for human life. Couldn’t the same be true for future robots allowed to develop and learn from their experiences? Could they become self-aware and begin to want things, perhaps to be the boss, or eventually the dominant species?
In 2015 this concern was raised by thousands of scientists and technologists in an open letter titled “AUTONOMOUS WEAPONS: AN OPEN LETTER FROM AI and ROBOTICS RESEARCHERS”. As the title suggests, the letter addressed the growing concern over autonomous weapons: weapons with no human in the loop that would make targeting decisions on their own. Shouldn’t human thought and conscience be behind the decision to pull the trigger or not?
The letter stated, “There are many ways in which AI can make battlefields safer for humans, especially civilians, without creating new tools for killing people.”
The other side of the argument is that autonomous weapons will make the battlefield safer and reduce casualties.
Weapons controlled by AI would likely trigger a new arms race; no one wants to be on the short end of that big stick. And once such weapons become common, it won’t be long before they are for sale on the black market, available to the highest bidder.
An article from Popular Science reports that “A new study by Arizona State University Global Security Initiative, supported by funding from Elon Musk’s Future of Life Institute, looks at autonomous systems that already exist in weapons, creating a baseline for how we understand the deadly decision by machines in the future.”
The letter was signed by many experts in Artificial Intelligence and related fields, including Stephen Hawking, Elon Musk, and Steve Wozniak.
The fact that Elon Musk signed this letter two years ago gives him credibility on the subject. His recent back-and-forth with Facebook’s CEO Mark Zuckerberg shows me that he has given this topic a great deal of thought, thought that goes beyond the development of commercial products and services that use AI.
Autonomous tech is coming sooner than we think. I can only hope that the AI technologies in our future will mind Isaac Asimov’s Three Laws of Robotics.
The first law—A robot may not injure a human being or, through inaction, allow a human being to come to harm.
The second law—A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
The third law—A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
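For readers who like to see rules expressed as logic, the three laws can be sketched as a priority-ordered filter over candidate actions. This is purely a toy illustration, not any real robotics API: the `Action` fields and the `choose_action` helper are invented for this sketch.

```python
# Toy sketch of Asimov's Three Laws as a priority-ordered action filter.
# All field names and the helper below are illustrative, not a real API.

from dataclasses import dataclass

@dataclass
class Action:
    description: str
    harms_human: bool = False      # would this action injure a human?
    prevents_harm: bool = False    # does it stop a human from coming to harm?
    ordered_by_human: bool = False # was it ordered by a human?
    risks_self: bool = False       # does it endanger the robot itself?

def choose_action(candidates):
    """Pick an action by applying the Three Laws in priority order."""
    # First Law: never consider an action that injures a human.
    safe = [a for a in candidates if not a.harms_human]
    # First Law (inaction clause): prefer actions that prevent human harm,
    # even at the robot's own risk.
    preventing = [a for a in safe if a.prevents_harm]
    if preventing:
        return preventing[0]
    # Second Law: obey human orders among the remaining safe actions.
    ordered = [a for a in safe if a.ordered_by_human]
    if ordered:
        return ordered[0]
    # Third Law: otherwise, prefer actions that preserve the robot.
    preserving = [a for a in safe if not a.risks_self]
    if preserving:
        return preserving[0]
    return safe[0] if safe else None

candidates = [
    Action("obey attack order", harms_human=True, ordered_by_human=True),
    Action("rescue human", prevents_harm=True, risks_self=True),
    Action("stand by"),
]
chosen = choose_action(candidates)
print(chosen.description)  # the First Law outranks orders and self-preservation
```

Note how the ordering does the work: an order that harms a human is filtered out before the Second Law is ever consulted, and self-preservation only matters once the first two laws are satisfied.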