Imagine a world where our every need is catered to by an army of sentient machines: robots that handle menial tasks such as cleaning and cooking, drive us safely to work, and grow our food more efficiently. These are just some of the potential uses of artificial intelligence, a field of computer science that aims to create intelligent machines able to learn and act on their own. Some argue that as we develop more powerful artificial intelligence, we will be able to tackle problems such as world poverty and hunger.
But not everyone is keen on these technologies. Skeptics include renowned theoretical physicist Stephen Hawking and inventor and entrepreneur Elon Musk, who has gone as far as saying that “…with artificial intelligence we are summoning the demon”.
Video uploaded to YouTube by Kostas Keramaris
Critics of these technologies are not questioning the potential benefits, but are weighing them against the inherent risks to humanity as a whole. One of the biggest risks is referred to as the technological singularity: a point at which artificial intelligence exceeds human capacity and control, which could potentially end civilization or humanity as we know it. In a nutshell, the machines would be able to learn at a rate beyond the limitations of human biology, and once outside the control of their creators, their behaviour may not be as intended, or may even be harmful to humankind.
Doomsday scenarios aside, advances in AI technology will undoubtedly have other negative effects on society. A recent report by an American non-partisan think tank looked at how AI would affect the workforce. Based on the opinions of over 1,900 experts, the report predicts that by the year 2025 AI and robotics will permeate every aspect of our lives, and that an increase in automation will put downward pressure on blue-collar jobs and, to some extent, white-collar jobs.
Having AI take over everyday tasks such as driving will also have interesting ramifications for our legal frameworks: who bears the responsibility if a vehicle driven by an algorithm crashes and kills a human? Should the blame rest with the software engineer, or should we take a robot to court?
The potential benefits of AI research are clear, so how can we ensure that its risks are dealt with accordingly? The Future of Life Institute, a volunteer-run group, has been trying to address these issues for a number of years. It recently put forth a proposal delineating some of the research that could be done to ensure AI remains beneficial and aligned with human interests, pushing to increase research on ways to make AI safer and to better understand its effects on society. The proposal has been backed by leading AI researchers and academics, including Stephen Hawking and Elon Musk, who has also pledged US$10 million to fund such research.
What’s your take on this issue? Do you cozy up with Siri every night and welcome the era of robotics with open arms? Or do you lie awake at night in fear of the robot uprising?
Written by Armando Dorantes Bollain y Goytia