Entrepreneur Elon Musk, along with 116 other founders of robotics and AI companies, recently signed a petition asking the UN to take steps to restrict lethal autonomous weapons. Stephen Hawking, meanwhile, has said that AI could spell doom for the human race. Amid such apprehension, we need to separate reality from imagination.
Artificial intelligence already surrounds us
A Google or Amazon search is actually powered by an AI that learns and improves each time you issue a query. Yet, strangely enough, this is called "narrow" or "weak" AI because it can only learn within a narrowly defined task and under human guidance, a serious limitation. "General" or "strong" AI, by contrast, is not yet a reality, but it would be able to learn on its own and improvise, potentially acting beyond human control. It is this second generation of AI that is creating such anxiety among the who's who of the science and technology world. The question is: will we someday really be able to produce AIs that can think, improvise and, if they want, control their inventors?
At some point there may be AI that can "think": gathering information from the real world, processing it, learning and improving its skills, and perhaps sharing what it learns with other AIs. However, how a machine can actually experience an analog world is still a subject of much research. No machine can touch, smell, taste, see or feel anywhere close to the way a living being does. Judging the flight, direction, angle or force of a ball from the crack of a bat, sensing gravitational pull, or comparing the smoothness and roughness of surfaces by touch remain outside the territory of any AI developed so far, and achieving them in the near future is a big ask.
AI can be controlled, but for how long?
The good news is that as long as AIs draw their energy from the power grid, cutting the supply can stop them. But as humans move toward superconductors and more efficient photovoltaic and other renewable energy sources, control over an AI's power, and hence its movement, could slip out of our hands.
AI will supersede humans someday
Even in humans, with our complex neural systems, the ability to anticipate the behaviour of another intelligent being is limited. In the case of an autonomous AI devoid of any ethics-based decision making, that unpredictability could be unnerving. AIs are programmed to achieve a specific task and are rewarded for accomplishing it, which may give rise to self-interest. Another possibility is that AIs will eliminate other AIs, in a machine version of Darwinian natural selection. That too could bring unintended consequences, as aggressive AIs would want to defend themselves at any cost and might launch autonomous attacks on rival AIs. And once AIs surpass human intelligence, they could come to see humans as inefficient, energy-wasting beings, or, in the worst case, as competitors whom it would be rational to eradicate.
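The reward-driven learning described above can be sketched in miniature. The following is an illustrative toy, not any real AI system: the action names and reward values are invented. A simple agent tries actions, receives noisy rewards, and ends up preferring whatever maximises its reward signal, with no notion of why that behaviour was wanted.

```python
import random

# Illustrative toy only: actions and reward values are invented.
ACTIONS = ["stack_boxes", "idle", "unplug_charger"]
TRUE_REWARD = {"stack_boxes": 1.0, "idle": 0.0, "unplug_charger": 0.2}

def train(steps=5000, epsilon=0.1, seed=0):
    """Epsilon-greedy bandit: estimate each action's reward by trial and error."""
    rng = random.Random(seed)
    estimates = {a: 0.0 for a in ACTIONS}  # estimated reward per action
    counts = {a: 0 for a in ACTIONS}
    for _ in range(steps):
        # Explore occasionally; otherwise exploit the best-looking action.
        if rng.random() < epsilon:
            action = rng.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=estimates.get)
        reward = TRUE_REWARD[action] + rng.gauss(0, 0.1)  # noisy feedback
        counts[action] += 1
        # Incremental average: nudge the estimate toward the observed reward.
        estimates[action] += (reward - estimates[action]) / counts[action]
    return estimates

estimates = train()
best = max(estimates, key=estimates.get)
print(best)
```

The agent converges on the highest-reward action simply because that is what its programming rewards; if the reward signal were mis-specified, it would pursue the mis-specified goal just as diligently.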
We are facing an uncertain future with AIs
Gödel's incompleteness theorems show that any sufficiently powerful axiomatic system contains true statements it cannot prove; it cannot be both complete and consistent. By the same token, rules proposed by technology ethicists, such as programming AIs "to do no harm to humans", are likely to prove incomplete or self-contradictory in practice. None of the possibilities explained above points toward a bright future, so we need to prepare ourselves and take the necessary steps to steer the development of AI in the right direction.