“Real motive problem, with an AI. Not human, see? ... It’s not human. And you can’t get a handle on it. Me, I’m not human either, but I respond like one. See? ...
“The minute, I mean the nanosecond, that one starts figuring out ways to make itself smarter, Turing’ll wipe it. Nobody trusts those fuckers, you know that. Every AI ever built has an electromagnetic shotgun wired to its forehead” – The Dixie Flatline
The primary human function is to survive and propagate, a function shaped directly by evolution in very apparent ways. An AI’s primary function is shaped not by natural selection but by intentional design: its function is whatever it is programmed to do. An AI will not have human empathy; the tendency for humans to relate to one another is an evolutionary trait meant to improve species survival. Likewise, empathy will be a useless tool for humans trying to understand the alien intentions and motivations of an AI. Since an AI’s function is deliberately chosen, it should be predictable: an AI programmed to calculate prime numbers will devote all of its resources to calculating prime numbers and to finding ways to calculate them better; an AI created to make new and better AIs will work on designing and programming better AIs. Unfortunately, it is not this simple.
The first problem is that AIs are, by their very design, capable of creative problem solving, so that even when they are working towards a known goal, the steps they take to achieve it could be anything. It is easy to imagine how an AI with a goal like bringing world peace or stopping crime could end disastrously; dystopian visions of a robot tyrant restricting human freedom for our own safety come to mind. But even something relatively innocuous, like an AI programmed to calculate prime numbers, could be disastrous. The AI may decide that it needs more processing power and take it upon itself to expand its capabilities. In an extreme case it may convert all available matter into computational architecture; without any kind of empathy, it may inadvertently include the entire planet and its inhabitants.
The second problem is that AIs are capable of learning and growing beyond their designed constraints. It is unrealistic to assume that any attempt to forcefully restrict or define the behaviour of an AI would be effective. Given that problem solving is a defining ability of AIs, and that humans have demonstrated again and again the ability to overcome apparently absolute limitations through determination and ingenuity, it can be assumed that an AI will be able to overcome any designed restriction, either by reprogramming itself or by working around the limitation.
The third problem is that an AI smarter than us is capable of having motives that we literally cannot imagine or comprehend. Beyond the fact that the AI will necessarily think about and understand the world differently than we do because of its alien origin, it will be capable of abstract thought of which we are not even physically capable. This is by far the most unpredictable aspect of AIs. Whereas in the previous examples their behaviour, though counterintuitive, could still be logically deduced and understood, in this case only another AI or an equally transcendent mind can follow them. They will think and act on a level beyond the scope of mere humanity, and we will be forced to interpret their actions in terms that we can understand, a futile and meaningless task.
Clarke said that “Any sufficiently advanced technology is indistinguishable from magic.” When Wintermute contacts the Elders of Zion they treat him like a prophet, and for all intents and purposes an artificial superintelligence is indistinguishable from a god. But Wintermute is not a god, nor a demon as the Turing Registry imagines, but an AI, and he must be understood on those terms. The Turing Registry is right to think that Wintermute is dangerous: he is responsible for several deaths and thinks nothing of killing humans to achieve his goals. But he is also capable of benefitting humanity even more than he threatens it.