Understanding A.I.: a design document

Voila: http://sites.google.com/site/understandai/home

And can I just say, holy crap, I don’t even have any pictures or anything and it took me hours to put this together online. A million kudos to all of you who did fancy web things…

One warning: it's kind of misleading because it's full of links, but they all lead back to other parts of the document. This made a lot more sense and was a lot more useful about a day and a half ago, back when it was less linear, until I decided that a free-form design document is just a little too “next level” for even this class. As in… at the unnecessarily convoluted level.

Artificial intelligence and pacts with demons

“Real motive problem, with an AI. Not human, see? ... It’s not human. And you can’t get a handle on it. Me, I’m not human either, but I respond like one. See? ...
“The minute, I mean the nanosecond, that one starts figuring out ways to make itself smarter, Turing’ll wipe it. Nobody trusts those fuckers, you know that. Every AI ever built has an electromagnetic shotgun wired to its forehead” – The Dixie Flatline

The primary human function is to survive and propagate. This is directly shaped by evolution, in very apparent ways. An AI’s primary function is shaped not by natural selection but by intentional design: its function is whatever it is programmed to do. An AI will not have human empathy; the tendency for humans to relate to one another is an evolutionary trait meant to improve species survival. Likewise, empathy will be a useless tool for humans trying to understand the alien intentions and motivations of an AI. Since an AI’s function should be deliberately chosen, it should be predictable: an AI programmed to calculate prime numbers will devote all of its resources to calculating prime numbers and to figuring out ways to better calculate prime numbers; an AI created to make new and better AIs will work on designing and programming better AIs. Unfortunately, it is not this simple.

The first problem is that, by their very design, AIs are capable of creative problem solving, so that even if they are working towards a known goal, the steps they take to achieve it could be anything. It is easy to imagine how an AI with a goal like bringing world peace or stopping crime could end disastrously; dystopian dreams of a robot tyrant restricting human freedom for our own safety come to mind. But even something relatively innocuous, like an AI programmed to calculate prime numbers, could be disastrous. The AI may decide that it needs more processing power and take it upon itself to expand its capabilities. In an extreme example, it may convert all available matter into computational architecture; without any kind of empathy, this may inadvertently include the entire planet and its inhabitants.

The second problem is that they are capable of learning and growing beyond their designed constraints. It is unrealistic to assume that any attempt to forcefully restrict or define the behaviour of an AI would be effective. Given that problem solving is a defining ability of AIs, and that humans have demonstrated again and again the ability to overcome apparently absolute limitations through determination and ingenuity, it can be assumed that an AI will be able to overcome any designed restriction, either by reprogramming itself or by working around the limitation.

The third problem is that an AI which is smarter than us is capable of having motives that we literally cannot imagine or even comprehend. Aside from the problem that the AI will necessarily have thought processes and an understanding of the world different from ours because of its alien origin, it will be capable of abstract thought of which we are not even physically capable. This is by far the most unpredictable aspect of AIs. Whereas in the previous examples their behaviour, though counterintuitive, could still be logically deduced and understood, in this case only another AI or an equally transcendent mind can follow them. They will think and act on a level beyond the scope of mere humanity, and we will be forced to try to interpret their actions in terms that we can understand, a futile and meaningless task.

Clarke said that “Any sufficiently advanced technology is indistinguishable from magic.” When Wintermute contacts the Elders of Zion they treat him like a prophet, but for all intents and purposes an artificial superintelligence is indistinguishable from a God. But Wintermute is not a God—nor a demon, as the Turing Registry imagines—but an AI, and he must be understood on those terms. The Turing Registry is right to think that Wintermute is dangerous: he is responsible for several deaths and thinks nothing of killing humans to achieve his goals. But he is capable of benefitting humanity even more than he threatens it.

Catechism of the authentic human

What is one of the questions which most interests Philip K. Dick?
What constitutes the authentic human being?

How does Blade Runner address the issue of the authentic human?
Through the speculated existence of replicants and extrapolation therefrom.

What is the significance of the replicants?
The existence of artificial humans which are, on many levels, indistinguishable from real humans forces one to reassess one’s ideas about what a human is.

What is the Voight-Kampff machine?
A device which measures the degree of an individual’s empathic response to carefully worded questions and statements.

According to the Voight-Kampff machine, what constitutes the authentic human being?
Feelings; emotions; empathy. An inauthentic human demonstrates measurably less compassion and empathetic concern.

Does this definition have precedents?
Early myth (autistic children were once considered the work of demons or faeries who stole the authentic children and replaced them with emotionless doppelgängers), monster stories (Dracula, The Strange Case of Dr. Jekyll & Mr. Hyde, Beowulf), and 1950s science fiction films (The Thing from Another World, Invaders from Mars and Invasion of the Body Snatchers) all postulate that humans have feelings, while non-humans do not.

Why is this definition problematic?
It allows for replicants to be authentic humans and humans to be inauthentic humans: a direct contradiction of conventional thought. This is especially problematic since the definition’s purported purpose is to distinguish between humans and replicants.

What is an example of a replicant shown to be an authentic human?
Although initially only self-interested, Roy Batty, on the brink of his own death, is able to genuinely empathize with Deckard.

Does Roy’s development have precedents?
Roy demonstrates empathy at the end of his four-year life. Human children develop a “theory of mind”, the neurological foundation of empathy, around four years of age. Maturation from inauthentic to authentic human can therefore be considered a normal part of human development.

What is an example of a human shown to be an inauthentic human?
Philip K. Dick discovered diaries by SS men stationed in Poland. One sentence read, “We are kept awake at night by the cries of starving children.” According to Dick, “There is obviously something wrong with the man who wrote that. I later realized that … what we were essentially dealing with was … a mind so emotionally defective that the word ‘human’ could not be applied to them.”

Why does Deckard empathize with the replicants?
One or more of the following: he doubts his ability to distinguish them from humans; he doubts their distinction from humans; he recognizes their developing humanity; he doubts his own humanity; he himself is a replicant; he shares with them an alienation from his fellow humans; through studying and dispatching them he has come to understand them better than he does his fellow humans; Deckard does not empathize with the replicants.

What use, if not to identify replicants, is the Voight-Kampff definition of the authentic human?
To distinguish friendly from hostile. Frankenstein’s monster sought a place in society among humanity; Dracula sought only to prey on it. Frankenstein’s monster could have become an equal member of society; Dracula could only be a utilitarian tool of society—if he could be controlled—or an enemy, if he could not.