In my previous post, I made a distinction between “Strong” AI and “Weak” AI and then went on to describe how Weak AI works. I explained that Large Language Models such as ChatGPT convert words to numbers and then calculate the statistically most probable words and sentences to follow any given input. That numerical output is then translated back into text and displayed on your screen. As a result, words (and language) don’t exist for computers. Computers can’t read in any sense comparable to human beings. They are always functioning like very big and fast calculators.
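To make that recap concrete, here is a deliberately tiny Python sketch of the loop I’m describing. The word list, the counting, and the “model” are invented for illustration only; a real LLM uses a learned neural network over tens of thousands of tokens rather than a hand-built frequency table. But the overall shape is the same: words become numbers, the numbers feed a statistical calculation, and the most probable result is turned back into words.

```python
# A toy sketch of the "words -> numbers -> probabilities -> words" loop.
# The corpus and counts are invented for illustration; this is not how
# any production language model actually works.

from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# 1. Convert words to numbers (token IDs).
vocab = {word: i for i, word in enumerate(dict.fromkeys(corpus))}
ids = [vocab[w] for w in corpus]

# 2. Count which token tends to follow which -- a crude stand-in for the
#    statistical model an LLM learns during training.
follows = defaultdict(Counter)
for prev, nxt in zip(ids, ids[1:]):
    follows[prev][nxt] += 1

# 3. Given a prompt, pick the statistically most probable next token.
id_to_word = {i: w for w, i in vocab.items()}
prompt = vocab["the"]
next_id, _ = follows[prompt].most_common(1)[0]

# 4. Translate the numbers back into text for the screen.
print(id_to_word[prompt], id_to_word[next_id])  # prints "the cat"
```

At no point does the program “know” what a cat or a mat is; it only shuffles numbers, which is the point I was making about Weak AI.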
This post will be about Strong AI, or the possibility of a machine intelligence gaining human-like consciousness, becoming self-aware, and then acting on its own autonomous will. I’ve been thinking about this possibility since the first Matrix film came out at the end of March 1999. Sometime in 1999, I published my first short essay about The Matrix, and then that fall, I started graduate school. My second semester, Spring 2000, I took Cassandra Laity’s literary theory class and read Baudrillard, whose Simulacra and Simulation was required reading for cast members in The Matrix. I wrote my final paper for that class on Baudrillard and the first Matrix film, an essay I revised a number of times after each new installment in the series appeared. The final version of that essay was published in the International Journal of Baudrillard Studies in July of 2005 (after getting shot down by PMLA).
Besides my own interest in the film — and I’m embarrassed to say how many times my family and I saw The Matrix at the Oviedo Mall when it first came out (more than 5, less than 10) — once I realized The Matrix was a retelling of Mary Shelley’s Frankenstein, I began to see these retellings everywhere. Metropolis. A.I. Bicentennial Man. Stealth (Frankenplane!). I, Robot. And then later, Ex Machina, which intrigued me enough to write about it for Sequart. Ex Machina drew me in because it explored the intersections between the Frankenstein story and sexual politics, an important part of Shelley’s novel but missing from every other retelling I can recall offhand. Regardless of specific thematic content, these stories all ended unhappily, either for man or machine or, often, for both, and I wanted to know why. Why do we consistently imagine that if humanity were to create an artificial consciousness like ourselves, the story would end tragically?
In my dissertation, I called this fear of the possibilities inherent in our own creations “creation anxiety,” and I eventually revised my dissertation into my first book, Blake and Kierkegaard: Creation and Anxiety. My intention was to cover stories reflecting creation anxiety from British poet and printmaker William Blake’s The [First] Book of Urizen in the 1790s through Shelley’s Frankenstein, R.U.R., Metropolis, and The Matrix, but I never made it past William Blake. Blake’s Urizen books seem to me to be seminal works in English literature on the failure of a subcreation, so I used Blake’s mythology as the basis of my examination of creation and Kierkegaard’s The Concept of Anxiety to provide a relevant and appropriately complex concept of anxiety.
I quickly realized that the possibility of actually creating an artificial consciousness wasn’t the real focus of my study. My study focused instead on human reactions to that possibility: our consistently expressed anxieties about what it would mean for human beings to create an artificial consciousness. But I would like to look through the telescope the other way this time and focus on the possibility of an artificial sentience itself, especially how it is imagined in different works of fiction, mostly drawn from film and television. And I would like to start with the caveat that I don’t understand how this technology works. Not because I’m technologically illiterate, but because it doesn’t exist and has never existed, so no one knows. Not only that, we have such a hard time defining human consciousness that it’s hard to explain what it would look like in artificial form.
Many of those who speak authoritatively about Strong AI, or about a machine attaining consciousness, are often confidently lying to you about what they know and probably also lying to themselves. They might think they know, but they don’t, or they know they don’t know and are choosing to lie about it just to position themselves as some kind of “thought leader” (a euphemism for “salesman,” “con artist,” or “liar”). The fact is we haven’t seen Strong AI yet, so we can’t know what form it might take, and if it somehow ever were to happen, we probably wouldn’t recognize it. When a CEO of a tech firm recently said in an interview that current AI isn’t capable of consciousness, he added that if it appeared, it would be like an alien intelligence. He only meant that it would be unrecognizable to human beings as a consciousness, but some people are now ludicrously using AI as an acronym for “alien intelligence.” My article on Ex Machina emphasized this state of not-knowing: in the film, the CEO who finally created a Strong AI did so through a series of artificial creations stylized as human women, which only emphasized the lack of real women in that man’s life. My point was that this man doesn’t understand human women well enough to have a long-term relationship with just one, so how can he understand an artificial consciousness?
We can know, however, what form we imagine an artificial consciousness might take, and I think our imaginations teach us quite a bit about human consciousness if nothing else, which is the first step in understanding what a machine consciousness might look like and, ultimately, why it will very likely never exist. In my overview of creation anxiety stories, I’ve found that almost all of them follow either gnostic or organic paradigms. If “brain” is the gray matter in our heads and “mind” is our conscious self-awareness, the gnostic paradigm imagines that brain is equivalent to mind and that the only mind is the brain. Since in this model the brain is an organic electrical device, a wet CPU of some kind, if we could duplicate the electrical patterns in the brain, we could duplicate human consciousness, and similarly, if a computer’s electrical patterns started resembling those of a human brain, it too would attain consciousness. The TV series Black Mirror is the clearest advocate for the gnostic paradigm that I’ve encountered so far. In several episodes, a living person’s brain patterns are literally copied into artificial environments, some of them small enough to hang on a keychain, and the copies sometimes even exist within another person’s head, alongside the original consciousness. In each case, the newly created consciousness is a literal duplicate of the original, although from that point it begins to have its own experiences and create its own memories.
Most creation anxiety stories are working with a version of this gnostic paradigm. The Matrix, A.I., Metropolis, Black Mirror, Stealth, The Terminator, and many other films work from the premise that a purely mechanical-electrical device somehow attains consciousness. But a few work from an organic paradigm. Bicentennial Man does and perhaps has the most benign, though sad, ending of them all. In this film, Robin Williams plays a robot named Andrew who seeks to be recognized by the human world as a person, not just a machine. The film makes many of the same narrative moves as the other retellings — humans resist, are close-minded, there’s some hostility — but catastrophe is averted because Andrew never quits trying to gain acceptance by humanity as a person (Frankenstein’s Creature, on the other hand, did quit and started murdering people). Andrew begins with a metallic body that is gradually rebuilt into an artificially developed organic body. In the end, Andrew realizes he must accept mortality in order to be fully human, so he takes the last step — a full transfusion of human blood — which will allow him to age and then die. The human race finally recognizes him as a person on the day that he dies. Immediately after he dies, in fact, so that he has lived a fully human life, one characterized by a lifelong pursuit of an unfulfilled goal.
Bicentennial Man is an example of the organic model for artificial consciousness because it recognizes the importance of the body to sentience. It doesn’t really explore why the body is important to sentience, though — just why it’s important to human acceptance and recognition — because Andrew wanted that recognition before he had anything even resembling a human body. But there’s another, similar character arc that’s much more suggestive of how and why the human body is important to sentience: the character Data from the television series Star Trek: The Next Generation and related films. Data, like Andrew, is an advanced robotic being with a “positronic brain,” an idea Isaac Asimov developed for I, Robot and that reappears in Bicentennial Man (another Asimov brainchild, so to speak) and in a number of other films and television series Asimov didn’t write, including Star Trek. Data initially lives without emotion but somehow has a great “curiosity” about human beings and their emotional lives. Early in the series he makes it his goal to understand more and more about human experience so that he can become more human.
Data’s development throughout the television series and subsequent films takes a number of dramatic turns. First, he installs an “emotion chip,” a component built for him that, once activated, allows him to experience human emotions. Until that point he was very much like Mr. Spock from the original series: more a walking logic machine than a person. Data’s character is then used within the series to comment on the experience and development of human emotion with alternating humor and pathos — a great development for the show. The next turn occurs later in Data’s history, when the Borg board the Enterprise and capture Data. Since Data is already an entirely artificial being, in order to transform him into a cybernetic organism (a cyborg, or “Borg”), they have to graft human skin onto him, covering part of his head and arm. That, combined with the emotion chip, was the most radical step in his transformation from machine to person. The Borg were ultimately defeated when Data released a chemical agent that consumed organic flesh, which meant he had to sacrifice his own newly acquired skin. But he said afterwards that he had never felt so close to being human as when he had skin.
I think at this point Star Trek has provided its most important insight into human consciousness and placed the difference between gnostic and organic models of consciousness in stark relief. There’s more to consciousness than the human brain. It’s not just a series of electrical patterns in a meat CPU. Consciousness is a function of the entire body, as the brain is extended throughout the body via the nervous system, so that consciousness is not isolated in the head. Through the body and its sense organs, consciousness exists in a continual state of intercourse with its external environment: we are never not hearing and feeling, at the least, and probably never not smelling either. For this reason, I believe our organic nature is essential to our consciousness. Because we’re continually aware of our immersion in an external world, so much so that our external worlds are inextricably and inescapably a part of our internal environments, we can eventually become aware of our difference from the external world. At that point, we become persons.
But this development only occurs because of our organic bodies. Light and temperature sensors aren’t skin and eyes. Sensors provide information, but they don’t connect the device to its environment in an inescapably self-defining way. Because of that, no machine will ever gain sentience.
I know you’re thinking, “But at some point in the future, why couldn’t….?”
No. A bigger and faster calculator will never be more than a calculator, no matter how fast it works.
And now you might think I’m being close-minded, like the evil or at least unlikeable characters in Star Trek and other films. I suspect you want to be open to the possibility of the gnostic model being true because of your ignorance, not your knowledge. The human brain is a big black box, and so are computers, so you think anything in one big black box could just as well be in another.
Now just go back and read through this essay again and ask yourself if all big black boxes are really alike.
Why isn’t the organic good enough? I believe an ancient text sheds some light on this question:
Their idols are silver and gold, the work of men’s hands.
They have mouths, but they speak not: eyes have they, but they see not:
They have ears, but they hear not: noses have they, but they smell not:
They have hands, but they handle not: feet have they, but they walk not: neither speak they through their throat.
They that make them are like unto them; so is every one that trusteth in them.
Psalm 115:4-8
This post is part three of a three-part series. You can read Parts I and II here as well.