AI in the Classroom

I’ve long been an early technology adopter. I started building webpages and websites in the 90s, used turnitin.com not just for plagiarism checking but for providing rich feedback on student papers in the very early 2000s, and around that time had students write live blogs on a course website as part of their writing requirements. I didn’t catch up to the iPhone until the iPhone 3, but I owned a first-generation iPad that I used for years, as well as a first-gen Apple Watch that I used until last year. Additionally, I’ve been teaching and building online courses and curriculum since January 2008 as part of my regular course load. I’m not particularly excited about technology, just curious. I like to see what it can do, what results it can produce, and I’ve been doing so for almost my entire 23-year college teaching career.

I’ve witnessed a number of very naive attitudes toward technology in the classroom during this time as well: institutional requirements to use it with no clear sense of purpose or pedagogical goals attached; college presidents threatening to fire faculty who didn’t use technology in the classroom; instructors so desperate for recognition that they position themselves as innovators because they use Prezis; administrators so similarly desperate that they buy whatever some salesman pitched because he said the words “student centered” and “innovative” in the same sentence; and on and on. Overall, institutional attitudes toward the use of technology in learning have more often than not been uncritical, unfocused, and directionless: people think they must use it, but they have no clear sense of why, either because they don’t understand the technology, or how people interact with it, or even how teaching works. Most often, they don’t understand any of the three.

I would like to say I’m no Luddite. At least not yet. I do not at present fear or hate technology. And I know of people who have benefitted from different kinds of educational technology immensely. One woman with dyslexia comes to mind, who once shared with me that specific computer programs helped her manage her dyslexia so she could make it through her Ph.D. program. Additionally, some forms of technology are ubiquitous in the workplace, so students need to develop fluency with them while in college: Microsoft Office, for example, or the Adobe Creative Suite, or AutoCAD. Some programs of study are focused on developing proficiency with the technology itself, such as Radiology programs, and some uses of technology are eminently practical, like keeping student grade books in a learning management system of some kind so that students can see their grades at any given time just by logging in. So I’m not in any way advocating for the elimination of technology from education as some kind of ideal. That’s not only unrealistic, but it’s undesirable.

I’d like to consider other uses of technology, though: uses that aren’t absolutely necessary for the field or the workplace but are supposed to provide some kind of pedagogical benefit. That’s an entirely different use of technology in the classroom. Many of the previously mentioned programs or tech were inherent to the field: proficiency with the technology itself, in those cases, is among the instructional goals, which is why we take the time to teach the technology itself. But this other kind of use involves teaching something else with the technology rather than teaching the technology itself. It’s using technology to teach another subject that’s not at all dependent upon the technology.

In those cases, we need to seriously consider our use of the technology, because the technology is always a barrier between the student and course goals: the student must move through the technology to achieve course goals even though the technology isn’t inherently necessary to the course goals. You can of course respond by saying, “Then make them inherent to the course goals!”, but that’s missing the point. If we don’t have to, why should we? Can you answer that question in any detail beyond trivial generalities about technology in the classroom?

Let’s get specific: I have students use Microsoft Word all semester long in my writing and literature classes. Word is so ubiquitous in the workplace that I don’t mind doing so, and I take the time to provide some instruction in Word to teach students how to format documents in different ways. But using Word has nothing to do with the course goals of a writing or literature course. I could spend the rest of my teaching career using printed books, pencil, and paper in my classroom and not feel like I’m sacrificing my pedagogical goals for the course: the course is really about developments in cognition brought about by intensive reading of difficult, creative texts and by students grappling with expressing their own ideas about them. And in between all of that, in every literature class, students are ultimately interpreting a person of some kind: a fictional person, usually, but still a person, and what field does not require us to interpret people almost all day long?

I teach my students that writing is a skill, and as such, you only develop it with practice. I tell my students that I can’t teach them how to write by talking to them while they passively listen. I do indeed lecture about writing, but the lecture by itself isn’t enough. My sixth grade baseball coach taught his team how to swing a bat with a video, but he knew the video by itself didn’t teach us how to swing a bat. The lecture and the video were the beginning of the instructional process, so I didn’t really learn how to hit a baseball until I practiced it, especially with my coach giving me corrections at first. I had to do it to learn it, just like people who learn to play a musical instrument spend hours practicing scales. Writing is a skill like that: you learn it by doing it.

But, in the end, I use Word in the classroom because it’s a useful tool and many students in many fields will need to use Word somehow in their future careers, even if only to write a résumé and cover letter. But what about other uses of technology? Do you really need them? Will students spend more time trying to master the technology than mastering the course material? If they do, what’s the payoff for what you’re sacrificing? How often do we even stop to ask these questions, much less answer them?

And now we come to AI. If you’ve been following the discussion of ChatGPT since its release last year, you’ve seen quite a bit of hysteria and mystification about this program across social media: it’s either going to be the end of teaching as we know it or will revolutionize teaching forever; it will be the end of humanity or transform humanity forever; it’s an alien intelligence; it’s the singularity. It’s none of these things and will do none of these things: ChatGPT is in fact a big, fast calculator for which words in any human sense do not and cannot exist.

It is, however, a very impressive calculator and can indeed do quite a bit very quickly, so it’s a potentially useful tool and, like all other potentially useful tools, a potentially dangerous one. Not that it will attain consciousness and turn on us, but that we might rely on it in ways we shouldn’t, with unexpected, undesirable results. The immediate “danger” to writing instruction is that students will use it to plagiarize. While ChatGPT tends to lose coherence beyond roughly 500-800 words, a series of consecutive prompts can generate papers of typical student length, and those papers are, of course, easily detectable by a number of services or even just an attentive instructor: ChatGPT writes in an easily identifiable voice.

A more careful student, “Owen Kichizo Terry” (a name I assume is a pseudonym for a real undergraduate at Columbia University, but maybe not), describes a less detectable (or undetectable) use of ChatGPT in “I’m a Student. You Have No Idea How Much We’re Using ChatGPT”: the student prompts the program for an outline and then writes the essay himself, following that outline. It’s unclear to me how much work the student is really saving short of having to come up with an idea of his own. He still has to write the entire paper. Near the end of his essay, he claims he sees a number of students doing the same thing, saying

At any given time, I can look around my classroom and find multiple people doing homework with the help of ChatGPT. We’re not being forced to think anymore.

People worry that ChatGPT might “eventually” start rendering major institutions obsolete. It seems to me that it already has.

https://www.chronicle.com/article/im-a-student-you-have-no-idea-how-much-were-using-chatgpt

There are of course a number of errors in the student’s thinking, but I’d like to say first that he’s a student. If he’s a first-year undergraduate, he’s just about young enough to be my grandson. So I’m inclined to give this student a pass; not on cheating, but on having a bunch of wrong, frankly idiotic ideas. We all do when we’re 18. That’s fine.

His first wrong idea is that he doesn’t have to do his own thinking anymore. Filling out the outline with his own version of that content requires him to think. He’s exempt from developing a thesis and supporting ideas in the form of bullet points, but he’s not exempt from thinking, as developing a fully written paper even from a preexisting thesis will inevitably require his own thinking.

It’s also prima facie ridiculous to think that because he can cheat his way through first-year writing, Columbia University is now obsolete. He’s being dramatic, of course: he has no idea how rigorous the learning becomes further up the food chain, or of the important research being carried out that he will never see.

Next, he’s mistaken (but probably not alone in this) in thinking that his is a novel form of plagiarism made possible by ChatGPT. You don’t need a computer to commit this kind of plagiarism, just a library. Find an obscure book that hasn’t been checked out in twenty years, outline part of its argument, and then write a paper based on that outline. Preventing this kind of plagiarism is one of the reasons why we have qualified, well-read faculty: at one time we believed there shouldn’t be a paper or book out there that the faculty member hasn’t read if it’s in his or her field, so sharp faculty members would recognize these ideas from their previous reading and catch the student. But you know what the student still has to do with this kind of plagiarism? Read a book, understand it, and then write a paper. Of course an AI generated thesis may not be identifiable from prior reading (or it may), but it’s still essentially the same form of plagiarism, and I also have to wonder how often the AI will repeat itself, and why it shouldn’t.

The student is also mistaken in thinking he’s representative of many students in the country. I’ve spoken to faculty in other fields who are beginning to incorporate ChatGPT in their instruction, and they report that students seem afraid of the technology. Ivy League students and instructors tend to presume that they represent students across the country: what blessed ignorance. Teach at a community college for a year and get back to me. I don’t think use of ChatGPT to cheat in quite this way is widespread at present. It’s probably more common among those who feel privileged, entitled, and under a great deal of pressure to perform at a high level, all of which characterize students at elite institutions more than at others. All students feel pressure to perform at some time: students I’ve caught plagiarizing often did so for this reason, but the prevalence of cheating varies greatly by institution.

The student does make some good suggestions for defeating this kind of plagiarism:

If education systems are to continue teaching students how to think, they need to move away from the take-home essay as a means of doing this, and move on to AI-proof assignments like oral exams, in-class writing, or some new style of schoolwork better suited to the world of artificial intelligence.

https://www.chronicle.com/article/im-a-student-you-have-no-idea-how-much-were-using-chatgpt

But he’s mistaken in thinking that we haven’t all already thought about it, or that we aren’t already doing it. Oral exams and in-class writing are already widely used and have been for years. Decades. Literally, centuries. At the doctoral level, these assessments (in the form of qualifying exams and then the dissertation) are often used to gauge the student’s knowledge, to ensure that the student possesses this knowledge him or herself. Can we use them more often? Some instructors certainly could. I certainly could.

The student, being a kid, seems oblivious to the fact that no one will feel inclined to respect his opinions once he’s admitted that he plagiarizes his papers regularly, but he does seem concerned that we do something about it, which is commendable. But, why doesn’t he? What kind of entitlement compels him to cheat just because he can get away with it? Does it gratify him to feel smarter than his teachers by defeating the prompts and breaking the rules? This is all very childish thinking, but then again, we’re dealing with a child — but one, I should say, who already writes very well. He is sadly the ignorant beneficiary of an educational system that has left him, right out of high school, with skills more highly developed than most students in the country. But still, his highly qualified and accomplished college teachers do not need his advice. There’s probably very little that he’s said that they haven’t already considered.

The real tragedy of plagiarism remains unsaid: if writing, reading, and thinking are skills that are only developed through practice, plagiarism is an act by which students rob themselves of the benefit of their education: the knowledge and skills gained, the cognitive development. Students are spending thousands of dollars — tens of thousands of dollars — to deprive themselves of their own education, and in the end they will pay for that loss themselves.

He’s not cheating his teachers, his school, or his parents, just himself, and that is the one thing that he, and all other students, need to know about plagiarism, and something that everyone needs to consider before incorporating any kind of technology in the classroom. Absolutely teach the tech itself if your field demands it. But don’t teach using the tech until you’ve asked some difficult questions first.

AI and Talking Heads, Part III: Why Sentient Machines Will Never Exist

In my previous post, I made a distinction between “Strong” AI and “Weak” AI and then went on to describe how Weak AI works. I explained that Large Language Models such as ChatGPT convert words to numbers and then calculate the statistically most probable words and sentences that will follow any given words and sentences. This set of numerical values is then translated back into text and displayed on your screen. As a result, words (and language) don’t exist for computers. Computers can’t read in any sense comparable to human beings. They are always functioning like very big and fast calculators.

This post will be about Strong AI, or about the possibility of a machine intelligence gaining human-like consciousness, becoming self-aware, and then acting on its own autonomous will. I’ve been thinking about this possibility since the first Matrix film came out at the end of March 1999. Sometime in 1999 I published my first short essay about The Matrix, and then that fall, I started graduate school. My second semester, Spring 2000, I took Cassandra Laity’s literary theory class and read Baudrillard, whose Simulacra and Simulation was required reading for cast members in The Matrix. I wrote my final paper for this class on Baudrillard and the first Matrix film, an essay I revised a number of times after each new installment in the series appeared. My final version of that essay was published in the International Journal of Baudrillard Studies in July of 2005 (after getting shot down by PMLA).

Besides my own interest in the film — and I’m embarrassed to say how many times my family and I saw The Matrix at the Oviedo Mall when it first came out (more than 5, less than 10) — once I realized The Matrix was a retelling of Mary Shelley’s Frankenstein, I began to see these retellings everywhere. Metropolis. A.I. Bicentennial Man. Stealth (Frankenplane!). I, Robot. And then later, Ex Machina, which intrigued me enough to write about it for Sequart. Ex Machina drew me in because it explored the intersections between the Frankenstein story and sexual politics, which was an important part of Shelley’s novel but missing from all other retellings that I can recall offhand. Regardless of specific thematic content, these stories all ended unhappily, either for man or machine or, often, for both, and I wanted to know why. Why do we consistently imagine that if humanity were to create an artificial consciousness like ourselves, the end of this story would be tragic?

In my dissertation, I called this fear of the possibility of our own creations “creation anxiety,” and I eventually revised my dissertation into my first book, Blake and Kierkegaard: Creation and Anxiety. My intention was to cover stories reflecting creation anxiety from British poet and printmaker William Blake’s The [First] Book of Urizen in the 1790s through Shelley’s Frankenstein, R.U.R., Metropolis, and The Matrix, but I never made it past William Blake. Blake’s Urizen books seem to me to be seminal works in English literature on the failure of a subcreation, so I used Blake’s mythology as the basis of my examination of creation and Kierkegaard’s The Concept of Anxiety to provide a relevant, applicable, and complex concept of anxiety.

I quickly realized that the possibility of actually creating an artificial consciousness wasn’t the real focus of my study. My study focused instead on human reactions to that possibility, or our own consistently expressed anxieties about the possibility of human beings creating an artificial consciousness. But I would like to look through the telescope the other way this time and focus on the possibility of an artificial sentience itself, especially how it is imagined in different works of fiction, mostly drawing from film and television. And I would like to start with the caveat that I don’t understand how this technology works. Not because I’m technologically illiterate, but because it doesn’t exist and has never existed, so no one knows. Not only that, we have such a hard time defining human consciousness that it’s hard to explain what that would look like in artificial form.

Many of those who speak authoritatively about Strong AI, or about a machine attaining consciousness, are quite often confidently lying to you about their knowledge and probably also lying to themselves. They might think they know, but they don’t, or they know they don’t know and are choosing to lie about it just to position themselves as some kind of “thought leader” (euphemism for “salesman,” “con artist,” or “liar”). The fact is we haven’t seen Strong AI yet, so we can’t know what form it might take, and if somehow it ever were to happen, we probably wouldn’t recognize it. When a CEO of a tech firm recently said in an interview that current AI isn’t capable of consciousness, he added that it would be like an alien intelligence if it appeared. He only meant that it would be unrecognizable to human beings as a consciousness, but some people are ludicrously using AI as an acronym for “alien intelligence.” My article on Ex Machina emphasized this state of not-knowing: in the film, the CEO who finally created a Strong AI did so through a series of artificial creations stylized as human women, which only emphasized the lack of real women in that man’s life. My point was that this man doesn’t understand human women well enough to have a long term relationship with just one, so how can he understand an artificial consciousness?

We can know, however, what form we imagine an artificial consciousness might take, and I think our imaginations teach us quite a bit about human consciousness if nothing else, which is the first step in understanding what a machine consciousness might look like and, ultimately, why it will very likely never exist. In my overview of creation anxiety stories, I’ve found that almost all of them follow either gnostic or organic paradigms. If “brain” is the gray matter in our heads and “mind” is our conscious self-awareness, the gnostic paradigm imagines that brain is equivalent to mind and that the only mind is the brain. Since in this model the brain is an organic electrical device, a wet CPU of some kind, if we could duplicate the electrical patterns in the brain, we could duplicate human consciousness, and similarly, if a computer’s electrical patterns started resembling that of a human brain, it too would attain consciousness. The TV series Black Mirror is the clearest advocate for the gnostic paradigm that I’ve encountered so far. In several episodes, people’s brain patterns are literally copied from a living person into artificial environments, some of them small enough to hang on a keychain, and they sometimes even exist within another person’s head, alongside the original consciousness. In each case, the newly created consciousness is a literal duplicate of the original’s consciousness, although from that point it begins to have its own experiences and create its own memories.

Most creation anxiety stories are working with a version of this gnostic paradigm. The Matrix, A.I., Metropolis, Black Mirror, Stealth, The Terminator, and many other films work from the premise that a purely mechanical-electrical device somehow attains consciousness. But a few work from an organic paradigm. Bicentennial Man does and perhaps has the most benign, though sad, ending of them all. In this film, Robin Williams plays a robot named Andrew who seeks to be recognized by the human world as a person, not just a machine. The film makes many of the same narrative moves as any other film — humans resist, are close minded, there’s some hostility — but catastrophe is averted because Andrew never quits trying to gain acceptance by humanity as a person (Frankenstein’s Creature, on the other hand, did quit and started murdering people). Andrew begins with a metallic body which is gradually rebuilt into an artificially developed organic body. In the end, Andrew realizes he must accept mortality in order to be fully human, so he takes the last step — a full transfusion of human blood — which will allow him to age and then die. The human race finally recognizes him as a person on the day that he dies. Immediately after he dies, in fact, so that he lived a fully human life, one characterized by a lifetime pursuit of an unfulfilled goal.

Bicentennial Man is an example of the organic model for artificial consciousness because it recognizes the importance of the body to sentience. It doesn’t really explore why the body is important to sentience, though — just why it’s important to human acceptance and recognition — because Andrew wanted that recognition before he had anything even resembling a human body. But there’s another, similar character arc that’s much more suggestive of how and why the human body is important to sentience: the character Data from the television series Star Trek: The Next Generation and related films. Data, like Andrew, is an advanced robotic being with a “positronic brain,” an idea developed by Isaac Asimov for I, Robot and reused in Bicentennial Man (another Asimov brainchild, so to speak), Star Trek, and a number of other films and television series, including some not written by Asimov. Data, initially, lives without emotion but somehow has great “curiosity” about human beings and their emotional lives. Early in the series he makes it his goal to understand more and more about human experience so that he can become more human.

Data’s development throughout the television series and subsequent films takes a number of dramatic turns. First, he installs an “emotion chip,” a component built by his creator that, once installed and activated, allows him to experience human emotions. Until that point he was very much like Mr. Spock from the original series: more like a walking logic machine than a person. Data’s character is then used within the series to comment on the experience and development of human emotion with alternating humor and pathos — a great development for the show. The next turn occurs later in Data’s history, when the Enterprise is boarded by the Borg and Data is captured. Since Data is already an entirely artificial being, in order to transform him into a cybernetic organism (cyborg or “Borg”), they have to graft human skin onto him, covering part of his head and arm. That, combined with Data’s emotion chip, was the most radical step in his transformation from machine to person. The Borg were ultimately defeated when Data released a chemical agent that consumed organic flesh, which meant he had to sacrifice his own newly acquired skin. But he said afterwards that he had never felt so close to being human as when he had skin.

I think at this point Star Trek has provided its most important insight into human consciousness and placed the difference between gnostic and organic models of consciousness in stark relief. There’s more to consciousness than the human brain. It’s not just a series of electrical patterns in a meat CPU. Consciousness is a function of the entire body, as the brain is extended throughout the body via the nervous system, so that consciousness is not isolated in the head. Through the body and its sense organs, consciousness exists in a continual state of intercourse with its external environment: we are never not hearing and feeling, at the least, and probably never not smelling either. For this reason, I believe our organic nature is essential to our consciousness. Because we’re continually aware of our immersion in an external world, so much so that our external worlds are inextricably and inescapably a part of our internal environments, we can eventually become aware of our difference from the external world. At that point, we become persons.

But this development only occurs because of our organic bodies. Light and temperature sensors aren’t skin and eyes. Sensors provide information, but they don’t connect the device to its environment in an inescapably self-defining way. Because of that, no machine will ever gain sentience.

I know you’re thinking, “But at some point in the future, why couldn’t….?”

No. A bigger and faster calculator will never be more than a calculator, no matter how fast it works.

And now you might think I’m being close minded, like the evil or at least unlikeable characters in Star Trek and other films. I suspect you want to be open to the possibility of the gnostic model being true because of your ignorance, not your knowledge. The human brain is a big black box, and so are computers, so you think anything in one big black box could just as well be in another. 

Now just go back and read through this essay again and ask yourself if all big black boxes are really alike.

Why isn’t the organic good enough? I believe an ancient text sheds some light on this question:

Their idols are silver and gold, the work of men’s hands.
They have mouths, but they speak not: eyes have they, but they see not:
They have ears, but they hear not: noses have they, but they smell not:
They have hands, but they handle not: feet have they, but they walk not: neither speak they through their throat
They that make them are like unto them; so is every one that trusteth in them.

Psalm 115:4-8

This post is part three of a three-part series. You can read Parts I and II here as well.

AI and Talking Heads, Part II: Why Machines Can’t Read

I’d like to talk about a distinction between two kinds of artificial intelligence (AI): “Strong” AI and “Weak” AI. Strong AI is AI that has attained artificial consciousness — the machine has become sentient and thinks for itself in a way comparable to a human being. Strong AI has been the subject of numerous films and works of fiction since Czech writer Karel Čapek’s 1920 play R.U.R. That play coined the word “robot,” derived from the Czech word “robota,” which means forced labor or drudgery. Strong AI will be the subject of my next post. I’m going to write about Weak AI here.

Weak AI is artificial intelligence that produces an output that resembles something a human being would produce, but it’s one that of course does not have any kind of human consciousness. So you can multiply 6 x 7 in your head and produce an answer (“56,” right?), and so can a computer (but it’d probably say “42”). There’s no question that we’re dealing with an inert object in the case of Weak AI no matter how much processing power is behind it or how complex the tasks that it can perform. It’s just a machine running a program. And the most important thing about AI text generators such as ChatGPT is that the machine is literally incapable of reading.

I’d like to preface my remaining comments by saying that I don’t understand how Large Language Models such as ChatGPT work. I can code in HTML and XML, and I understand some of the basics of computer hardware and software, but I couldn’t write or debug the code for any Large Language Model, so I prefer to say that I don’t understand how the technology works. I think more people writing about ChatGPT need to preface their posts with that caveat. What I will do is summarize what other people with that kind of competence have said about how Large Language Models work, some of which I’m taking from the Digital Humanist listserv, a Google Group dedicated to Digital Humanities, which is the use of computer technology in humanities scholarship. Most of what I’m going to say, however, is widely known. I should also add that what I say below is a very simplified account of technology that has developed in complicated ways, but I believe it still presents an accurate general picture.

Almost all computers work by processing bits, which are basic units of information represented by a 1 or a 0. The 1 and 0 signify on and off positions within the computer’s hardware that are used to function as true/false or yes/no states. Moving up the chain, a byte is generally made up of 8 bits, and it was originally defined as the number of bits needed to represent a single character of text (say, a letter or number) in a computer. So a single letter is made up of a series of eight 1s and 0s, and words are made up of strings of these 1s and 0s put together. You can see the strings in the “bin” column of an ASCII table: capital A, for example, is 01000001. The word “dog” would be 011001000110111101100111.
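
As a quick illustration (a minimal Python sketch of my own, not anything drawn from the post’s sources), you can reproduce those binary strings directly from each character’s ASCII value:

    # Render text as the 8-bit binary strings described above.
    # ord() returns a character's ASCII code point; format(..., "08b") writes it as 8 bits.
    def to_bits(text):
        return "".join(format(ord(ch), "08b") for ch in text)

    print(to_bits("A"))    # 01000001
    print(to_bits("dog"))  # 011001000110111101100111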

Now you can imagine the kind of processing power needed if a computer were to analyze millions of words character by character, especially including every punctuation mark and space. To simplify the process and reduce the processing power needed, computer programs that analyze text are trained to break it into smaller “meaning units” (words, parts of words such as -ing and -ly endings in English, and punctuation), and short sequences of these units are called n-grams. So “the cat sat on the mat” might have n-grams associated with “the cat,” “cat sat,” “sat on,” “on the,” “the mat,” etc. Large Language Models are trained to recognize these n-grams in a very large collection of text (say, all of Wikipedia), associating each one with a numerical value and then calculating the statistical probability of the next word (or n-gram) based on the ones that came before it. What’s really advanced about ChatGPT and other Large Language Models is that they calculate the statistical probability not just of the next word, but of the next sentence or sentences, based on their training. It’s a massively powerful program.
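
To make that idea concrete, here is a toy sketch of my own (far simpler than any real Large Language Model, and not how ChatGPT is actually built): it counts word-pair (bigram) frequencies in a tiny training text and then predicts the statistically most probable next word.

    # Toy bigram "language model": count which word follows which in a tiny corpus,
    # then predict the most probable next word. Real LLMs are vastly more complex,
    # but the basic move is the same: counting and calculating, not reading.
    from collections import Counter, defaultdict

    corpus = "the cat sat on the mat the cat sat on the chair".split()

    following = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        following[prev][nxt] += 1   # tally each observed word pair

    def predict_next(word):
        # Return the most frequently observed next word from training.
        return following[word].most_common(1)[0][0]

    print(predict_next("the"))   # "cat" (seen twice, vs. "mat" and "chair" once each)
    print(predict_next("sat"))   # "on"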

Now take a step back and look at the entire process: words or bits of words or small groups of words are turned into numbers, and numbers are turned into sequences of 1s and 0s, and the rest is calculation.
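
Put as a sketch (again, my own toy illustration, not ChatGPT’s actual implementation), the whole loop looks something like this: words go in as integers, the prediction step in the middle is pure arithmetic over those integers, and only at the very end are the output integers converted back into text.

    # Toy illustration of the words -> numbers -> calculation -> words loop.
    # The "model" here is just a hard-coded table of made-up probabilities;
    # a real LLM would compute them from its training, conditioned on the context.
    vocab = {"the": 0, "cat": 1, "sat": 2, "on": 3, "mat": 4}
    id_to_word = {i: w for w, i in vocab.items()}

    context = [vocab[w] for w in "the cat sat on the".split()]   # words -> numbers

    # Hypothetical probabilities for the next token, keyed by token ID, not by word.
    next_token_probs = {4: 0.7, 1: 0.2, 2: 0.1}

    predicted_id = max(next_token_probs, key=next_token_probs.get)  # pure calculation
    print(id_to_word[predicted_id])   # numbers -> word: "mat"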

At what point does any of this resemble reading in the normal human sense, where words are associated with physical things in the world, or with sensations, emotions, or concepts? Or memory, or a combination of all of these? Actually, nowhere. ChatGPT is a big calculator trained to convert words to numbers, run a statistical calculation to find the most probable numbers to come next, and then spit out numbers that are translated back into text output. At no point does it have any concept of meaning in any human sense of the word. As a powerful iteration of Weak AI, though, it does a great job of resembling a human speaker.

And that is why computers can’t read. There is literally no understanding of the text in a human sense because words as words, as language, don’t exist for computers.

This post is part two of a three-part series. You can read Parts I and III here as well.

AI and Talking Heads

Handwringing, panic, and palm sweating have surrounded public discussion of ChatGPT since OpenAI announced its release in November 2022. Since then, “thought leaders” have been excitedly making declarations about this new technology, claiming that it’s inaugurating the end of humanity, that it’ll end education or completely transform it, that it is a kind of consciousness or has attained independent thought, and most recently, that it’s more like an alien intelligence than a human intelligence (never mind that we’ve never met an alien intelligence, so how can we compare?).

I’ve been studying the possibility of human creations attaining independent will and consciousness — becoming intelligent in a human sense and what that would mean — since about 1999. I first published on the topic around 2001 and published a book-length study of the topic in 2008. I’ll go into the history of that study later, but I’d like to preface my comments here with a very important disclaimer:

I don’t know how this technology works.

Before you rush in with an Oh but I do, let’s think about what understanding this technology really means:

  • Can you write any of the code that makes it work?
  • Can you read the code (different chunks of it, anyway) and understand what it’s set up to do without being told in advance?
  • If it broke, could you fix it?
  • Do you even know much of anything about how computers work between the keyboard and the screen? Even if you can answer “yes” to this question, do you really know anything about programming large language models?

I can read descriptions of how this technology works as well as anyone. I understand them. I can repeat them back to you in my own words. That’s different from understanding how this technology works. I have some proficiency writing HTML (but so what? One of my sons developed that proficiency in middle school) and XML. So before you read commentary on ChatGPT, ask yourself whether the person writing it actually knows anything about this technology, or whether they’re only trying to sound like they know what they’re talking about. Remember that we’re in a media environment that rewards attention-getting headlines, and thought leaders only become thought leaders by generating these headlines.

Imagine how different our media environment would look if everyone had to preface what they wrote with an honest declaration of their knowledge of the subject. How many articles and editorials about ChatGPT would start with the phrase, “I don’t know how this technology works”? I’m asking because I think most of them should.

I’m going to be writing more about this topic later. Over my own twenty years of study of this topic, I’ve learned that my real focus of attention is not on the possibility of machines attaining consciousness in something like a human sense, but on what people say about technology and how they react to it, both individually and socially. I will be writing what I know about, in other words.

But I don’t know how this technology works.

This post is part one of a three-part series. You can read Parts II and III here as well.

High Fidelity, Then and Now

High Fidelity then starred John Cusack as an immature, self-absorbed record store owner who, because of a recent, painful breakup with his girlfriend Laura (perf. Iben Hjejle), goes on a tour of self-discovery through conversations with all of his ex-girlfriends. He handles his breakup, which is the opening scene of the film, with the grace and maturity of a five-year-old who just broke his favorite toy, and that metaphor becomes more and more apt the more time viewers spend with his character.

I won’t say that High Fidelity then, in 2000, was a perfectly paced or acted film. But it was well put together, taking the form at times of a video diary through fourth-wall-breaking scenes. The confessional character of these scenes generates empathy for Cusack’s character, Rob Gordon, as do his interactions with his two employees (including Jack Black), whom he hired part time but who then showed up every day, all day long, just because they like hanging out at the record store. The film also encourages a kind of surrogate empathy through Rob’s interaction with a long-time friend, Liz, performed by John Cusack’s real-life sister, Joan Cusack. Through the first third of the film, we might think Rob’s character is a bit childish and narcissistic, but we still empathize with his seemingly genuine pain and his attempts to understand himself.

These strategies turn out to be a brilliant setup for increasingly horrifying revelations about the things that Rob has done to the women in his life. Rob rages about Laura’s breakup, moving back and forth between his desire for her and wanting to break it off, but we only find out why she broke up with him about halfway through the film: he admitted to cheating on her around the same time that she was going to tell him she was pregnant, leading her to get an abortion. We only discover these details after Rob’s friend Liz does, who in a great scene tells him off in a way only a real-life sister could. Just watch how she walks into the room.

Rob’s fourth wall breaking reaction? Yeah, maybe I should have mentioned that.

Unbelievably, even this revelation wasn’t the low point for me. Rob’s total narcissism becomes apparent when he finally works down the list to his first girlfriend. Talking to her now, he realizes she didn’t break up with him, but he broke up with her — because she wasn’t ready for sex. In their meeting, she admits to him that his breakup left her so scarred she didn’t have sex for years, and when she finally did, it felt like a rape: she was just too tired to fight the guy off anymore, so while she didn’t want it, technically, she said, she consented. Horrifying? Yes. Absolutely. But somehow it gets worse. After Rob hears this story, he immediately — immediately — moves on to this insight: she didn’t break up with him! He broke up with her! So there was nothing wrong with him! Worst of all, he never revisits his complete, almost inhuman lack of empathy for this woman. He just moves on.

That’s when Rob’s total narcissism hit me. Rob’s narcissism isn’t a developmental stage, a personality quirk, something he’ll outgrow. At this point in his life, it’s his defining characteristic. Viewers learn Rob’s history with his first girlfriend about the same time Laura starts showing renewed interest in Rob. By this point, I’m not rooting for him anymore, I have zero empathy for this character, and I wish Laura would run away screaming and never look back. But, no. She’s interested in him. When he finally wins her back, he has experienced some growth, but he’s still a fundamentally narcissistic character.

So when he proposes to her near the end of the film, she laughs and says, “No.” There’s no explanation. She just knows he’s too big a manchild to be ready for marriage, but he has convinced her of his commitment to her, so she’s willing to give him some time. The film assumes viewers understand her reaction, and she stays with him. She’s also encouraging him to engage in growth behaviors: producing an album for a couple of talented teenagers who stole records from his store, and then starting to DJ, so that he’s doing something himself and not just trading in other people’s creativity. She understands him in ways he doesn’t understand himself.

High Fidelity now (2020) was a single-season remake of the film that was cancelled after a ten-episode run. It’s very hard to compare the two because the full story was never told: a ten-episode series arc still didn’t complete the narrative arc covered in the film. The series was cancelled just as the main character, Rob Gordon (perf. Zoë Kravitz), starts to face her own narcissism, but it doesn’t follow the story through to the end.

The 2020 series substitutes a thirtysomething black woman for a thirtysomething white man. The series’ opening scene is almost shot for shot and word for word a reshoot of the opening scene of the film, except a black man is leaving a black female record store owner instead of a white woman leaving a white male record store owner. She even slouches in her chair the same way Cusack’s Gordon does.

There’s a lot to chew on here. Are they really interchangeable in how they would react to a breakup? Kravitz doesn’t perform cultural blackness the way her friend/employee Cherise (perf. Da’Vine Joy Randolph) does. I’m not complaining about the change in character — Kravitz does a great job, and I enjoyed watching her in this series. But are there really no sex and race differences at play here? Is Rob Gordon as a white man interchangeable with Rob Gordon as a black woman of the same age?

I can’t completely answer that question because Kravitz pulls it off, and I was disappointed that the series was cancelled before the whole story could be told. It’s possible, as one Guardian reviewer said, that the problem with the series was that it stretched a 2.5-hour film into a 5-hour series that still wasn’t finished telling its story. I wish they’d completed the story arc in their ten episodes. I suspect that the completion of the film’s arc was meant to be left for season 2, ending with a reconciliation and a marriage proposal that was refused, and that season 3 might have led up to a marriage. A child in season 4? I have to ask how much mileage you can get out of a reformed narcissist. Marvel’s Loki had to reform much more quickly than that within the space of his first season.

My only hint of an answer to the question about the interchangeability of these actors comes from another review, which I won’t link here. This well-intentioned reviewer complained about the series’ cancellation and argued for its return this way: we need to keep High Fidelity the series because Kravitz’s character was “so much more likeable” than Cusack’s, who was a horrible person. My first reaction to this review was to think, “And that’s why we can’t have nice things.” The horrible narcissism of Cusack’s Rob Gordon was the point: it’s a film about narcissism, immaturity, and growing out of it. It’s a film about what men do to women and why. I think that is where sex and race differences come into play: Kravitz as the main character, as a black woman, was not allowed to be that completely horrible. She took this character a long way, don’t get me wrong. But at least as of the ending we saw, she never hit that low.

But I don’t think she ever would have hit bottom quite that low. In a small narrative arc around mid-season, Kravitz’s Rob Gordon is offered a unicorn of a record collection by an angry ex-wife for only $20.00. Not $20.00 per album: $20.00 for the entire collection, which contained individual albums worth well over $2,000 in some cases (such as the Beatles’ butcher-cover version of Yesterday and Today) and had to be worth at least close to ten thousand dollars in total. Rob couldn’t accept. She tried. She tracked down the white male husband at his favorite bar, was treated rudely and dismissively by him even while he was getting things wrong, and she still couldn’t accept the offer because “music is for everyone.” If you’re not sure about this scene, by the way, Michelle, the owner of Savvy Vinyl Records in Melbourne, FL, who told me to watch this film (and who couldn’t believe I hadn’t seen it, so she ordered me to watch it), has related a number of these stories herself over the few short years I’ve known her. I think I recall witnessing one of these conversations myself.

On the one hand, this decision says quite a bit about Kravitz’s version of this character and her core beliefs, especially as a record store owner. On the other hand, in this series a black woman was absolutely not allowed to get the upper hand on a white man who completely deserved it. Cusack’s Rob Gordon probably would have taken the deal, but I’m not sure. I totally would have taken the deal. I think this scene is the product of our expectations of black women if they’re going to be leading characters. We can accept a narcissistic white male as a lead character because of… all of human history, but a black woman had better be empathetic if she’s going to be the center of a TV series. These narrative decisions are also the product of our review and audience environment. People, in general, don’t understand the point of characters whose flaws border on evil unless they die in the end.

Maybe she would have been this character by the end, this flawed and awful. Maybe the series would have had the courage to go this way. We can never know.
