Oscar Wilde’s “De Profundis”

A Cento by James Rovira

Suffering is one

Time itself does not progress | It revolves.
It circles round one center of pain.

There is only one season | the season of sorrow.
It is always twilight in this prison cell and in this heart.

In the sphere of time and thought | motion
is no more.

I say to myself that I ruined myself,
that nobody great or small can be ruined
except by his own hand.

Suffering is permanent, obscure, and dark
and has the nature of infinity.

The only people I would care to be with
are artists and people who have suffered:
those who know what beauty is,
those who know what sorrow is:

no one else interests me.

When you really want love
you will find it waiting
for you.

Ballad of a Selfish Boy

O what can ail thee, knight-at-arms
Alone and palely loitering?
The sedge has withered from the lake,
And no birds sing.

O what can ail thee, knight-at-arms,
So haggard and so woe-begone?
The squirrel’s granary is full,
And the harvest’s done.

I see a lily on thy brow,
With anguish moist and fever-dew,
And on thy cheeks a fading rose
Fast withereth too.

I met a lady in the meads,
Full beautiful—a faery’s child,
Her hair was long, her foot was light,
And her eyes were wild.

I made a garland for her head,
And bracelets too, and fragrant zone;
She looked at me as she did love,
And made sweet moan

I set her on my pacing steed,
And nothing else saw all day long,
For sidelong would she bend, and sing
A faery’s song.

She found me roots of relish sweet,
And honey wild, and manna-dew,
And sure in language strange she said—
‘I love thee true’.

She took me to her Elfin grot,
And there she wept and sighed full sore,
And there I shut her wild wild eyes
With kisses four.

And there she lullèd me asleep,
And there I dreamed—Ah! woe betide!—
The latest dream I ever dreamt
On the cold hill side.

I saw pale kings and princes too,
Pale warriors, death-pale were they all;
They cried—‘La Belle Dame sans Merci
Thee hath in thrall!’

I saw their starved lips in the gloam,
With horrid warning gapèd wide,
And I awoke and found me here,
On the cold hill’s side.

And this is why I sojourn here,
Alone and palely loitering,
Though the sedge is withered from the lake,
And no birds sing.

Yes, it’s John Keats’s “La Belle Dame Sans Merci” (1819), but I think it’s better titled “Ballad of a Selfish Boy” because she ghosted him (ha, literally) and he never once thought of how she was feeling, just what he lost.

AI in the Classroom

I’ve long been an early technology adopter. I started building webpages and websites in the 90s, used turnitin.com not just for plagiarism checking but for providing rich feedback on student papers in the very early 2000s, and around that time had students write live blogs on a course website as part of their writing requirements. I didn’t catch up to the iPhone until the iPhone 3, but I owned a first generation iPad that I used for years, as well as a first gen Apple Watch that I used until last year. Additionally, I’ve been teaching and building online courses and curricula since January 2008 as part of my regular course load. I’m not particularly excited about technology, just curious. I like to see what it can do, what results it can produce, and I’ve been doing so for almost my entire 23-year college teaching career.

I’ve witnessed a number of very naive attitudes toward technology in the classroom during this time as well: institutional requirements to use it with no clear sense of purpose or pedagogical goals attached; college presidents threatening to fire faculty who didn’t use technology in the classroom; instructors so desperate for recognition that they position themselves as innovators because they use Prezis; administrators so similarly desperate that they buy technology because some salesman said the words “student centered” and “innovative” in the same sentence; and on and on. Overall, institutional attitudes toward the use of technology in learning have been more often uncritical, unfocused, and directionless than not: people think they must use it, but they have no clear sense of why, either because they don’t understand the technology, or how people interact with it, or even how teaching works. Most often, it’s all three.

I would like to say I’m no Luddite. At least not yet. I do not at present fear or hate technology. And I know of people who have benefitted from different kinds of educational technology immensely. One woman with dyslexia comes to mind, who once shared with me that specific computer programs helped her manage her dyslexia so she could make it through her Ph.D. program. Additionally, some forms of technology are ubiquitous in the workplace, so students need to develop fluency with them while in college: Microsoft Office, for example, or the Adobe Creative Suite, or AutoCAD. Some programs of study are focused on developing proficiency with the technology itself, such as Radiology programs, and some uses of technology are eminently practical, like keeping student grade books in a learning management system of some kind so that students can see their grades at any given time just by logging in. So I’m not in any way advocating for the elimination of technology from education as some kind of ideal. That’s not only unrealistic, but it’s undesirable.

I’d like to consider other uses of technology, though: uses that aren’t absolutely necessary for the field or the workplace but are supposed to provide some kind of pedagogical benefit. That’s an entirely different use of technology in the classroom. Many of the previously mentioned programs or tech were inherent to the field: proficiency with the technology itself, in those cases, is among the instructional goals, which is why we take the time to teach the technology itself. But this other kind of use involves teaching something else with the technology rather than teaching the technology itself. It’s using technology to teach another subject that’s not at all dependent upon the technology.

In those cases, we need to seriously consider our use of the technology, because the technology is always a barrier between the student and course goals: the student must move through the technology to achieve course goals even though the technology isn’t inherently necessary to the course goals. You can of course respond by saying, “Then make them inherent to the course goals!”, but that’s missing the point. If we don’t have to, why should we? Can you answer that question in any detail beyond trivial generalities about technology in the classroom?

Let’s get specific: I have students use Microsoft Word all semester long in my writing and literature classes. Word is so ubiquitous in the workplace that I don’t mind doing so, and I take the time to provide some instruction in Word to teach students how to format documents different ways. But using Word has nothing to do with the course goals of a writing or literature course. I could spend the rest of my teaching career using printed books, pencil, and paper in my classroom and not feel like I’m sacrificing my pedagogical goals for the course: the course is really about developments in cognition brought about by intensive reading of difficult, creative texts and students grappling with expressing their own ideas about them. And in between all of that, in every literature class, students are ultimately interpreting a person of some kind: a fictional person, usually, but still a person, and what field does not require us to interpret people almost all day long?

I teach my students that writing is a skill, and as such, you only develop it with practice. I tell my students that I can’t teach them how to write by talking to them while they passively listen. I do indeed lecture about writing, but the lecture by itself isn’t enough. My sixth grade baseball coach taught his team how to swing a bat with a video, but he knew the video by itself didn’t teach us how to swing a bat. The lecture and the video were only the beginning of the instructional process: I didn’t really learn how to hit a baseball until I practiced it, especially with my coach giving me corrections at first. I had to do it to learn it, just like people who learn to play a musical instrument spend hours practicing scales. Writing is a skill like that: you learn it by doing it.

But, in the end, I use Word in the classroom because it’s a useful tool and many students in many fields will need to use Word somehow in their future careers, even if only to write a résumé and cover letter. But what about other uses of technology? Do you really need it? Will students spend more time trying to master the technology than master the course material? If they do, what’s the payoff for what you’re sacrificing? How often do we even stop to ask these questions, much less answer them?

And now we come to AI. If you’ve been following discussion of ChatGPT since its release last year, there’s been quite a bit of hysteria and mystification about this program across social media: it’s either going to be the end of teaching as we know it or will revolutionize teaching forever; it will be the end of humanity or transform humanity forever; it’s an alien intelligence; it’s the singularity. It’s none of these things and will do none of these things: ChatGPT is in fact a big, fast calculator for which words in any human sense do not and cannot exist.

It is, however, a very impressive calculator and can indeed do quite a bit very quickly, so it’s a potentially useful tool and, like all other potentially useful tools, a potentially dangerous one. Not that it will attain consciousness and turn on us, but that we might rely on it in ways we shouldn’t, with unexpected, undesirable results. The immediate “danger” to writing instruction is that students will use it to plagiarize. While ChatGPT’s coherence tends to break down after about 500-800 words, a series of consecutive prompts can be used to generate papers of typical student length, and those papers are, of course, easily detectable by a number of services or even just an attentive instructor: ChatGPT writes in an easily identifiable voice.

A more careful student, “Owen Kichizo Terry,” which I assume is a pseudonym for a real undergraduate student at Columbia University (but maybe not), in “I’m a Student. You Have No Idea How Much We’re Using ChatGPT” describes a less detectable (or undetectable) use of ChatGPT: the student provides prompts for outlines and then writes an essay following the outline. It’s unclear to me how much work the student is really saving short of having to come up with an idea of his own. He still has to write the entire paper. Near the end of his essay, he claims he sees a number of students doing the same thing, saying

At any given time, I can look around my classroom and find multiple people doing homework with the help of ChatGPT. We’re not being forced to think anymore.

People worry that ChatGPT might “eventually” start rendering major institutions obsolete. It seems to me that it already has.


There are of course a number of errors in the student’s thinking, but I’d like to say first that he’s a student. If he’s a first year undergraduate, he’s just about young enough to be my grandson. So I’m inclined to give this student a pass; not on cheating, but on having a bunch of wrong, frankly idiotic ideas. We all do when we’re 18. That’s fine.

His first wrong idea is that he doesn’t have to do his own thinking anymore. Filling out the outline with his own version of that content requires him to think. He’s exempt from developing a thesis and supporting ideas in the form of bullet points, but he’s not exempt from thinking, as developing a fully written paper even from a preexisting thesis will inevitably require his own thinking.

It’s also prima facie ridiculous to think that because he can cheat his way through first year writing, Columbia University is now obsolete. He’s being dramatic, of course: he has no idea how rigorous the student learning becomes further up the food chain, or how much important research is being carried out that he will never see.

Next, he’s mistaken (but probably not alone in this) in thinking that his is a novel form of plagiarism made possible by ChatGPT. You don’t need a computer to commit this kind of plagiarism, just a library. Find an obscure book that hasn’t been checked out in twenty years, outline part of its argument, and then write a paper based on that outline. Preventing this kind of plagiarism is one of the reasons why we have qualified, well-read faculty: at one time we believed there shouldn’t be a paper or book in a faculty member’s field that he or she hadn’t read, so sharp faculty members would recognize these ideas from their previous reading and catch the student. But you know what the student still has to do with this kind of plagiarism? Read a book, understand it, and then write a paper. Of course an AI generated thesis may not be identifiable from prior reading (or it may), but it’s still essentially the same form of plagiarism, and I also have to wonder how often the AI will repeat itself, and why it shouldn’t.

The student is also mistaken in thinking he’s representative of many students in the country. I’ve spoken to faculty in other fields who are beginning to incorporate ChatGPT into their instruction, and they’re reporting that students seem afraid of the technology. Ivy League students and instructors tend to presume that they represent students across the country: what blessed ignorance. Teach at a community college for a year and get back to me. I don’t think this particular use of ChatGPT to cheat is widespread at present. It’s probably more common among those who feel privileged, entitled, and under a great deal of pressure to perform at a high level, all of which characterize students at elite institutions more than at others. All students feel pressure to perform at some time: students I’ve caught plagiarizing often did so for this reason, but the prevalence of cheating varies greatly by institution.

The student does make some good suggestions for defeating this kind of plagiarism:

If education systems are to continue teaching students how to think, they need to move away from the take-home essay as a means of doing this, and move on to AI-proof assignments like oral exams, in-class writing, or some new style of schoolwork better suited to the world of artificial intelligence.


But he’s mistaken in thinking that we haven’t all already thought about it, or that we aren’t already doing it. Oral exams and in-class writing are already widely used and have been for years. Decades. Literally, centuries. At the doctoral level, these assessments (in the form of qualifying exams and then the dissertation) are often used to gauge the student’s knowledge, to ensure that the student possesses this knowledge him or herself. Can we use them more often? Some instructors certainly could. I certainly could.

The student, being a kid, seems oblivious to the fact that no one will feel inclined to respect his opinions once he’s admitted that he plagiarizes his papers regularly, but he does seem concerned that we do something about it, which is commendable. But, why doesn’t he? What kind of entitlement compels him to cheat just because he can get away with it? Does it gratify him to feel smarter than his teachers by defeating the prompts and breaking the rules? This is all very childish thinking, but then again, we’re dealing with a child — but one, I should say, who already writes very well. He is sadly the ignorant beneficiary of an educational system that has left him, right out of high school, with skills more highly developed than most students in the country. But still, his highly qualified and accomplished college teachers do not need his advice. There’s probably very little that he’s said that they haven’t already considered.

The real tragedy of plagiarism remains unsaid: if writing, reading, and thinking are skills that are only developed through practice, plagiarism is an act by which students rob themselves of the benefit of their education: the knowledge and skills gained, the cognitive development. Students are spending thousands of dollars — tens of thousands of dollars — to deprive themselves of their own education, and in the end they will pay for that loss themselves.

He’s not cheating his teachers, his school, or his parents, just himself, and that is the one thing that he, and all other students, need to know about plagiarism, and something that everyone needs to consider before incorporating any kind of technology in the classroom. Absolutely teach the tech itself if your field demands it. But don’t teach using the tech until you’ve asked some difficult questions first.

AI and Talking Heads, Part III: Why Sentient Machines Will Never Exist

In my previous post, I made a distinction between “Strong” AI and “Weak” AI and then went on to describe how Weak AI works. I explained that Large Language Models such as ChatGPT convert words to numbers and then calculate the statistically most probable words and sentences that will follow any given words and sentences. This set of numerical values is then translated back into text and displayed on your screen. As a result, words (and language) don’t exist for computers. Computers can’t read in any sense comparable to human beings. They are always functioning like very big and fast calculators.

This post will be about Strong AI, or about the possibility of a machine intelligence gaining human-like consciousness, becoming self aware, and then acting out on its own autonomous will. I’ve been thinking about this possibility since the first Matrix film came out at the end of March 1999. Some time in 1999 I published my first short essay about The Matrix, and then that fall, I started graduate school. My second semester, Spring 2000, I attended Cassandra Laity’s literary theory class and read Baudrillard, whose Simulacra and Simulation was required reading for cast members in The Matrix. I wrote my final paper for this class on Baudrillard and the first Matrix film, an essay I revised a number of times after each new installment in the series appeared. My final version of that essay was published in the International Journal of Baudrillard Studies in July of 2005 (after getting shot down by PMLA).

Besides my own interest in the film — and I’m embarrassed to say how many times my family and I saw The Matrix at the Oviedo Mall when it first came out (more than 5, less than 10) — once I realized The Matrix was a retelling of Mary Shelley’s Frankenstein, I began to see these retellings everywhere. Metropolis. A.I. Bicentennial Man. Stealth (Frankenplane!). I, Robot. And then later, Ex Machina, which intrigued me enough to write about it for Sequart. Ex Machina drew me in because it explored the intersections between the Frankenstein story and sexual politics, which was an important part of Shelley’s novel but missing from all other retellings that I can recall offhand. Regardless of specific thematic content, these stories all ended unhappily, either for man or machine or, often, for both, and I wanted to know why. Why do we consistently imagine that if humanity were to create an artificial consciousness like ourselves, the end of this story would be tragic?

In my dissertation, I called this fear of the possibility of our own creations “creation anxiety,” and I eventually revised my dissertation into my first book, Blake and Kierkegaard: Creation and Anxiety. My intention was to cover stories reflecting creation anxiety from British poet and printmaker William Blake’s The [First] Book of Urizen in the 1790s through Shelley’s Frankenstein, R.U.R., Metropolis, and The Matrix, but I never made it past William Blake. Blake’s Urizen books seem to me to be seminal works in English literature on the failure of a subcreation, so I used Blake’s mythology as the basis of my examination of creation and Kierkegaard’s The Concept of Anxiety to provide a relevant, applicable, and complex concept of anxiety.

I quickly realized that the possibility of actually creating an artificial consciousness wasn’t the real focus of my study. My study focused instead on human reactions to that possibility, or our own consistently expressed anxieties about the possibility of human beings creating an artificial consciousness. But I would like to look through the telescope the other way this time and focus on the possibility of an artificial sentience itself, especially how it is imagined in different works of fiction, mostly drawing from film and television. And I would like to start with the caveat that I don’t understand how this technology works. Not because I’m technologically illiterate, but because it doesn’t exist and has never existed, so no one knows. Not only that, we have such a hard time defining human consciousness that it’s hard to explain what that would look like in artificial form.

Many of those who speak authoritatively about Strong AI, or about a machine attaining consciousness, are quite often confidently lying to you about their knowledge and probably also lying to themselves. They might think they know, but they don’t, or they know they don’t know and are choosing to lie about it just to position themselves as some kind of “thought leader” (euphemism for “salesman,” “con artist,” or “liar”). The fact is we haven’t seen Strong AI yet, so we can’t know what form it might take, and if somehow it ever were to happen, we probably wouldn’t recognize it. When a CEO of a tech firm recently said in an interview that current AI isn’t capable of consciousness, he added that it would be like an alien intelligence if it appeared. He only meant that it would be unrecognizable to human beings as a consciousness, but some people are ludicrously using AI as an acronym for “alien intelligence.” My article on Ex Machina emphasized this state of not-knowing: in the film, the CEO who finally created a Strong AI did so through a series of artificial creations stylized as human women, which only emphasized the lack of real women in that man’s life. My point was that this man doesn’t understand human women well enough to have a long term relationship with just one, so how can he understand an artificial consciousness?

We can know, however, what form we imagine an artificial consciousness might take, and I think our imaginations teach us quite a bit about human consciousness if nothing else, which is the first step in understanding what a machine consciousness might look like and, ultimately, why it will very likely never exist. In my overview of creation anxiety stories, I’ve found that almost all of them follow either gnostic or organic paradigms. If “brain” is the gray matter in our heads and “mind” is our conscious self-awareness, the gnostic paradigm imagines that brain is equivalent to mind and that the only mind is the brain. Since in this model the brain is an organic electrical device, a wet CPU of some kind, if we could duplicate the electrical patterns in the brain, we could duplicate human consciousness, and similarly, if a computer’s electrical patterns started resembling that of a human brain, it too would attain consciousness. The TV series Black Mirror is the clearest advocate for the gnostic paradigm that I’ve encountered so far. In several episodes, people’s brain patterns are literally copied from a living person into artificial environments, some of them small enough to hang on a keychain, and they sometimes even exist within another person’s head, alongside the original consciousness. In each case, the newly created consciousness is a literal duplicate of the original’s consciousness, although from that point it begins to have its own experiences and create its own memories.

Most creation anxiety stories are working with a version of this gnostic paradigm. The Matrix, A.I., Metropolis, Black Mirror, Stealth, The Terminator, and many other films work from the premise that a purely mechanical-electrical device somehow attains consciousness. But a few work from an organic paradigm. Bicentennial Man does and perhaps has the most benign, though sad, ending of them all. In this film, Robin Williams plays a robot named Andrew who seeks to be recognized by the human world as a person, not just a machine. The film makes many of the same narrative moves as any other film — humans resist, are close minded, there’s some hostility — but catastrophe is averted because Andrew never quits trying to gain acceptance by humanity as a person (Frankenstein’s Creature, on the other hand, did quit and started murdering people). Andrew begins with a metallic body which is gradually rebuilt into an artificially developed organic body. In the end, Andrew realizes he must accept mortality in order to be fully human, so he takes the last step — a full transfusion of human blood — which will allow him to age and then die. The human race finally recognizes him as a person on the day that he dies. Immediately after he dies, in fact, so that he lived a fully human life, one characterized by a lifetime pursuit of an unfulfilled goal.

Bicentennial Man is an example of the organic model for artificial consciousness because it recognizes the importance of the body to sentience. It doesn’t really explore why the body is important to sentience, though — just why it’s important to human acceptance and recognition — because Andrew wanted that recognition before he had anything even resembling a human body. But there’s another, similar character arc that’s much more suggestive of how and why the human body is important to sentience: the character Data from the television series Star Trek: The Next Generation and related films. Data, like Andrew, is an advanced robotic being with a “positronic brain,” an idea Isaac Asimov developed for I, Robot that was reused in Bicentennial Man (another Asimov brainchild, so to speak), Star Trek: The Next Generation, and a number of other films and television series Asimov didn’t write. Data, initially, lives without emotion, but somehow has great “curiosity” about human beings and their emotional lives. Early in the series he makes it his goal to understand more and more about human experience so that he can become more human.

Data’s development throughout the television series and subsequent films takes a number of dramatic turns. First, he installs an “emotion chip” built for him by his creator that, once activated, allows him to experience human emotions. Until that point he was very much like Mr. Spock from the original series: more like a walking logic machine than a person. Data’s character is then used within the series to comment on the experience and development of human emotion with alternating humor and pathos — a great development for the show. The next turn occurs later in Data’s history, when the Enterprise is boarded by the Borg and Data is captured. Since Data is already an entirely artificial being, in order to transform him into a cybernetic organism (cyborg or “Borg”), they have to graft human skin onto him, covering part of his head and arm. That, combined with Data’s emotion chip, was the most radical step in his transformation from machine to person. The Borg were ultimately defeated when Data ruptured a tank of plasma coolant, which destroys organic tissue, meaning he had to sacrifice his own newly acquired skin. But he said afterwards that he had never felt so close to being human as when he had skin.

I think at this point Star Trek has provided its most important insight into human consciousness and placed the difference between gnostic and organic models of consciousness in stark relief. There’s more to consciousness than the human brain. It’s not just a series of electrical patterns in a meat CPU. Consciousness is a function of the entire body, as the brain is extended throughout the body via the nervous system, so that consciousness is not isolated in the head. Through the body and its sense organs, consciousness exists in a continual state of intercourse with its external environment: we are never not hearing and feeling, at the least, and probably never not smelling either. For this reason, I believe our organic nature is essential to our consciousness. Because we’re continually aware of our immersion in an external world, so much so that our external worlds are inextricably and inescapably a part of our internal environments, we can eventually become aware of our difference from the external world. At that point, we become persons.

But this development only occurs because of our organic bodies. Light and temperature sensors aren’t skin and eyes. Sensors provide information, but they don’t connect the device to its environment in an inescapably self-defining way. Because of that, no machine will ever gain sentience.

I know you’re thinking, “But at some point in the future, why couldn’t….?”

No. A bigger and faster calculator will never be more than a calculator, no matter how fast it works.

And now you might think I’m being close minded, like the evil or at least unlikeable characters in Star Trek and other films. I suspect you want to be open to the possibility of the gnostic model being true because of your ignorance, not your knowledge. The human brain is a big black box, and so are computers, so you think anything in one big black box could just as well be in another. 

Now just go back and read through this essay again and ask yourself if all big black boxes are really alike.

Why isn’t the organic good enough? I believe an ancient text sheds some light on this question:

Their idols are silver and gold, the work of men’s hands.
They have mouths, but they speak not: eyes have they, but they see not:
They have ears, but they hear not: noses have they, but they smell not:
They have hands, but they handle not: feet have they, but they walk not: neither speak they through their throat
They that make them are like unto them; so is every one that trusteth in them.

Psalm 115: 4-8

This post is part three of a three part series. You can read Parts I and II here as well.

AI and Talking Heads, Part II: Why Machines Can’t Read

I’d like to talk about a distinction between two kinds of artificial intelligence (AI): “Strong” AI and “Weak” AI. Strong AI is AI that has attained artificial consciousness — the machine has become sentient and thinks for itself in a way comparable to a human being. Strong AI has been the subject of numerous films and works of fiction since Czech writer Karel Čapek’s 1920 play R.U.R. That play coined the word “robot,” derived from the Czech word “robota,” which means “forced labor” or “drudgery.” Strong AI will be the subject of my next post. I’m going to write about Weak AI here.

Weak AI is artificial intelligence that produces an output resembling something a human being would produce, but one that of course does not have any kind of human consciousness. So you can multiply 6 x 7 in your head and produce an answer (“56,” right?), and so can a computer (but it’d probably say “42”). There’s no question that we’re dealing with an inert object in the case of Weak AI, no matter how much processing power is behind it or how complex the tasks it can perform. It’s just a machine running a program. And the most important thing about AI text generators such as ChatGPT is that the machine is literally incapable of reading.

I’d like to preface my remaining comments by saying that I don’t understand how Large Language Models such as ChatGPT work. I can code in HTML and XML, and I understand some of the basics of computer hardware and software, but I couldn’t write or debug the code for any Large Language Model, so I prefer to say that I don’t understand how the technology works. I think more people writing about ChatGPT need to preface their posts with that caveat. What I will do is summarize what other people with that kind of competence have said about how Large Language Models work, some of which I’m taking from the Digital Humanist listserv, a Google Group dedicated to Digital Humanities, which is the use of computer technology in humanities scholarship. Most of what I’m going to say, however, is widely known. I should also add that what I say below is a very simplified account of technology that has developed in complicated ways, but I believe it still presents an accurate general picture.

Almost all computers work by processing bits, basic units of information represented by a 1 or a 0. 1 and 0 signify on and off positions within the computer’s hardware that function as true/false or yes/no states. Moving up the chain, a byte is generally made up of 8 bits, and it was originally defined as the number of bits needed to represent a single character of text (say, a letter or number) in a computer. So a single letter is made up of a series of eight 1s and 0s, and words are made up of strings of these 1s and 0s put together. You can see the strings in the “bin” column of an ASCII table: capital A, for example, is 01000001. The word “dog” would be 011001000110111101100111.
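The byte-level encoding described above can be sketched in a few lines of Python; the function name here is mine, for illustration only:

```python
def to_bits(text):
    """Return each ASCII character of `text` as an 8-bit binary string, concatenated."""
    # str.encode("ascii") yields one byte (an integer 0-127) per character;
    # format(n, "08b") renders that integer as eight binary digits.
    return "".join(format(byte, "08b") for byte in text.encode("ascii"))

print(to_bits("A"))    # 01000001
print(to_bits("dog"))  # 011001000110111101100111
```

Running it reproduces the two strings quoted above: the capital A and the word “dog” really are just those runs of 1s and 0s to the machine.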

Now you can imagine the kind of processing power needed if a computer were to analyze millions of words character by character, especially including every punctuation mark and space. To simplify the process and reduce the processing power needed, computer programs that analyze text are trained to assign numerical values to “meaning units” (such as -ing and -ly endings in English) called n-grams. So “the cat sat on the mat” might have n-grams associated with “the cat,” “cat sat,” “sat on,” “on the,” “the mat,” etc. Large Language Models are trained on a very large collection of text (say, all of Wikipedia), associating each n-gram with a numerical value and then calculating the statistical probability of the next word (or n-gram) based on the ones before it. What’s really advanced about ChatGPT and other Large Language Models is that they calculate the statistical probability not just of the next word, but of the next sentence or sentences, based on their training. It’s a massively powerful program.
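As a toy illustration of this kind of next-word statistic, counting word pairs in a tiny invented corpus is enough to “predict” a next word. (Real Large Language Models compute these probabilities with learned neural weights over enormous training sets, not raw counts, but the principle is the same: the most probable next unit given the previous ones.)

```python
from collections import Counter, defaultdict

# A tiny invented corpus, echoing the "the cat sat on the mat" example above.
corpus = "the cat sat on the mat the cat sat on the rug".split()

# Count every adjacent word pair (bigram) in the corpus.
pair_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    pair_counts[prev][nxt] += 1

def most_probable_next(word):
    """Return the most frequently observed follower of `word` in the corpus."""
    return pair_counts[word].most_common(1)[0][0]

print(most_probable_next("the"))  # cat ("the cat" appears twice; "the mat"/"the rug" once each)
print(most_probable_next("sat"))  # on
```

Everything here is arithmetic on counts; at no point does the program know what a cat or a mat is.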

Now take a step back and look at the entire process: words or bits of words or small groups of words are turned into numbers, and numbers are turned into sequences of 1s and 0s, and the rest is calculation.

At what point does any of this resemble reading in the normal human sense, where words are associated with physical things in the world, or with sensations, emotions, or concepts? Or memory, or a combination of all of these? Actually, nowhere. ChatGPT is a big calculator trained to convert words to numbers, run a statistical calculation to find the most probable numbers to come next, and then spit out numbers that are translated back into text output. At no point does it have any concept of meaning in any human sense of the word. As a powerful iteration of Weak AI, though, it does a great job resembling a human speaker.
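That whole round trip, text to numbers and back, can be sketched with a simple lookup table standing in for the model. The vocabulary and IDs below are invented for illustration; a real model’s statistical machinery is replaced here by a hand-picked list of IDs:

```python
# Hypothetical five-word vocabulary mapping words to numeric IDs.
vocab = {"the": 0, "cat": 1, "sat": 2, "on": 3, "mat": 4}
id_to_word = {i: w for w, i in vocab.items()}

def encode(text):
    """Text in, numbers out: the only form the machine ever works with."""
    return [vocab[w] for w in text.split()]

def decode(ids):
    """Numbers in, text out: translation back for the human reader."""
    return " ".join(id_to_word[i] for i in ids)

ids = encode("the cat sat")
print(ids)  # [0, 1, 2]
# A real model would now run its statistical calculation on these numbers;
# we simply pretend it produced the following continuation IDs.
continuation = [3, 0, 4]
print(decode(ids + continuation))  # the cat sat on the mat
```

Between `encode` and `decode`, there are only integers; the words exist solely for the person at the screen.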

And that is why computers can’t read. There is literally no understanding of the text in a human sense, because words as words, as language, don’t exist for computers.

This post is part two of a three part series. You can read Parts I and III here as well.
