Handwringing, panic, and palm sweating have surrounded public discussion of ChatGPT since OpenAI announced its release in November 2022. Since then, “thought leaders” have been excitedly making declarations about this new technology, claiming that it’s inaugurating the end of humanity, that it’ll end education or completely transform it, that it is a kind of consciousness or has attained independent thought, and most recently, that it’s more like an alien intelligence than a human intelligence (never mind that we’ve never met an alien intelligence, so how can we compare?).
I’ve been studying the possibility of human creations attaining independent will and consciousness — becoming intelligent in a human sense and what that would mean — since about 1999. I first published on the topic around 2001 and published a book-length study of the topic in 2008. I’ll go into the history of that study later, but I’d like to preface my comments here with a very important disclaimer:
I don’t know how this technology works.
Before you rush in with an “Oh, but I do,” let’s think about what understanding this technology really means:
- Can you write any of the code that makes it work?
- Can you read the code (different chunks of it, anyway) and understand what it’s set up to do without being told in advance?
- If it broke, could you fix it?
- Do you even know much of anything about how computers work between the keyboard and the screen? Even if you can answer “yes” to this question, do you really know anything about programming large language models?
I can read descriptions of how this technology works as well as anyone. I understand them. I can repeat them back to you in my own words. That’s different from understanding how this technology works. I have some proficiency writing HTML (but so what? One of my sons developed that proficiency in middle school) and XML. So before you read commentary on ChatGPT, ask yourself whether the person writing it actually knows anything about this technology, or whether they’re only trying to sound like they know what they’re talking about. Remember that we’re in a media environment that rewards attention-getting headlines, and thought leaders only become thought leaders by generating those headlines.
Imagine how different our media environment would look if everyone had to preface what they wrote with an honest declaration of their knowledge of the subject. How many articles and editorials about ChatGPT would start with the phrase, “I don’t know how this technology works”? I’m asking because I think most of them should.
I’m going to be writing more about this topic later. Over my own twenty years of studying it, I’ve learned that my real focus is not the possibility of machines attaining consciousness in something like a human sense, but what people say about technology and how they react to it, both individually and socially. I will be writing about what I know, in other words.
But I don’t know how this technology works.
This post is part one of a three-part series. You can read Parts II and III here as well.