AI in the Classroom

I’ve long been an early technology adopter. I started building webpages and websites in the 90s, used plagiarism-detection software not just for checking papers but for providing rich feedback on student writing in the very early 2000s, and around that time had students write live blogs on a course website as part of their writing requirements. I didn’t catch up to the iPhone until the iPhone 3, but I owned a first-generation iPad that I used for years, as well as a first-generation Apple Watch that I used until last year. Additionally, I’ve been teaching and building online courses and curricula since January 2008 as part of my regular course load. I’m not particularly excited about technology, just curious. I like to see what it can do and what results it can produce, and I’ve been doing so for almost my entire 23-year college teaching career.

I’ve witnessed a number of very naive attitudes toward technology in the classroom during this time as well: institutional requirements to use it with no clear sense of purpose or pedagogical goals attached; college presidents threatening to fire faculty who didn’t use technology in the classroom; instructors so desperate for recognition that they try to position themselves as innovators because they use Prezis; administrators so similarly desperate that they buy technology because some salesman said the words “student centered” and “innovative” in the same sentence; and on and on. Overall, institutional attitudes toward the use of technology in learning have more often than not been uncritical, unfocused, and directionless: people think they must use it, but they have no clear sense of why, either because they don’t understand the technology, or how people interact with it, or even how teaching works. Most often, they don’t understand all three.

I would like to say I’m no Luddite. At least not yet. I do not at present fear or hate technology. And I know of people who have benefitted from different kinds of educational technology immensely. One woman with dyslexia comes to mind, who once shared with me that specific computer programs helped her manage her dyslexia so she could make it through her Ph.D. program. Additionally, some forms of technology are ubiquitous in the workplace, so students need to develop fluency with them while in college: Microsoft Office, for example, or the Adobe Creative Suite, or AutoCAD. Some programs of study are focused on developing proficiency with the technology itself, such as Radiology programs, and some uses of technology are eminently practical, like keeping student grade books in a learning management system of some kind so that students can see their grades at any given time just by logging in. So I’m not in any way advocating for the elimination of technology from education as some kind of ideal. That’s not only unrealistic, but it’s undesirable.

I’d like to consider other uses of technology, though: uses that aren’t absolutely necessary for the field or the workplace but are supposed to provide some kind of pedagogical benefit. That’s an entirely different use of technology in the classroom. Many of the previously mentioned programs or tech were inherent to the field: proficiency with the technology itself, in those cases, is among the instructional goals, which is why we take the time to teach the technology itself. But this other kind of use involves teaching something else with the technology rather than teaching the technology itself. It’s using technology to teach another subject that’s not at all dependent upon the technology.

In those cases, we need to seriously consider our use of the technology, because the technology is always a barrier between the student and course goals: the student must move through the technology to achieve course goals even though the technology isn’t inherently necessary to the course goals. You can of course respond by saying, “Then make them inherent to the course goals!”, but that’s missing the point. If we don’t have to, why should we? Can you answer that question in any detail beyond trivial generalities about technology in the classroom?

Let’s get specific: I have students use Microsoft Word all semester long in my writing and literature classes. Word is so ubiquitous in the workplace that I don’t mind doing so, and I take the time to provide some instruction in Word to teach students how to format documents different ways. But using Word has nothing to do with the course goals of a writing or literature course. I could spend the rest of my teaching career using printed books, pencil, and paper in my classroom and not feel like I’m sacrificing my pedagogical goals for the course: the course is really about developments in cognition brought about by intensive reading of difficult, creative texts and students grappling with expressing their own ideas about them. And in between all of that, in every literature class, students are ultimately interpreting a person of some kind: a fictional person, usually, but still a person, and what field does not require us to interpret people almost all day long?

I teach my students that writing is a skill, and as such, you only develop it with practice. I tell my students that I can’t teach them how to write by talking to them while they passively listen. I do indeed lecture about writing, but the lecture by itself isn’t enough. My sixth grade baseball coach taught his team how to swing a bat with a video, but he knew the video by itself didn’t teach us how to swing a bat. The lecture and the video were the beginning of the instructional process, so I didn’t really learn how to hit a baseball until I practiced it, especially with my coach giving me corrections at first. I had to do it to learn it, just like people who learn to play a musical instrument spend hours practicing scales. Writing is a skill like that: you learn it by doing it.

But, in the end, I use Word in the classroom because it’s a useful tool and many students in many fields will need to use Word somehow in their future careers, even if only to write a résumé and cover letter. But what about other uses of technology? Do you really need it? Will students spend more time trying to master the technology than master the course material? If they do, what’s the payoff for what you’re sacrificing? How often do we even stop to ask these questions, much less answer them?

And now we come to AI. If you’ve been following discussion of ChatGPT since its release last year, there’s been quite a bit of hysteria and mystification about this program across social media: it’s either going to be the end of teaching as we know it or will revolutionize teaching forever; it will be the end of humanity or transform humanity forever; it’s an alien intelligence; it’s the singularity. It’s none of these things and will do none of these things: ChatGPT is in fact a big, fast calculator for which words in any human sense do not and cannot exist.

It is, however, a very impressive calculator and can indeed do quite a bit very quickly, so it’s a potentially useful tool and, like all other potentially useful tools, a potentially dangerous one. Not that it will attain consciousness and turn on us, but that we might rely on it in ways we shouldn’t, with unexpected, undesirable results. The immediate “danger” to writing instruction is that students will use it to plagiarize. While ChatGPT can only sustain coherence for about 500–800 words at a time, a series of consecutive prompts can be used to generate most student-length papers, which are, of course, easily detectable by a number of services or even just an attentive instructor: ChatGPT writes in an easily identifiable voice.

A more careful student, “Owen Kichizo Terry,” which I assume is a pseudonym for a real undergraduate student at Columbia University (but maybe not), in “I’m a Student. You Have No Idea How Much We’re Using ChatGPT” describes a less detectable (or undetectable) use of ChatGPT: the student provides prompts for outlines and then writes an essay following the outline. It’s unclear to me how much work the student is really saving short of having to come up with an idea of his own. He still has to write the entire paper. Near the end of his essay, he claims he sees a number of students doing the same thing, saying

At any given time, I can look around my classroom and find multiple people doing homework with the help of ChatGPT. We’re not being forced to think anymore.

People worry that ChatGPT might “eventually” start rendering major institutions obsolete. It seems to me that it already has.

There are of course a number of errors in the student’s thinking, but I’d like to say first that he’s a student. If he’s a first year undergraduate, he’s just about old enough to be my grandson. So I’m inclined to give this student a pass; not on cheating, but on having a bunch of wrong, frankly idiotic ideas. We all do when we’re 18. That’s fine.

His first wrong idea is that he doesn’t have to do his own thinking anymore. Filling out the outline with his own version of that content requires him to think. He’s exempt from developing a thesis and supporting ideas in the form of bullet points, but he’s not exempt from thinking, as developing a fully written paper even from a preexisting thesis will inevitably require his own thinking.

It’s also prima facie ridiculous to think that because he can cheat his way through first-year writing, Columbia University is now obsolete. He’s being dramatic, of course: he has no idea how rigorous student learning becomes further up the food chain, or how important the research being carried out there is, research he will never see.

Next, he’s mistaken (though probably not alone in this) in thinking that his is a novel form of plagiarism made possible by ChatGPT. You don’t need a computer to commit this kind of plagiarism, just a library. Find an obscure book that hasn’t been checked out in twenty years, outline part of its argument, and then write a paper based on that outline. Preventing this kind of plagiarism is one of the reasons we have qualified, well-read faculty: we once believed there shouldn’t be a paper or book in the field that a faculty member hadn’t read, so sharp faculty members would recognize these ideas from their previous reading and catch the student. But you know what the student still has to do with this kind of plagiarism? Read a book, understand it, and then write a paper. Of course, an AI-generated thesis may not be identifiable from prior reading (or it may), but it’s still essentially the same form of plagiarism, and I also have to wonder how often the AI will repeat itself, and why it shouldn’t.

The student is also mistaken in thinking he’s representative of many students in the country. I’ve spoken to faculty in other fields who are beginning to incorporate ChatGPT into their instruction, and they report that students seem afraid of the technology. Ivy League students and instructors tend to presume that they represent students across the country: what blessed ignorance. Teach at a community college for a year and get back to me. I don’t think that, at present, using ChatGPT to cheat in quite this way is widespread. It’s probably more common among those who feel privileged, entitled, and under a great deal of pressure to perform at a high level, all of which characterize students at elite institutions more than students elsewhere. All students feel pressure to perform at some time: students I’ve caught plagiarizing often did so for this reason, but how common cheating is varies greatly by institution.

The student does make some good suggestions for defeating this kind of plagiarism:

If education systems are to continue teaching students how to think, they need to move away from the take-home essay as a means of doing this, and move on to AI-proof assignments like oral exams, in-class writing, or some new style of schoolwork better suited to the world of artificial intelligence.

But he’s mistaken in thinking that we haven’t all already thought about it, or that we aren’t already doing it. Oral exams and in-class writing are already widely used and have been for years. Decades. Literally, centuries. At the doctoral level, these assessments (in the form of qualifying exams and then the dissertation) are often used to gauge the student’s knowledge, to ensure that the student possesses this knowledge him or herself. Can we use them more often? Some instructors certainly could. I certainly could.

The student, being a kid, seems oblivious to the fact that no one will feel inclined to respect his opinions once he’s admitted that he plagiarizes his papers regularly, but he does seem concerned that we do something about it, which is commendable. But, why doesn’t he? What kind of entitlement compels him to cheat just because he can get away with it? Does it gratify him to feel smarter than his teachers by defeating the prompts and breaking the rules? This is all very childish thinking, but then again, we’re dealing with a child — but one, I should say, who already writes very well. He is sadly the ignorant beneficiary of an educational system that has left him, right out of high school, with skills more highly developed than most students in the country. But still, his highly qualified and accomplished college teachers do not need his advice. There’s probably very little that he’s said that they haven’t already considered.

The real tragedy of plagiarism remains unsaid: if writing, reading, and thinking are skills that are only developed through practice, plagiarism is an act by which students rob themselves of the benefit of their education: the knowledge and skills gained, the cognitive development. Students are spending thousands of dollars — tens of thousands of dollars — to deprive themselves of their own education, and in the end they will pay for that loss themselves.

He’s not cheating his teachers, his school, or his parents, just himself, and that is the one thing that he, and all other students, need to know about plagiarism, and something that everyone needs to consider before incorporating any kind of technology in the classroom. Absolutely teach the tech itself if your field demands it. But don’t teach using the tech until you’ve asked some difficult questions first.


Published by James Rovira

Dr. James Rovira is a higher education professional with twenty years of experience in the field in teaching, administration, and advising roles. He is also an interdisciplinary scholar and writer whose works include fiction, poetry, and scholarship exploring the intersections of literature and philosophy, literature and psychology, literary theory, and music and literature. His books include Women in Rock, Women in Romanticism (Routledge, 2023); David Bowie and Romanticism (Palgrave Macmillan, 2022); Writing for College and Beyond, a first-year composition textbook (Lulu, 2019); Reading as Democracy in Crisis: Interpretation, Theory, History (Lexington Books, 2019); Rock and Romanticism: Blake, Wordsworth, and Rock from Dylan to U2 (Lexington Books, 2018); Rock and Romanticism: Post-Punk, Goth, and Metal as Dark Romanticisms (Palgrave Macmillan, 2018); and Blake and Kierkegaard: Creation and Anxiety (Continuum/Bloomsbury, 2010). See his website for details.
