In the previous post I presented two questions that I asked ChatGPT, a writing tool based on artificial intelligence, and I included the tool’s response to each question. My interest in trying this was to see if I could spot any evidence that the responses were not written by a student; as a teacher, I want to have some defense against the possibility that a student might use the tool to get out of writing a paper himself.
The first question was “write an essay evaluating Paul’s use of intertextuality in Romans 3.” The second was “evaluate the previous essay for evidence of origination by ChatGPT.”
Here’s my thinking as I read the responses.
The first thing I noticed was how well written the responses were. The spelling, grammar, and syntax were all nicely polished. The sentences were all grammatically complete. There was no indication that this writer had ever written a text or posted on Twitter (lol). The paragraphs were all coherent. In particular, there were no words misspelled into other real words (e.g. their / there); that kind of slip is evidence of the overwhelmingly common student practice of running the spell checker but not actually proofreading the paper.
Now, I have students who write that well, but they’re in the minority. If my students were to submit something like this, particularly after I’d graded a previous writing assignment, most of them would get caught.
Well, that was easy.
But there are other things to notice as well.
In the first place (and other analysts have noticed this too), the writer doesn’t actually know anything about the topic. The teacher brings expertise to the question and is thus in a position to notice that the tool is just spouting (very nicely) things it’s imitating from lots of sources; it doesn’t really know what it’s talking about.
As one example, the essay notes correctly that a section of Romans 3 cites passages in the Psalms. But it doesn’t mention that near the end of that section, between two citations from the Psalms, is a string of three citations from Isaiah 59. A human would see that and think, “That’s odd. I wonder why he pops out to Isaiah like that. It’s not like he needs more evidence; this is at the end of a long string of perfectly sufficient evidence from the Psalms.” And, as the standardized process of evaluating intertextuality would prescribe, he would examine the contexts of all those citations to see what’s with the intrusion of Isaiah. And he’d find that all the Psalms passages are addressed to “the wicked” or some synonym, while the Isaiah passage is full of pronouns (they, etc.) that don’t identify specifically who’s (not “whose”) being addressed; and the human would need to trace those pronouns all the way back to the very beginning of chapter 58, where we find that the prophet is describing the depravity of “the house of Jacob.”
Aha! Back in Romans 2, Paul is arguing that both Jews and Gentiles are in need of justification, and he begins chapter 3 by comparing the two groups. As he lists passages from the Psalms demonstrating the corruption of “the wicked,” he realizes that he needs to document the corruption of the pious followers of Moses as well, so he goes to Isaiah, to a passage describing not the idolatrous Northern Kingdom of Israel, but the Southern Kingdom of Judah, the Davidic line.
All that is human thinking. Machines can’t do that. And the teacher who reads his students’ work carefully and thoughtfully, and who knows the ins and outs of the topic that he’s assigned, is in a position to spot that kind of major omission.
I also thought the evaluation (the answer to my second question) was off. Obviously, it missed the whole point I’ve laid out above, as I would expect. But it also criticized the essay for not including personal stories, which would be inappropriate in an academic exercise like this. And its two uses of the connective “however” are illogical; the expected word in each case is “further,” since the statements that follow extend the current point rather than contrast with it.
In short, everything the essay said was true, but it would raise a teacher’s eyebrows at multiple points. This little sample isn’t a sufficient basis for a firm conclusion, but as a teacher I’m encouraged by the experiment.
One more thing: this experiment took place in the context of a conversation with several friends on Facebook, which had some entertaining moments. The complete thread is here, dated 2/4/2023. And a well-deserved word of thanks to my longtime friend Joel Lindstrom, who made it possible—and to Scott Buchanan, who added some enlightening content.