"If the Bible is true, then none of our fears are legitimate, none of our frustrations are permanent, and none of our opposition is significant."

Dan Olinger

Retired Bible Professor,

Bob Jones University

A Little Interaction with ChatGPT: Can Teachers Spot Fake Student Work? Part 2

February 9, 2023 by Dan Olinger

Part 1

In the previous post I presented two questions that I asked ChatGPT, a writing tool based on artificial intelligence, and I included the tool’s response to each question. My interest in trying this was to see if I could spot any evidence that the responses were not written by a student; as a teacher, I want to have some defense against the possibility that a student might use the tool to get out of writing a paper himself.

The first question was “write an essay evaluating Paul’s use of intertextuality in Romans 3.” The second was “evaluate the previous essay for evidence of origination by ChatGPT.”

Here’s my thinking as I read the responses.

The first thing I noticed was how well written it was. The spelling, grammar, and syntax were all nicely polished. The sentences were all grammatically complete. There was no indication that this writer had ever written a text or posted on Twitter (lol). The paragraphs were all coherent. In particular, there were no words misspelled in ways that produced other real words (e.g. their / there)—that’s evidence of the overwhelmingly common student practice of running the spell checker but not actually proofreading the paper.

Now, I have students who write that well, but they’re in the minority. If my students were to submit something like this, particularly after I’d graded a previous writing assignment, most of them would get caught.

Well, that was easy.

But there are other things to notice as well.

In the first place—and other analysts have noticed this too—the writer doesn’t actually know anything about the topic. The teacher brings expertise to the question and is thus in a position to notice that the tool is just spouting (very nicely) things that it’s imitating from lots of sources; it doesn’t really know what it’s talking about.

As one example, the essay notes correctly that a section of Romans 3 cites passages in the Psalms. But it doesn’t mention that near the end of that section, between two citations from the Psalms, is a string of three citations from Isaiah 59. A human would see that and think, “That’s odd. I wonder why he pops out to Isaiah like that. It’s not like he needs more evidence; this is at the end of a long string of perfectly sufficient evidence from the Psalms.” And, as the standardized process of evaluating intertextuality would prescribe, he would examine the contexts of all those citations to see what’s with the intrusion of Isaiah. He’d find that all the Psalms passages are addressed to “the wicked” or some synonym, while the Isaiah passage is full of pronouns (they, etc.) that don’t identify specifically who’s (not “whose”) being addressed. The human would then need to trace those pronouns all the way back to the very beginning of chapter 58, where he’d find that the prophet is describing the depravity of “the house of Jacob.”

Aha! Back in Romans 2, Paul is arguing that both Jews and Gentiles are in need of justification, and he begins chapter 3 comparing the two groups. As he lists passages from the Psalms demonstrating the corruption of “the wicked,” he realizes that he needs to document the pious followers of Moses as well—and he goes to Isaiah, to a passage describing not the idolatrous Northern Kingdom of Israel, but the Southern Kingdom of Judah, the Davidic line.

All that is human thinking. Machines can’t do that. And the teacher who reads his students’ work carefully and thoughtfully, and who knows the ins and outs of the topic that he’s assigned, is in a position to spot that kind of major omission.

I also thought the evaluation (the answer to my second question) was off. Obviously, it missed the whole point I’ve laid out above, as I would expect. But it also criticized the essay for not including personal stories, which would be inappropriate in this academic exercise. And its two uses of the conjunction “however” are illogical; the expected word in each case would be “further,” since the statements that follow extend the current point rather than contrasting with it.

In short, everything the evaluation said was true, but the essay would raise a teacher’s eyebrows at multiple points. This little sample isn’t sufficient basis for a firm conclusion, but as a teacher I’m encouraged by the experiment.

One more thing: this experiment took place in the context of a conversation with several friends on Facebook, which had some entertaining moments. The complete thread is here, dated 2/4/2023. And a well-deserved word of thanks to my longtime friend Joel Lindstrom, who made it possible—and to Scott Buchanan, who added some enlightening content.

Photo by Andy Kelly on Unsplash

Filed Under: Culture, Ethics Tagged With: artificial intelligence, cheating, teaching

A Little Interaction with ChatGPT: Can Teachers Spot Fake Student Work? Part 1

February 5, 2023 by Dan Olinger

OK, the subtitle overpromises. I’ve had “a little interaction,” and that doesn’t answer the question with any real certainty. But for what it’s worth, I’ll share what happened.

ChatGPT is an online tool that answers questions, even complex ones, using artificial intelligence (AI). Lots of teachers are concerned that this will hamper their ability to assign writing and maintain academic integrity. Early evaluations are mixed.

Here’s what I did.

I asked the tool to “write an essay evaluating Paul’s use of intertextuality in Romans 3.” I tried to word it to heighten the chance that the essential stupidity of computers would work against the bot. (Yes, they’re wicked fast, which can give the illusion of smarts, but at root they have no intelligence whatsoever, except the artificial kind.) I asked for an essay, and for evaluation; I didn’t call Paul a “saint”; I didn’t mention the Bible, and I didn’t say what “3” meant.

Here’s what it produced:

Intertextuality, the concept of drawing on other texts and literary sources in one’s writing, is a prevalent feature in the New Testament, particularly in the letters of Paul. The apostle’s use of intertextuality can be seen in many of his letters, including Romans 3.

In this chapter, Paul draws on a number of Old Testament texts to make his point about the sinfulness of humanity and the need for righteousness through faith in Jesus Christ. One of the most notable examples of intertextuality in this chapter can be seen in verses 10-18, where Paul quotes from the book of Psalms to emphasize the universality of sin and the need for salvation through faith. By using these Old Testament references, Paul is able to connect his message to the Jewish tradition and to make a clear connection between the Old Testament and the gospel message.

Another way in which Paul uses intertextuality in Romans 3 is through his allusions to other parts of the Old Testament. For example, in verses 21-22, Paul refers to the Old Testament concept of righteousness being given through faith, an idea that is prominent in the book of Habakkuk. By alluding to these other texts, Paul is able to reinforce his message and make a deeper connection with his audience.

In addition to drawing on Old Testament texts, Paul also uses intertextuality to connect his message to the larger narrative of the gospel. For example, in verse 24, Paul refers to the idea of redemption through the blood of Christ, a concept that is central to the gospel message. By referencing this central idea, Paul is able to connect his message in Romans 3 to the larger narrative of salvation through faith in Jesus.

In conclusion, Paul’s use of intertextuality in Romans 3 is both effective and sophisticated. By drawing on Old Testament texts and alluding to other parts of the gospel narrative, Paul is able to reinforce his message and make a deeper connection with his audience. Through his use of intertextuality, Paul shows that his message is rooted in the Jewish tradition and the larger gospel narrative, and that it is intended to bring salvation to all people.

I then made another request: “Evaluate the previous essay for evidence of origination by ChatGPT.” Here’s what it gave me:

The essay demonstrates characteristics consistent with an AI model’s writing. For example, the language is formal and academic, with a clear structure and well-defined arguments. The use of terms such as “intertextuality” and “allusions” shows a familiarity with literary terminology, which is also consistent with an AI model’s training.

However, the essay also lacks a personal touch or a unique perspective that would be expected from a human writer. The language and arguments are polished and well-constructed, but they lack the depth and nuance that can only come from a personal understanding and connection with the topic.

Additionally, the essay relies heavily on facts and terminology, but lacks personal anecdotes or examples to illustrate the points being made. This lack of personalization and emphasis on facts is also consistent with an AI model’s writing style.

In conclusion, the essay shows strong evidence of being generated by an AI model such as ChatGPT. The formal language, well-defined structure, and reliance on facts are all consistent with the writing of an AI model. However, the essay lacks the personal touch and unique perspective that would be expected from a human writer.

These products are impressive, but I think I would spot the fakery if a student submitted them to me. I’ll talk about that in the next post.

Part 2
