"If the Bible is true, then none of our fears are legitimate, none of our frustrations are permanent, and none of our opposition is significant."

Dan Olinger

Chair, Division of Biblical Studies & Theology,

Bob Jones University

More Thoughts on AI 

August 26, 2024 by Dan Olinger

No, not some guy named Alan; that’s a capital i, not a lowercase L. 

A while back I wrote a couple of posts about experimenting with ChatGPT to see whether I had a reasonable shot at spotting student work that was using the tool. 

With school starting up this week, I’ve been thinking about what sort of policy to have about student use of AI. My university gives us teachers a lot of freedom in setting our course policies; the official institution-wide policy is that student use of AI for assignments is prohibited “without the express permission of the professor”—which means we can give permission for anything we think is appropriate and academically justifiable. 

So I did some more playing around with ChatGPT, and also with Claude.ai. 

I began with ChatGPT, specifying, “Write a 700-word essay in the style of www.danolinger.com on the topic of sanctification.” I wanted to see whether it could write a blog post that sounded like me. (I know what you’re thinking; hold off on any judgment for a bit.) What it wrote—immediately—was pretty good. Although the title didn’t reflect my style here on the blog—you may have noticed that I like titles that start with “On”—the content and basic style were generally close to mine. I did notice differences in mechanics: it spelled out the names of the biblical books and used colons rather than periods to separate chapter numbers from verse numbers in references. It did use the Oxford comma, as I do, though elsewhere it used more commas than I would have. 

Sidebar: do you know why I use periods instead of colons in Bible references? Because a colon requires the Shift key, and the period doesn’t. Efficiency. 

I thought I’d see how it handled a secular topic, so I asked, “Make the topic the migration of the monarch butterfly.” I’m no expert on the seasonal peregrinations of Lepidoptera, particularly Danaus plexippus, but what it churned out seemed very good to me. 

Now, I had asked it to use the style of my blog’s website, where there are, as of this writing, 691 posts, and the speed with which it had responded to both of these prompts made me wonder whether it had even glanced casually in the direction of the blog. So my next prompt was “Delineate the parallels in style between your essay and those on danolinger.com.” 

I was astounded by its reply—seven nicely formatted points identifying elements of my blog’s style, ranging from “conversational tone” to “use of relatable examples” to “integration of wonder and mystery.” 

Yikes. ChatGPT is the most attentive reader I’ve got. 

One more test—one that I thought might cause it to stumble. “Add biblical references to the essay.” 

Now, this is an essay on a non-religious, indeed extrabiblical, topic, and I’m asking this non-brain to make biblical applications to something the Bible doesn’t even mention. So what does it do? 

It brings in Psalm 19.1, a key passage for the doctrine of natural revelation. Then it parallels God’s guidance of the monarch with his guidance of his people, citing Proverbs 3.5-6, and then the idea of finishing the race, from Hebrews 12.1-2. And it finishes with Ecclesiastes 3.11 and Psalm 104.24. 

Bravo, ChatGPT, bravo. I’m impressed. 

Later, at the recommendation of a friend, I popped over to Claude.ai and asked for the same kind of essay, this time on the early date of the Exodus. Excellent essay. 

Then “Write a 500-word short story set during the Exodus.” 

Pretty good one. The kid’s even named Levi. 

“Write a haiku about it.” 

Waves part, path revealed  
Israelites follow God’s call  
Freedom’s light shines forth 

“Write a sonnet about it.” 

“Write a poem about it in Greek.” 

It even included the line “Οἱ δὲ ἐν φόβῳ ἔφευγον πρὸς τὴν θάλασσαν Ἐρυθράν.” Pronominal use of the article. Nice touch. 

Then I asked for a poem in Akkadian. 

And it did it. 

Transliterated—not in cuneiform—but recognizably Akkadian. (I’m not qualified to evaluate the literary quality or the accuracy of said poem.) 

Yikes. 

These tools are getting better all the time. 

But no, I’ll never use them to write a blog entry. 

So what’s my AI student policy? 

For Greek and for Bible Doctrines, prohibited. I want the students to do the thinking—all of it. 

For my Gospel of John class, I’ve decided to try an experiment. 

“Any use of AI must be specifically credited with quotation marks and a footnote. AI output should not predominate in what you turn in.” 

I have no idea whether this is a good approach. We’ll see how it goes, and I’ll ask the students for feedback at the end of the semester. 

No chatbots were harmed in the creation of this blog post. 

Photo by Andy Kelly on Unsplash

Filed Under: Culture Tagged With: artificial intelligence, teaching

A Little Interaction with ChatGPT: Can Teachers Spot Fake Student Work? Part 2

February 9, 2023 by Dan Olinger

Part 1

In the previous post I presented two questions that I asked ChatGPT, a writing tool based on artificial intelligence, and I included the tool’s response to each question. My interest in trying this was to see if I could spot any evidence that the responses were not written by a student; as a teacher, I want to have some defense against the possibility that a student might use the tool to get out of writing a paper himself.

The first question was “write an essay evaluating Paul’s use of intertextuality in Romans 3.” The second was “evaluate the previous essay for evidence of origination by ChatGPT.”

Here’s my thinking as I read the responses.

The first thing I noticed was how well written it was. The spelling, grammar, and syntax were all nicely polished. The sentences were all grammatically complete. There was no indication that this writer had ever written a text or posted on Twitter (lol). The paragraphs were all coherent. In particular, there were no words misspelled in a way that produced other real words (e.g., their / there)—the telltale evidence of the overwhelmingly common student practice of running the spell checker but not actually proofreading the paper.

Now, I have students who write that well, but they’re in the minority. If my students were to submit something like this, particularly after I’d graded a previous writing assignment, most of them would get caught.

Well, that was easy.

But there are other things to notice as well.

In the first place—and other analysts have noticed this too—the writer doesn’t actually know anything about the topic. The teacher brings expertise to the question and is thus in a position to notice that the tool is just spouting (very nicely) things that it’s imitating from lots of sources; it doesn’t really know what it’s talking about.

As one example, the essay notes correctly that a section of Romans 3 cites passages in the Psalms. But it doesn’t mention that near the end of that section, between two citations from the Psalms, is a string of three citations from Isaiah 59. A human would see that and think, “That’s odd. I wonder why he pops out to Isaiah like that. It’s not like he needs more evidence; this is at the end of a long string of perfectly sufficient evidence from the Psalms.” And, as the standardized process of evaluating intertextuality would prescribe, he would examine the contexts of all those citations to see what’s with the intrusion of Isaiah. And he’d find that all the Psalms passages are addressed to “the wicked” or some synonym, while the Isaiah passage is full of pronouns (they, etc.) that don’t identify specifically who’s (not “whose”) being addressed; and the human would need to trace those pronouns all the way back to the very beginning of chapter 58, where we find that the prophet is describing the depravity of “the house of Jacob.”

Aha! Back in Romans 2, Paul is arguing that both Jews and Gentiles are in need of justification, and he begins chapter 3 by comparing the two groups. As he lists passages from the Psalms demonstrating the corruption of “the wicked,” he realizes that he needs to document the corruption of the pious followers of Moses as well—and he goes to Isaiah, to a passage describing not the idolatrous Northern Kingdom of Israel, but the Southern Kingdom of Judah, the Davidic line.

All that is human thinking. Machines can’t do that. And the teacher who reads his students’ work carefully and thoughtfully, and who knows the ins and outs of the topic that he’s assigned, is in a position to spot that kind of major omission.

I also thought the evaluation (the answer to my second question) was off. Obviously, it missed the whole point I’ve laid out above, as I would expect. But it also criticized the essay for not including personal stories, which would be inappropriate in this academic exercise. And its two uses of the conjunction “however” are illogical; the expected word in each case would be “further,” given that the following statements extend the current point rather than contrasting with it.

In short, everything it said was true, but it would raise a teacher’s eyebrows at multiple points. This little sample isn’t sufficient basis for a firm conclusion, but as a teacher I’m encouraged by the experiment.

One more thing: this experiment took place in the context of a conversation with several friends on Facebook, which had some entertaining moments. The complete thread is here, dated 2/4/2023. And a well-deserved word of thanks to my longtime friend Joel Lindstrom, who made it possible—and to Scott Buchanan, who added some enlightening content.

Photo by Andy Kelly on Unsplash

Filed Under: Culture, Ethics Tagged With: artificial intelligence, cheating, teaching

A Little Interaction with ChatGPT: Can Teachers Spot Fake Student Work? Part 1

February 5, 2023 by Dan Olinger

OK, the subtitle overpromises. I’ve had “a little interaction,” and that doesn’t answer the question with any real certainty. But for what it’s worth, I’ll share what happened.

ChatGPT is an online tool that answers questions, even complex ones, using artificial intelligence (AI). Lots of teachers are concerned that this will hamper their ability to assign writing and maintain academic integrity. Early evaluations are mixed.

Here’s what I did.

I asked the tool to “write an essay evaluating Paul’s use of intertextuality in Romans 3.” I tried to word it to heighten the chance that the essential stupidity of computers would work against the bot. (Yes, they’re wicked fast, which can give the illusion of smarts, but at root they have no intelligence whatsoever, except the artificial kind.) I asked for an essay, and for evaluation; I didn’t call Paul a “saint”; I didn’t mention the Bible; and I didn’t say what “3” meant.

Here’s what it produced:

Intertextuality, the concept of drawing on other texts and literary sources in one’s writing, is a prevalent feature in the New Testament, particularly in the letters of Paul. The apostle’s use of intertextuality can be seen in many of his letters, including Romans 3.

In this chapter, Paul draws on a number of Old Testament texts to make his point about the sinfulness of humanity and the need for righteousness through faith in Jesus Christ. One of the most notable examples of intertextuality in this chapter can be seen in verses 10-18, where Paul quotes from the book of Psalms to emphasize the universality of sin and the need for salvation through faith. By using these Old Testament references, Paul is able to connect his message to the Jewish tradition and to make a clear connection between the Old Testament and the gospel message.

Another way in which Paul uses intertextuality in Romans 3 is through his allusions to other parts of the Old Testament. For example, in verses 21-22, Paul refers to the Old Testament concept of righteousness being given through faith, an idea that is prominent in the book of Habakkuk. By alluding to these other texts, Paul is able to reinforce his message and make a deeper connection with his audience.

In addition to drawing on Old Testament texts, Paul also uses intertextuality to connect his message to the larger narrative of the gospel. For example, in verse 24, Paul refers to the idea of redemption through the blood of Christ, a concept that is central to the gospel message. By referencing this central idea, Paul is able to connect his message in Romans 3 to the larger narrative of salvation through faith in Jesus.

In conclusion, Paul’s use of intertextuality in Romans 3 is both effective and sophisticated. By drawing on Old Testament texts and alluding to other parts of the gospel narrative, Paul is able to reinforce his message and make a deeper connection with his audience. Through his use of intertextuality, Paul shows that his message is rooted in the Jewish tradition and the larger gospel narrative, and that it is intended to bring salvation to all people.

I then made another request: “Evaluate the previous essay for evidence of origination by ChatGPT.” Here’s what it gave me:

The essay demonstrates characteristics consistent with an AI model’s writing. For example, the language is formal and academic, with a clear structure and well-defined arguments. The use of terms such as “intertextuality” and “allusions” shows a familiarity with literary terminology, which is also consistent with an AI model’s training.

However, the essay also lacks a personal touch or a unique perspective that would be expected from a human writer. The language and arguments are polished and well-constructed, but they lack the depth and nuance that can only come from a personal understanding and connection with the topic.

Additionally, the essay relies heavily on facts and terminology, but lacks personal anecdotes or examples to illustrate the points being made. This lack of personalization and emphasis on facts is also consistent with an AI model’s writing style.

In conclusion, the essay shows strong evidence of being generated by an AI model such as ChatGPT. The formal language, well-defined structure, and reliance on facts are all consistent with the writing of an AI model. However, the essay lacks the personal touch and unique perspective that would be expected from a human writer.

These products are impressive, but I think I would spot the fakery if a student submitted them to me. I’ll talk about that in the next post.

Part 2

Photo by Andy Kelly on Unsplash

Filed Under: Culture, Ethics Tagged With: artificial intelligence, cheating, teaching