
a few ideas

I like oral exams (e.g., discussion of student essays or an important course concept) and multimedia assignments like videos and podcasts that require individualized student presentations. These are not perfect and are hard to assess, but have been generally effective components of my asynchronous online classes in the pre-COVID, COVID, and ChatGPT eras. At the very least, I can tell who is clearly BSing and not engaging with course content at all.


OP here.

Having messed around with ChatGPT myself, I am skeptical of Marcus's claim that "if one makes essay prompts sufficiently detailed, requiring students for example to engage with specific parts or passages of texts, it can be hard to produce a ChatGPT-written essay." With the right prompts and motivation, you can get ChatGPT to do most of the work for you at a C level, at least.

a few ideas: You do oral exams in your asynchronous classes? I guess I assumed that there should be no face-to-face (online or in-person) contact for a class like that, but I suppose I could require orals if I'm flexible about when students can schedule them.

William Peden

Thus far, I have found a few minutes of follow-up questioning to be very effective with presentations. I imagine that the same approach would work for essays. It is quicker than an oral exam, yet quickly reveals whether the student understands what they have said or written.

It has the added bonus of offering something to students who didn't cheat: an opportunity to reflect on what they have said and discuss some directions for further thinking.


The main thing I've changed is moving to timed evaluations. It doesn't AI-proof things, but I think it helps. This is particularly true, I've found, for essays: I try to replicate the in-class essay by having the LMS give students a random question when they start the essay assignment, and limiting them to 60-90 minutes to write it. Students don't have access to the questions ahead of time. Again, it doesn't AI-proof the assignment, but it does mean that anyone using the AI is less able to effectively disguise it. The questions themselves are prompts that I've run through the AI, to make sure that it struggles to do an adequate job of answering the question (often because it conflates the text we read with another by the same author). I also tend to leave out as much identifying information as possible about the texts the prompt concerns, to maximize the chances that the AI will visibly flub it.

Timed essays on random topics not released beforehand make students _very_ nervous. They don't like it. On the other hand, my experience has been that they do better on these assignments. Partly, I think it's because they've done more of the reading. Partly, it's because they get to the point faster, and spend less time dawdling on other stuff or getting lost in secondary literature (I forbid the use of secondary literature in lower-level classes, but that doesn't stop them!). It also cuts down on their ability to cheat in other, non-AI-assisted ways.

It's too bad, because I think there's a lot of value in a traditional take-home essay assignment, especially when it comes to learning to craft and present an argument. But it looks to me like the returns on that process, for most of my students, are really quite slim. Their "in-class" essays are worse pieces of writing, but only marginally so. Most of my students would have handed in something very similar anyway, so far as writing and argumentation are concerned. What's improved is the content. And there's so much less CourseHero.

So, you know. It's imperfect, but it's something. Apparently Turnitin has built some AI detection into its product, so that might also help for essays. I'll experiment with it in one course the next time I teach.

SLAC assistant

I regularly teach one such course, and it's become a real pain, cheating-wise. I get ChatGPT submissions on assignments, papers, and final exams. In addition, ChatGPT detectors flag Grammarly, making it virtually impossible to catch students unless they admit it. Beyond ChatGPT, there is also a lot of collaboration on quizzes, which is honestly hard to police.


It's time consuming to create these, but for multiple choice questions:

(1) I write brief paragraphs that don't use any key words from the text/readings and then ask, "Which of the following philosophers is most likely to make an argument like this?" I typically offer at least 6 options, but sometimes as many as 12. Sometimes I also include manipulation checks as options (for example, I include "Gordon Shumway" as an option in one question; that was ALF's name) to try to weed out any students who are just guessing because they don't want to read through the paragraph carefully.

(2) I also write questions like "Which of the following best explains what Searle's Chinese room argument is meant to show?" Then I give at least 4 (but sometimes as many as 8) options that are worded as minimally differently as possible (a handy way to set up the options: A. If [X], then [Y]; B. If [X], then not [Y]; C. If not [X], then [Y]; D. If not [X], then not [Y], where "[X]" and "[Y]" are fairly lengthy summaries of relevant parts of the argument).

(3) Another version of (2) is "Which of the following best explains the difference between Term 1 and Term 2?", followed again by at least four lengthy options worded as minimally differently as possible.


Also, Cassie Finley has done some interesting stuff *incorporating* ChatGPT into the curriculum.



I am a graduate student who regularly teaches online asynchronous classes. My students regularly say that my online class has been far more valuable than other online classes they have taken. Here are some of my teaching strategies:

1. I am not a judge, a cop, or a taskmaster. My aim is not to police students to ensure that they do the work at a high level of quality; my job is to help them when they do want to learn. As a result, I think it is a mistake to structure assignments around ChatGPT.

Here's the thing: ChatGPT is not impressive, philosophically speaking. You don't really need to nail students for plagiarism per se, which is fraught and difficult to prove; instead, focus on the conceptual and philosophical problems with the work they turned in. This will leave you far less stressed about the sources of student work and far more focused on its quality.

2. Get rid of anything resembling busy work. I have students respond weekly to a discussion forum, but I do not require them to respond to one another, and I grade these contributions on completion. I tell them that these posts should have the informality of talking in class: you are allowed to be uncertain, to be unsure, to test out ideas, and to do other things that normal assignments do not allow for fear of your grade. Bullshit posts on the discussion forum are to some degree inevitable, but they are also contagious: the more people bullshit on them, the more others will be prone to bullshit too, so as not to waste their own time.

In general, I just try to emphasize the following: you don't get many opportunities in life to slow down and think reflectively about what really, ultimately matters. This class can just be bullshit to fill a requirement and get an A or whatever, but it can also be a lot more than that if you want. In my experience, a lot of students are actually eager to engage in a class that is not just a Cold War between instructor technology for catching cheaters and student technology for cheating.

James McGrath

The implications of ChatGPT for asynchronous teaching are the same as for face-to-face and all other modalities: teach students what it can and cannot do, and create assignments that require students to do things ChatGPT cannot entirely do for them.

