

Maybe I am just being a grumpy old person, but one thing that gives me pause is the oft-repeated suggestion that using this stuff regularly is going to be the new normal for all sorts of jobs, so we might as well learn how to use it effectively, and teach students how to do that.

I'm just less sure that this will happen, and so it feels more reasonable to (for the next few years at least) continue teaching students to do the things that I have a lot more experience teaching them to do.

trying to figure it out

In one class, I've seen AI used in reading responses in the following ways: first, international students will use AI translators after writing in their native language. Second, some students have copied and pasted AI responses verbatim. And third, some use a combination of AI and their own writing.

I have received little official guidance from my school or department, besides the acknowledgment that people have no idea what to do yet. Some people have advised making AI an intentional part of the class, so that students use it for certain things but, hopefully, not for others. Some people just seem to ignore it.

The use of AI detectors has been discouraged, but I find a combination of multiple different detectors useful for getting a sense of what students are doing. Sometimes I can tell an assignment was AI generated, but not always. Currently in one class, we have reached out to students whom we suspect have used AI, giving them an opportunity to come in and talk about it. Some have just accepted a zero score for the assignment, but others have come in for good conversations.

However, I think it would have been more beneficial to make clearer how AI should be used in the class, and reiterate that often. AI can be used in helpful ways, like perhaps summarizing readings and translation. But using AI to write a response seems less appropriate. And students are often not given clear guidelines on when to use it or not, besides vague statements to not use AI at all.

Clear expectations are what students most need, along with follow-up on those expectations. Just because we don't quite know what to do yet is not a reason to avoid talking about it with students.

Also, I try to write assignments that require more critical thinking or personal response than an AI can provide. I do find the more personal the assignment and the more the students care, the less AI is used. If an assignment is difficult and hard to understand, they seem to turn to AI more.

Assc prof

As a stopgap, I tried nothing but in-class participation, quizzes, and exams in my intro classes this semester. It's been a disaster. Next semester I'm going to experiment with nothing but in-class writing assignments (credit/no credit), and a final exam for those who want a shot at an A.


Assc prof, I am curious to hear more about why it’s been a disaster. I am considering doing the same — nothing but in-class participation, quizzes, and exams. Maybe I should not do it. Thank you!

Just a grad student

Grad student here, teaching philosophy of science. I'm using only in-class assignments (reading responses in class, in-class essays) after a bad online summer class where it was impossible to deal with AI. It's a lot of work for me, but the outcome for the class overall, and I think for the students, is great. Since they are reading everything (they must, or they wouldn't be able to do their reading responses), they have A LOT to talk about in class, and the class is very lively and engaged; they are talking about the material in meaningful ways every day of class. I don't have to "push the river" at all.


Am I the only one who thinks that it is quite easy to pick out the obvious copy-and-paste use of AI? It is almost always written in a style that is pretty clearly non-human. It takes very few firm stances, instead saying things like "critics say" (with no citation) or "the position you might take on this depends on your moral commitments." And, for some of the essay questions I ask, it either (i) answers them in ways that no one writing based off of our class discussion possibly would, (ii) talks about different authors who have the same last name as the author we discussed, or (iii) consistently offers the same incorrect explanation.

Now, I am not confident that I could prove any of this to my university in a way that would meet their academic misconduct review standards. But I am comfortable telling students that their work exhibits significant evidence of chatbot/LLM usage and then giving them the option to either re-do the assignment or take an oral exam on their essay. (This is all in my syllabi as well.)

So far, every student has admitted their chatbot use when confronted and then chosen to re-do the assignment.

Assc prof


I teach at a school where the majority of students are not terribly well prepared or motivated. This, combined with the damage that phone addiction is causing to learning abilities, means that the average student really struggles to internalize concepts, even quite basic ones. My quizzes are 5 T/F and 5 MC, and aim just to reward students who are coming and paying attention. The exams are 10 T/F, 10 MC, 10 short answer, and 2 essays (from a pool). I also post a study guide. Despite a fairly easy format, and my doing my best to make the exams very doable (without being insultingly easy), the average has been a D for both exams and quizzes.


Just a grad student, can you explain how you design in-class reading response assignments and in-class essays? I'd like to try doing these for a metaphysics course.


Just a grad student

In the class I'm currently teaching, we do a "reading response" for the first 25 minutes of class (the class is 90 mins); closed book/closed media. I project a *philosophical* prompt to the class, which is a philosophical question about the material, written in such a way that they must be familiar with the material, but not just recite it. And I have 3 graded essays, simply written in class on select days, also closed book/closed media. After the reading responses, I sometimes start with a bit of lecturing, and sometimes by asking what people wrote. They have a lot to say. My hypothesis is that this is because they are actually doing the reading, and already had to "do the work" in their written response, so sharing after that and jumping in with comments on others' responses is fairly easy/painless.

Daniel Groll

Just a grad student: your in class reading responses sound really interesting! Could you give an example of a reading and a question you've asked about it? I'm also interested to hear how you grade the writing. My worry is that while students often read what I've assigned, they very often don't get crucial bits of it before we talk about it in class. I wouldn't want to penalize them for misunderstandings before we've covered the material in class. The trick, I guess, is to ask a question that will reveal if they did the reading...even if they misunderstood it.

I modified the *Reportatio* assignment that someone posted on Daily Nous over the summer. It worked really well and is basically AI-proof.



While Assc prof’s report might just be about intro classes, it might be interesting to put it into conversation with Daniel Groll’s comment. If students “very often don’t get crucial bits” when they read or if they don’t read carefully, as would be manifested in either case by low scores on T/F and MC questions, isn’t it a mistake to just go straight to writing assignments?

Daniel Weltman

This semester I have moved to specifications grading, where a student earns credit for a paper only if it satisfies certain criteria, one of which is that it doesn't sound like it was written by AI. If a paper fails to satisfy the criteria they can rewrite it. This is the first semester I have adopted this grading scheme and the semester isn't over yet, so it's premature to draw any firm conclusions, but so far it seems to be helping quite a bit. Instead of agonizing over a paper that seems AI written (do I grade it down? fail the student for plagiarism? ignore it? should I compare it to the student's other writing? what if it's not AI written?) I can just say "this paper sounds soulless; rewrite it and avoid X, Y, and Z so that it sounds like a human wrote it."

Just a grad student

We started the semester with a consideration of the demarcation problem in phil sci, so we read Larry Laudan's 1983, "The Demise of the Demarcation Problem" (among other papers on this, more on that below). Here is the prompt; this example is an easy one relative to some other prompts that came later in the semester.

Laudan cites a number of criteria that have been suggested for demarcating science:

apodictic certainty (demonstrable certainty; Aristotle)
infallibility of results (17th century views)
methodology/the "scientific method" (19th century views)
verifiability (logical positivists; claims are meaningful only if verifiable logically or empirically)
falsifiability (Popper)
well-testedness of claims
characterized by growth or progress
success of surprising predictions
produces useful and reliable knowledge (can be applied, as in technology)

Keeping Laudan's criticisms in mind, which criterion seems better to you than some other(s)? Give reasons, evidence, or examples in support of your answer. (Choose one criterion and compare it to one other, or choose one criterion and compare it to multiple others.)

It's true that, as someone said, there's a risk with this kind of structure (reading response first, lecture later) that students get penalized for misunderstanding. I've offset this risk by often restating the basic point of the reading IN the prompt (in one sentence or so), particularly with harder readings. I am also not a hard grader; if they've shown engagement and picked out philosophically interesting or important *parts* of the reading, that is good enough as a first pass/for their reading response (to be built upon in lecture). In part, I also simply accept this risk.

But also I am not terribly worried about this for the following reason. My basic pedagogical approach has been to get the students to DO some philosophy (in their reading responses, and live, in class) rather than to necessarily fully understand the details of particular arguments/to "absorb" philosophy. I try to support this goal by choosing readings that stand in dialectical opposition to the previous one (so after we read Laudan 1983, we read Mahner 2013, who argued Laudan was wrong); this also helps with engagement.

The other thing to say is that I realize that the prompt asks students to answer normatively, and that they didn't get training on how to do this in our phil sci class. I am also fine with this; we start with their intuitions and go from there. (We also discuss the is-ought problem, etc.)

The massive con for this whole approach is the amount of grading. On the plus side, I don't have to put in any work for the class website, and they have no homework besides reading. There are other trade-offs, etc. But students are very engaged, which is harder in a class on non-normative philosophy such as phil sci (at least not directly normative).

Daniel Groll

Thanks Just a grad student! This is very helpful. The grading piece might be a deal breaker for me. I wonder, though, if I might achieve the same effect by having something like 6 of these in the term (our terms are short), but not telling them when they'll be (and then letting them drop the worst one). Lots to think about!


For those of you who do in-class assignments, what is your policy for legitimate absences? For example, what if a student has to stay home with a sick child and so misses an in-class reading assignment or essay? And how do you handle other types of absence? What do you say if a student misses an important assignment in class but just says "I wasn't feeling well"?

Assc prof


I give every student two free absences, no questions asked (as well as two days' worth of late waivers). They can use them if they don't have proper documentation. This tends to take care of most cases.

re: absence

Anon, for all those reasons I definitely let a student miss in-class work. For each test, there's a day in the course schedule for makeups, roughly a week after the test in question. Students who missed the test show up and write the makeup; students who didn't miss it get a day off.

Just a grad student

I have a generous absence policy: 4 "free" absences, no questions asked. Students are required to manage and track their own attendance. The baseline is that attendance is mandatory, and given the 4-free-absence policy, absences are neither excused nor unexcused, and students should not contact me about them. The exception is chronic problems (e.g., serious, ongoing health issues that can't be accommodated by 4 free absences). This has massively cut down on the number of emails I get negotiating absences, which also helps offset the grading my approach requires. I also do not like positioning myself as their secretary/parent/boss/police, so I have basically refused to monitor their attendance. I have done this in part as a way of reframing my relationship to them.


This discussion has been very interesting! For those who do a lot of in class assignments (especially writing heavy ones): how do you deal with students who write very slowly or have handwriting related disabilities? We have a testing center that deals with exams. But it’s not feasible to do that for frequent in class responses.

Asst Prof

I require students to do all written work in a Google Doc on a Google Drive that I create. Google Docs has a "version history" feature which records all the keystrokes made in the document. If students just paste generative AI text, then the Google Doc will show this as having been pasted. And human writing looks a particular way because it involves pausing, deleting, arranging. I have found that this has deterred most students from using generative AI. Of course, a student can hand-type AI-generated text so that it looks like a human wrote it, but before generative AI I probably didn't catch all instances of plagiarism anyway.


Asst Prof,

Interesting! Can you give more details? For example, do you create a separate Google Doc for each student and then give them the link? If so, how? And are you doing these in class? How do the logistics of this work, exactly?


Asst Prof


This is not my idea. I got it from Dave Sayers here:

But to answer your question: I have each student create a folder with their name on a Google Drive that I host. Each student creates google docs for each assignment in this folder. I don't do this in class. I send them the link to the Google Drive so that they can do this on their own.

Sander Beckers

I had the same idea, and proposed it at my university (in the Netherlands), but it was immediately shot down due to all kinds of regulations. For example, this would mean that we're forcing students to share personal data with Google (and with us), and requiring them to do so with their university login especially creates all kinds of legal trouble.

So to me it's a no-brainer that some company will soon simply offer this functionality, but in a clean, privacy-friendly way. Having full access to the editing of a document should make it quite easy to compute a very reliable score that says how likely it is some text was written by a human, and all we would need from the software would be the finished document and the score, that's it. If anyone knows of existing software that does this, please let me know. If not, share the idea with your tech-savvy friends and they can make a lot of money creating it.
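The scoring idea described above can be sketched very roughly. This is a hypothetical illustration, not any real product's API: the `Revision` format, the `human_likelihood` function, and the 10-characters-per-second threshold are all assumptions. The heuristic simply treats any revision whose text appeared faster than a sustained human typing speed as pasted, and reports the fraction of added text that was plausibly typed.

```python
# Hypothetical sketch: score a document's edit history for "human-ness".
# Assumes revision data is available as (elapsed seconds, characters added)
# pairs; real version-history data would need to be mapped into this form.

from dataclasses import dataclass

@dataclass
class Revision:
    seconds: float       # time elapsed since the previous snapshot
    chars_added: int     # net characters added in this revision

def human_likelihood(revisions, max_cps=10.0):
    """Return the fraction of added text typed at a plausibly human rate.

    max_cps is an assumed ceiling on sustained human typing speed
    (characters per second); anything faster is treated as pasted.
    """
    typed = pasted = 0
    for r in revisions:
        if r.chars_added <= 0:
            continue  # deletions and rearrangements don't count against the writer
        rate = r.chars_added / max(r.seconds, 0.001)
        if rate <= max_cps:
            typed += r.chars_added
        else:
            pasted += r.chars_added
    total = typed + pasted
    return typed / total if total else 1.0

# A steady typist vs. the same typist plus one 2,000-character instant paste:
steady = [Revision(seconds=60, chars_added=300) for _ in range(5)]
pasty = steady + [Revision(seconds=2, chars_added=2000)]
print(human_likelihood(steady))  # 1.0
print(human_likelihood(pasty))   # roughly 0.43
```

A real system would of course need more than a single rate threshold (burst typists, dictation, offline drafting all break it), but even this crude version separates the two patterns the commenters describe: gradual composition versus wholesale pasting.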

SLAC prof

@Sander I've been getting unsolicited (but non-spammy) emails about something called Rumi that sounds like what you want. Haven't investigated it well enough to know how it handles privacy concerns.

Thanks for all this information, everyone else!
