Recent advances in artificial intelligence have led many instructors to reconsider their pedagogical strategies.
Many of us are currently devising syllabi and assignments for the 2023–2024 academic year, so I thought it would be useful to build a mechanism for philosophers, if they wish, to share the pedagogical changes they plan to make in light of AI, along with their rationales.
To share your ideas, click here.
As you will see, I’ve asked for some basic information about the course(s) in which you plan to make these changes, since one’s strategies may vary for different kinds of courses.
Let me try to forestall a possible objection. Someone might think that aggregating and publicizing strategies like this is a bad idea, on the grounds that students who plan to commit academic misconduct might get access to it and thereby gain some advantage. In response, I’d say that a brief, general account of pedagogical changes (e.g. “I plan to have more in-class presentations and oral exams and fewer take-home essay assignments”) won’t help a student to cheat. Moreover, the pedagogical strategies that one intends to use with one’s students will become apparent to them anyway when the semester begins.
If you have feedback about this tool for sharing pedagogical ideas, please feel free to email me at [email protected].
Great idea! For the past few months, we have been discussing this domain in general terms--not limited to philosophy--on our newsletter/blog AutomatED. We have discussed strategies to teach *with* AI, ways to grade better with AI, how well AI detection tools work, how "AI-immune" specific assignments are, etc. We are also launching a collaborative learning community (tomorrow!) for more sharing of ideas, guides, software lists, etc. For those interested, the link is below.
In fact, we provide deep and *public* analyses of assignments' "AI-immunity" as part of a challenge we are running, so if the objection Kraay considers is legitimate, we are much further out of bounds than Kraay is. However, it's our view that students are already well aware of many strategies for using AI to cheat, as evidenced by the many viral TikToks, YouTube videos, etc. on the subject. The cat is waaaay out of the bag. In our experience, it is professors who are in the dark--or in denial--about how easily their assignments can be completed with AI tools; hence one goal of our challenge is to make crystal clear the degree to which they can. In the next few weeks, we will be posting our analysis of a Philosophy assignment that was submitted for our challenge by a professor from Pepperdine.
Posted by: Graham Clay | 08/01/2023 at 03:37 PM
I'm not employed as a teacher, so I can't contribute via the Google form. But one fairly simple way of preserving essay assignments in the ChatGPT era seems to be to require that students submit all their notes, quotes, scribbles, etc. alongside the main essay. (Somewhat like how many experimental researchers are required to make their data sets available.) ChatGPT would have a hard time recreating the messiness of these notes documents, I think, so students couldn't get around the requirement this way.
Posted by: Thomas | 08/02/2023 at 05:09 AM
@Thomas, what does it mean to ask students to submit 'all their notes, quotes, scribbles'? Most people just sit down at a word processor and write, reviewing and editing as needed. What precisely should such a person be submitting?
Posted by: cecil burrow | 08/02/2023 at 10:41 PM
This post at Daily Nous may be of interest: https://dailynous.com/2023/08/03/policing-is-not-pedagogy-on-the-supposed-threat-of-chatgpt-guest-post/
Posted by: Klaas Kraay | 08/04/2023 at 10:13 AM