In our most recent "how can we help you?" thread, a reader asks:
What do we make of this: https://www.youtube.com/watch?v=2I3KQjHfZjY
The creator does not direct their concerns towards Academic Philosophy (or any other part of the Humanities), but as someone pursuing their postgraduate/doctorate/postdoc, how seriously should one take the rather strongly made suggestions in the video?
I haven't watched the entire video yet, but basically the guy's angle seems to be that academics who use AI to do research (write rough drafts, etc.) and other tasks (teaching prep?)--and academic institutions that let and/or encourage them to do it--are likely to benefit and outcompete those that don't. Importantly, though, his catchphrase seems to be: "We're not losing skills, we're shifting the skill focus because now certain things are just done for us."
While I suspect he may be right that the tech may receive rapid uptake for "competitive advantage" reasons, I don't think that is likely to benefit people on balance. For, my understanding is that emerging research suggests the above catchphrase is just false: generative AI does a terrible job doing things like summarizing research, and moreover, using it does atrophy skills.
This is why I basically don't use it. I tried using it a couple of times to see how well it could help me do background research--instead of manually searching through Google, Google Scholar, PhilPapers, etc.--and guess what? It was basically useless: I spent more time trying to figure out whether the information and sources it was giving me were accurate than I would have spent just searching for stuff the old-fashioned way. And yes, I also don't want to lose my own ability to think, write, and have an authorial voice of my own. I haven't even bothered to try to use it for anything teaching-related, as that just seems to me like a cheat.
But these are just my thoughts. What are yours?
At least two (ChatGPT and Gemini) now have 'deep research' capabilities that (I have heard) are massive improvements in this area specifically. My understanding is that they use search more heavily and iteratively on the backend and then synthesize everything with text generation on the frontend. Have you tried those new features?
Posted by: sahpa | 06/26/2025 at 08:32 AM
I make a good-faith effort to make this kind of use of AI every week or so, but, like Marcus, I think that any gains are more than offset by all the checking and editing that's necessary.
Talking to people in various fields, it seems like there are some actual productivity gains to be had in programming, and maybe other areas too. But I think that most people who are genuinely impressed by AI from a humanities perspective don't have the right standards/training to be making that judgment call.
(Here I'm talking about being impressed by its outputs from a "professional philosophy" standpoint rather than the standpoint of someone who wants to pass an undergraduate course with minimal effort).
The least unpromising use I've explored so far is using LLMs to generate wrong answers to multiple choice questions that I write myself, and that I write the correct answers for myself. This use essentially leans into the LLM's tendency to bullshit, in the technical sense of that term.
But here there are still some losses due to checking, since you need to make sure that the questions and correct answers weren't actually changed, and you need to make sure that the "wrong" answers aren't right in some sense/too prone to creating debate. I haven't actually used questions like this in my teaching; it's just something I've experimented with that I still don't think is really worth it.
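In case it helps to see what I mean, here is a minimal sketch of that workflow, assuming the OpenAI Python client; the model name is just a placeholder, and the question and answer are made-up examples:

```python
# Rough sketch: ask an LLM for plausible-but-wrong distractors for an
# instructor-written multiple-choice question. Assumes the OpenAI Python
# client (pip install openai) and an API key in OPENAI_API_KEY.
from openai import OpenAI

client = OpenAI()

question = "Which fallacy does an argument commit if it affirms the consequent?"
correct_answer = "It infers P from 'if P then Q' together with Q."

prompt = (
    "Write three plausible but clearly incorrect answer options for this "
    f"multiple-choice question.\nQuestion: {question}\n"
    f"Correct answer: {correct_answer}\n"
    "Each wrong option must be unambiguously wrong, not debatable."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": prompt}],
)

# The output still needs the manual vetting described above: confirm the
# question and correct answer weren't altered, and that no "wrong" option
# is defensibly right.
print(response.choices[0].message.content)
```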
Posted by: anon | 06/26/2025 at 10:14 AM
I'm not paying it any mind. I'm pretty productive, and it's not clear to me that increasing that level a bit would do much of anything for me, or even would have done when I was on the market. The limiting factor for me on the market wasn't my list of pubs (which was pretty large for that career stage), but rather _other people giving me a chance_, which they didn't. Similarly, the real limiting factor for me pub-wise isn't having material floating around, it's referees. And no competent referee should be outsourcing their work to AI, period, so that's a limiting factor that should remain in place and, I think, is likely to.
Your mileage may vary, of course, but at the moment I can do everything better than the AI can, and in the end that expertise also translates to time saved. Using it properly requires significant time and effort: crafting the right set of prompts, reviewing the output, rewriting it in your own voice, and working with the baseline draft you're provided. I do a better job of that the first time around on my own; moreover, most of my writing takes place in the editing process, and that's a process you still have to do for yourself. I also do a better job of anticipating and steel-manning objections. So in the end, it doesn't seem like using AI helps all that much.
Even for something like translation, which it's really not bad at, it's not really a game-changer (for academic purposes, anyway), because you still have to read the original text, you still have to check the output carefully, then check it again, and you still have to do the time-consuming work yourself: adding all the explanatory notes, chasing down citations (which is especially tough when it's Greek or Latin verse that's been loosely rendered from memory or, worse, half-remembered verses translated into the language you're translating), and writing an introduction (which has to speak to material entirely absent from the AI's training set). Here again, I just do a better job the first time around, and once you factor in the time spent doing quality control, it's not that much less time than if I do it all myself. And when I do it entirely myself, I can rest assured it was done properly.
Posted by: Michel | 06/26/2025 at 10:14 AM
I wonder how AI use may disproportionately affect non-native speakers in academia. AI can sometimes write better English (in vastly less time) than some non-native speakers, and considering that language is not one of the core skills, they may be tempted to use AI in this way. And might that use, if regular enough, slowly atrophy one's core skills?
Posted by: just a speculation | 06/26/2025 at 02:57 PM
I suffer from depression, which results in frequent procrastination and writer's block. I recently had a section of a paper that I had been meaning to *start* for a week straight. The ideas were fleshed out, as I'd presented them several times. It was a necessary but boring part of an argument. I couldn't get myself to write it. I asked Claude to, giving it guidance and asking for just a 300-word sketch that I could beef up and edit. It did its job, and I did mine (to the point where pretty much every word Claude wrote had been replaced). I spent the next few days writing efficiently, then submitted the paper. I don't think I have a problem with this kind of use. But others can chime in.
Posted by: Depressed | 06/26/2025 at 03:28 PM
I have been using ChatGPT to help me when I get stuck writing part of a chapter of my dissertation. I will tell it the problem I have (e.g., I realize now that Premise 2 is worded vaguely, and I have two options, A or B. What are the pros and cons of each one?), and it will provide some suggestions. Almost always, I think its answers are mostly stupid - BUT usually that forces me to ask "why is this suggestion so wrong?" and somehow that often helps me get unstuck. I find that I just somehow do better in conversation, and over the summer, with my advisor less available, it has helped me get over some bumps, even though it is kind of like asking one of your encouraging but thinks-they-know-more-than-they-do undergrads to help you.
I also will often ask it to review the writing and phrasing of sections for clarity, or to flag sections that aren't precise enough - and I like its suggestions only about 20% of the time (if that), but that 20% can actually help me see where I could have phrased things a little bit better.
Sometimes it will label my arguments or terms in catchy ways that I like, too, which can be helpful! Though more often than not it replaces terms I use with really philosophically loaded terms that I don't use at all in the paper, which shows it really is not yet well suited to the level of precision needed for contemporary analytic philosophy.
I have also seen the research on how using it can atrophy skills, which I take very seriously, and after reading this research I have been using ChatGPT even more sparingly, even as a purely assistive tool, just because the quality of its assistance is not even close to being worth losing any skills.
Posted by: ChatGPT user | 06/26/2025 at 07:26 PM
I honestly figured everyone currently publishing had jumped on the AI bandwagon, but with a different focus.
In my experience, AI is simply incapable of writing a first draft paper from scratch that's of any real quality.
However, what I've used AI for (and what it's proven quite useful to me for) is as an editor and robustness checker.
Now, it's not going to allow you to use methods with which you have no familiarity, because it's going to make mistakes and you have to be able to catch those mistakes.
Also, in my experience, AI is good at editing smallish portions, though it has a difficult time matching my writing style, so I still had to rewrite the edits; even so, it did a good job of showing which edits were worth making.
In my two most recent papers, again, AI proved invaluable. Not because it wrote things for me, but because I was able to describe my data, the method I wanted to use, and what I was trying to show, and AI helped build that scaffolding. From there, I did most of the detail work / cleaned up errors that AI made and then trusted in the peer review process.
So, I don't use AI to generate ideas or generate papers. I use it as a kind of supercharged beta reader and editor. AI can read a paper in seconds, and usually give pretty decent feedback.
What makes it kind of funny, though, is AI is trained to be "nice," so it usually finds some way to assess your paper positively. To get around this, if you ask AI to be honest or blunt, it just rips the paper to absolute shreds.
It's led to exchanges like this:
AI: "This paper is paradigm-changing"
Me: "Really? Don't just say that for my sake. Feel free to be honest."
AI: "Honestly, after reassessment, the paper has several holes that would recommend against publication."
So, yes, AI is going to be a part of research, but I haven't seen evidence of it being able to write a rough draft as good as I can write one, or flawlessly interpret data. At least not yet. But I also don't use paid subscriptions, so I'm not using the most powerful engines available at a given point in time.
Posted by: ChastenedAuthor | 06/26/2025 at 09:01 PM
I do not use AI and I doubt I will use it in the future. People who use it are welcome to try to replace me. Good luck!
Posted by: Daniel Weltman | 06/26/2025 at 11:17 PM
This sort of stuff baffles me. I'm an academic because I like doing research, I like reading papers, I like writing papers, I like editing papers. If I didn't like doing those things, I'd get a job that didn't pay so terribly.
I don't understand why anyone would endure these terrible salaries if they didn't enjoy (most of) the work. Just go get a job in finance instead. At the very least, that would stop me having to listen to people continuously go on about how I "need to embrace AI in my work."
Posted by: baffled | 06/27/2025 at 01:35 AM
@just a speculation, I'm a non-native English speaker. I've started using AI for proofreading purposes and for specific queries when I'm unsure whether an expression I'm about to use is idiomatic / grammatical. I've found it helpful for such things, but I don't see how that would contribute to the atrophy of core skills. Unless you're thinking that non-native speakers might use AI to write papers from scratch?
Posted by: B | 06/27/2025 at 02:49 AM
A brief comment about the linked video: the person making this argument for the indispensability of AI has positioned himself to profit from the use of AI in academia. He has a website offering consulting services for AI startups ("how to make AI that researchers need"), and he has a website offering consulting services to academics ("how to use AI to supercharge your research," etc.). This isn't, of course, to say he's *wrong* about what he's claiming (though independently I think he is), but it's a reason to be wary.
I'm so tired of this dialectic about AI. *If* anyone gets left behind, it'll be me, because I can't bring myself around to using it at all. (No judgment, though, for those who use it in the ways articulated in this thread--these seem like good ideas.)
Posted by: tired of AI | 06/27/2025 at 09:23 AM
Tricking your competitors into wasting their time futzing with automated bullshit generators may be an effective way to get a competitive advantage. It's sleazy, though.
Posted by: Untenured Business Ethicist | 06/27/2025 at 10:57 AM
This is a fascinating discussion. I have written on AI from the angle of concerns about students cheating (see link: https://onlinelibrary.wiley.com/doi/full/10.1111/edth.70026), but this is a refreshing take. I find the "atrophy of skills" fear to be overstated. It reminds me of the fear of not being able to do math because of calculators, or even the classic concern that writing can be detrimental to doing philosophy.

As several posters noted, AI is beneficial once you have that first draft. The better and more polished that first draft is, the more useful AI appears to be for me ("garbage in, garbage out," as they say in CS). Now, like other commenters here, I am a non-native English speaker, so that factor also plays a tremendous role. Still, I use AI for spell checks, grammar, tone, and revisions for everything, and it seems foolish not to. I am extremely transparent with my students about how they can use AI, but also about how I use AI too; their perceptions do matter, as noted in the press recently.

I do not use lecture slides, by the way; my stick-figure philosophy sketches, drawn with chalk or dry-erase markers, are infamous. But if I did, I would use AI in some capacity, no doubt. It would just be foolish not to.
Posted by: Henry Lara-Steidel | 06/27/2025 at 11:19 AM
I am with Marcus (and a few others here) on this one. I think there is a craft side to writing academic work that some people find really enjoyable and rewarding (even if at times frustrating). My identity as an academic is tied to this in a strong way (I am late career). I think AI will generate not only papers and referee reports, but also some new anxieties ... and this will not be good for those who become dependent on it.
Posted by: not quite a luddite | 06/27/2025 at 11:36 AM
A related question for those in this thread who do use LLMs and other generative AI: when do you include that fact in your acknowledgments, if ever?
To put a finer point on it, among the cases below, which would you think authors ought to disclose when submitting material for peer review? (And by parity, would you think an author should disclose the same help if it were given by a colleague? Is there a relevant distinction?)
1. generating an outline for a paper
2. generating whole paragraphs of a paper that are then revised
3. generating translations of a primary source
4. reviewing a complete draft for grammatical errors
5. reviewing a complete draft for stylistic issues
6. reviewing a complete draft for argument strength
7. reviewing a complete translation for accuracy
8. others?
I suppose I'd attribute some form of co-authorship to a human who did 1 through 3, and acknowledge some help for 6 and 7. Depending on the extent, also for 4 and 5.
I also suppose if I found out someone whose work I was peer reviewing did not disclose these (AI or personal help), I would find it troubling enough to run by the editor. But I surmise (worry?) that disclosures like this are not currently happening, and it is not easy to identify the help of AI.
Posted by: Malcolm | 06/27/2025 at 12:48 PM
@just a speculation and @B I am a non-native English speaker/philosopher, and allowing ChatGPT to proofread my papers is always a great temptation. I try to stay away, since I have noticed how using it makes me anxious about my English capabilities. However, one referee once told me that my paper was "too schematic," and I can't help but think that they were dismissing my "pragmatic" use of the English language, and that I should have used AI.
Posted by: M | 06/27/2025 at 01:52 PM
@Malcolm, re item 3: Machine translation is not reliable. Academics should not rely on machine translation when accuracy matters.
This video shows how bad Google Translate is at accurately translating Icelandic.
https://www.youtube.com/shorts/4ClLikHTcAc
Posted by: Untenured Business Ethicist | 06/28/2025 at 09:24 AM
@Untenured Business Ethicist, I'm not endorsing the use of machine translation (or generative AI) but asking people who do use it when they would cite it. (I wrote an appendix to a recent book on Sanskrit warning against the use of Google Translate, for what it's worth.)
Note, though, that Google Translate is not the only machine translation available, and academics are involved in developing tools explicitly advertised as for scholarly research, e.g. Dharmamitra. But again, I'm not endorsing even the use of this tool, though I'm watching its development with interest.
Posted by: Malcolm | 06/28/2025 at 11:17 AM
@Untenured Business Ethicist: why think Icelandic is representative? It is spoken by only about 300K people in one island nation.
Posted by: sahpa | 06/28/2025 at 11:31 AM
@sahpa: The example of Icelandic is significant because it shows how little Google cares about the reliability of a translation product it has chosen to make available.
@Malcolm: Do the tools advertised for scholarly research work as advertised? If not, no one should be using them, so the question whether to cite them should not arise.
Posted by: Untenured Business Ethicist | 06/28/2025 at 01:41 PM
@Untenured Business Ethicist - I'm not sure I follow. Just because someone shouldn't be using some tool doesn't mean that they won't. And if they do, there may still be an obligation to cite.
Maybe I shouldn't ask my colleague Dr. X to help me write my paper because they are not reliably good at philosophy. But, not knowing this (or believing that they are), I do ask them. Depending on the way they helped me, it's still possible I should cite them as a coauthor or acknowledge their help in footnotes.
I'm interested in knowing, from people who *actually do* use the tools mentioned in this thread (whether or not they *should* use them), if they do cite them, and if so, under what circumstances they cite them.
That's all I'm after.
Posted by: Malcolm | 06/28/2025 at 07:25 PM
@Malcolm,
You are more generous with giving acknowledgements than people whose papers I've read and commented on :-)
Posted by: ChastenedAuthor | 06/28/2025 at 11:02 PM
My suspicion is that those of us thinking about how AI will enable us to more easily produce the sort of thing we are currently producing by hand will be left in the dust just as much as those of us who refuse to use AI. AI is going to enable new sorts of products, just as logarithm tables did. And once those new research products are around, I suspect producing a cute little paper (of the sort I love to produce) will be a bit antiquated and not nearly as useful as what is to come.
Do I know what the new product is? No. But if the history of tech is any suggestion, there are smart tinkerers out there, and some of them will have some exciting breakthroughs. (Breakthroughs which may and likely will have huge downsides.) And all of the debate we are having will be for the historians.
Posted by: His suit bespoke | 06/29/2025 at 12:23 AM
@Malcolm I am not sure the analogy with acknowledging people who commented or gave feedback on our work carries over to AI as it exists today (LLMs). At its best, it can help the way computers have helped chess grandmasters for a while: suggesting moves, honing skills, etc. That is, I think generative AI can help you write and do philosophy better, but any gains and advances made are due to you, just as you learned and got better at doing philosophy first through your academic training and then through the continuing refinement of those skills as you teach, write, and do philosophy.
Posted by: Henry Lara-Steidel | 06/29/2025 at 10:32 AM
@Malcolm: The question you asked in your most recent post is about when people who use LLMs *do* cite their LLM use. The question you initially asked is about when people who use LLMs *should* cite their LLM use. These are different questions, and they have different answers. LLM users are not known for their integrity or good judgment.
Another question: what undisclosed uses of "AI" would be grounds for retraction if discovered after an article is published? I would think undisclosed use of machine translation of primary sources would be grounds for retraction if the algorithm introduces errors that significantly affect the content of the paper.
Posted by: Untenured Business Ethicist | 06/29/2025 at 11:06 AM
@Untenured Actually, I asked both questions in the original post:
"A related question for those in this thread who do use LLMs and other generative AI: when do you include that fact in your acknowledgments, if ever?
To put a finer point on it, among the cases below, which would you think authors ought to disclose when submitting material for peer review?"
I'm interested in what people actually do and in their thoughts about what they think they ought to do. My recent response to you took your claim as being about the second of my questions (the ought), since you said people *shouldn't* be using AI translation. But if your claim is that since AI translation doesn't work properly, people won't actually be using it, so the question of what they actually *do* doesn't arise, I think that's also suspect (people make poor judgments, as you say).
I don't want to nitpick over this any longer (I think it's clear as you say that there are two questions here), as I'd like to step back and hear what people who do use the tools have to say.
For instance, I'm curious if people agree with @Henry Lara-Steidel's take on the analogy with help from humans who perform the same tasks.
And I am curious if you or others would think differently about retraction in the case you describe even if the translation were accurate. Personally, I would also at least seek editorial advice if I discovered translations were purely AI-generated in a paper, since typically translations published in the area I work in are presented as the work of the author or are cited as the work of someone else.
Posted by: Malcolm | 06/29/2025 at 07:44 PM
Malcolm, you raise interesting questions about artificially translated texts. One take here is that for someone to genuinely author something, or for it to be that person's own work, they need to substantially exercise their own skill and judgment in the production of the work. Maybe this is what should drive us here, rather than accuracy alone. Taking this view for granted, we can generate a spectrum of cases, starting from no skill and judgment: e.g., ask the LLM to translate a page, copy/paste it into the article, and keep going. Here the authorship/ownership claim to the translation seems false.
But then on the other end of the spectrum we can imagine cases that required a significant exercise of skill and judgment: e.g., for the tool you mentioned, giving it a page of text and then checking the classification of various words one by one, checking various "decisions" about sentence structure, etc., and then making a number of edits as seems appropriate. We could further imagine that the majority of the original "decisions" have been revised by the end of the process. Here I'm more inclined to think the authorship/ownership claim is true.
(But are the people with the skills to do this better off translating it on their own to begin with? I'm inclined to say "yes".)
Sometimes I see writers providing their own translations noting that they "consulted" translations of the same text by X, Y, Z in the production of their own translation. I'm inclined to think a similar "consultation" note would be appropriate for someone who heavily modifies AI outputs.
Posted by: anon | 06/30/2025 at 07:46 AM
Do any of the posters who are fixated on having people who use LLMs cite them want to even attempt an argument for why those users should cite them?
Posted by: Cap | 06/30/2025 at 08:18 AM
Cap
Philosophers do not work in a vacuum. In fact, the norms of scholarship in the sciences are driving publication norms across the academy to a large extent. See the following, from Springer/Nature:
"Use of an LLM should be properly documented in the Methods section (and if a Methods section is not available, in a suitable alternative part) of the manuscript. The use of an LLM (or other AI-tool) for “AI assisted copy editing” purposes does not need to be declared. In this context, we define the term "AI assisted copy editing" as AI-assisted improvements to human-generated texts for readability and style, and to ensure that the texts are free of errors in grammar, spelling, punctuation and tone. These AI-assisted improvements may include wording and formatting changes to the texts, but do not include generative editorial work and autonomous content creation. In all cases, there must be human accountability for the final version of the text and agreement from the authors that the edits reflect their original work."
https://www.nature.com/nature-portfolio/editorial-policies/ai
Posted by: Brad | 06/30/2025 at 08:53 AM