I am no longer chairing defenses or joining committees where students use generative AI for their writing

This post is by Lizzie. The photo is from Mount Rainier. 

I decided this week on some new rules for myself relating to graduate student training.

  • I will only chair defenses where the student states they did not use generative AI at all in the writing of their thesis.
  • I will only join committees for graduate students who agree not to use generative AI at all, or to use it only in limited (pre-defined) situations, in their writing.

Why am I doing this? Because I have limited hours in my days, weeks and life, and thus limited hours to dedicate to graduate student training, and I want that time used as effectively as possible in training folks. Time I spend reading AI-generated text, and possibly editing it for students, is not currently a good use of that time.

Why am I doing this now? Because it’s been bubbling up for a while, but very recently it was like someone threw a small grenade in the pot, and that got my attention. Meetings with an old friend and colleague pushed me, and, as though the universe wanted to make sure I got the message, I walked out of his house, arrived at the airport, and checked my email to prepare for a PhD thesis defense I was chairing the next day, only to find the thesis had been written in part with generative AI.

I personally never would have noticed, as the disclosure was tucked into one sentence of the preface, but the outside examiner’s report flagged that entire paragraphs appeared to be written by generative AI (ChatGPT or similar). As chair, a good bit of my job is overseeing how the outside examiner’s report is treated and considered (for those of you not familiar with the role of chair, you’re in the same boat I was in when I started a faculty job here in Canada; you can see what the chair is supposed to do here), so I scrambled to figure out the official UBC rules. They’re here.

I had already agreed to chair this defense, which was starting in about 16 hours, so I felt I could not back out, but I did want to get a sense of whether these rules had been followed. I wanted to know how the student used generative AI, whether they knew when to use quotation marks around text from it versus just taking it and running (this is what I understand of the UBC guidelines, and it honestly makes sense to me, though it is a matter of debate), and when and how they discussed this with their supervisor and supervisory committee. Had they asked their supervisor or supervisory committee to read and edit AI-generated and AI-edited text without telling them it was AI-generated and/or AI-edited? I suddenly realized that didn’t really seem okay to me.

My take-away from the whole thing that followed? We are in a deep mess that an alarming number of my colleagues would be happy to pretend is not happening, and we have left students adrift at sea (which, I realized cycling home from the defense, rhymes with ChatGPT). I feel fairly sure many of my colleagues would have been happier if I had not asked these questions, and I was pretty horrified by some of the discussions there and in the days since with colleagues. My small survey through conversations suggests to me that most people are fine with having their students use generative AI to edit their text, maybe help them write it, etc. One said, ‘I am so relieved I don’t have to edit my students’ writing so much anymore.’ I guess this means most people think it’s fine to ask their colleagues to review AI-generated text produced by their students? There was also a lot of throwing up of hands and saying, ‘oh, but there are no good guidelines, so what can we do?’

But reading the UBC policies and thinking about this a little made it easy for me to disagree: I can come up with some guidelines, just as I can come up with guidelines for old-school plagiarism. And just as with old-school plagiarism (before the era of Turnitin and such), I can never know whether students follow those guidelines. But I can make it clear that not following them is academic misconduct to me, and let students decide if they want to do that.

So here’s how it works if you’re a student and you want me to join your supervisory committee:

  • I’d prefer you not use generative AI to edit your writing at all (and I would rather read text with some grammatical errors).
  • I understand that you may want to use it to edit your English grammar. If you want to do this using generative AI, you must promise me you will not use generative AI beyond this, and show me that you know the difference. Thus, you need to:
    1) Make up a short (say, 1-3 page) document showing examples of both how you use AI to edit your writing *and* examples where using it without quotation marks would be considered plagiarism. Maybe also include examples where it is editing your language and not just your grammar, and confirm you will not do this.
    2) Find a way to share or record all of the changes made so you can document them.
  • These guidelines apply to your writing of text, not your writing of code. (Update from 19 July 2025: I see coding with generative AI as a different ball of wax. Generally I am not asked to review your code line by line, and code can be tested in ways your writing cannot, but this does not mean I am totally fine with generative AI for coding while opposed to it for writing.)
  • Update (19 July 2025): These guidelines are not necessarily set in stone for the rest of my career. They are the ones I see as best for now.

What does this mean for my own students and trainees? I don’t want them to use generative AI for text I will help them with. This is their chance to have me edit their grammar and flow and everything, and I think they should use it fully to learn to write as well as possible. If they want to write someone else an email using Grammarly, or leave the lab and do whatever, fine. But while I am here to help, I don’t want the help I give to be editing AI-generated text.

I would hope that other supervisors can offer this also, and thus, when I do read text that has not been edited by generative AI, it won’t have so many grammatical errors. I don’t think that, as supervisors, we should all feel so relieved to stop editing our students’ papers and other writing. We do realize that all the time we still spend will now go to reading AI-generated and/or AI-edited text, right? (Not to mention that all the text we’re working on is thus being rapidly fed into models owned by private companies.) That said, I’ll be chatting with my lab about this in the coming weeks and am open to being swayed, but the argument has got to be good.

On that note, I’ll address a few points I have heard more than once.

  • This is unfair to non-native speakers of English. The scientific publishing world (and conferences etc.) has elevated English, and that is unfair in many ways. But I don’t see this as a good fix for it at all. Depriving non-native speakers of the opportunity to have me help them with their writing in English does not seem a great outcome. No one currently in my lab is a native English speaker, and I am fine editing their text, even if it takes slightly longer (though honestly I don’t think it does, because editing grammar is so quick compared to really teaching someone to write well), but I will be looking to hear what they think.
  • Asking people to quote from ChatGPT or similar is insane; no one will do that. Yes, got it. Now ask yourself why you’re okay with hiding that you’re doing it by not quoting it. If it’s not your writing, why did you give it to me in such a way that I cannot tell it’s not your writing?
  • The world is changing, get with it. Sure, but this is about me spending the time I have to train and help others on reading and editing AI text, and that’s not a good change. If we want AI-generated text to be acceptable, then I think we need a lot of other changes too, such as writing A LOT LESS. I know this has been discussed before, and if we get to where papers are just methods/results and published (totally open) data with some AI-generated/edited text, then maybe I will start reading the bits of text again, but asking me to do it now across a 100-200 page thesis is not a good use of my time, and that means poorer training for students.

These rules flow into co-authorship, and I expect I have already agreed to co-author work with other people’s students who are writing with AI assistance, so I will have to figure out how to deal with that. I feel especially bad that it has taken me over two years to come up with these rules and guidelines so students know what I want and why I am asking for it.

I think we have really failed students in not sorting this out for them, and I would be very interested to hear whether you and/or your school have guidelines for thesis students (MSc and PhD). I have found pretty scant information and few really useful rules.

It seems we’re all just going along, and I am pretty worried about what I have heard from colleagues while discussing this. One of them, seeming exasperated at me when I said that lifting a sentence or most of a sentence from generative AI needed to be in quotation marks, said, ‘This is silly. How is this different from an advisor just editing text for a student?!’ To which I said, ‘Because a supervisor can be a coauthor on a paper and, according to UBC guidelines, ChatGPT cannot be.’ This left a long silence from which we never really recovered.
