My students and I talked about ChatGPT in my first-year writing course this week. Some dispatches from the front lines of teaching in the age of AI: 🧵 1/
Some context: this writing course is specifically themed around teaching and learning in the higher ed classroom. I’m lucky to be able to discuss AI extensively with my students both as part of the course content and in terms of course policy. 2/
For Tuesday, I asked them to read several news articles and blog posts that explain what AI text generators are, how they work, and some of the ethical considerations raised by their use, particularly in the context of the classroom. 3/
They were pretty interested. About half knew about AI text generators already, though none appear to have used them extensively. A couple expressed surprise that we were discussing it, given the opportunities AI presents for cheating. 4/
I asked them how AI text generators might be useful to writers. They observed pretty immediately that AI was really good at writing five-paragraph essays and following specific writing formulas. 5/
I should note here that students were well-prepared for this conversation, having read the first part of @biblioracle’s “Why They Can’t Write: Killing the Five-Paragraph Essay” last week. This is now required reading for all my writing courses. 6/
When I asked what they thought AI’s limitations would be, they observed that it likely wasn’t good at critical or creative thinking. And that if we all started using AI to generate everything we wrote, we would never come up with any new ideas. 7/
After a preliminary discussion, I showed them some text ChatGPT had generated for me on this prompt: “Write a Chronicle of Higher Education op-ed for an audience of college professors arguing that faculty should care about student belonging in the classroom.” 8/
They were impressed with the coherence and organization of the generated text. But they felt the tone was robotic and soulless and that it lacked some of the specifics necessary to make the argument compelling. 9/
One student asked if submitting unedited AI text for an assignment would be plagiarism. I turned the question back to the class. They pretty quickly concluded (on their own) that while it wouldn’t technically be plagiarism, it would still count as academic dishonesty. 10/
Before Thursday’s class, I asked them to play around with ChatGPT or the OpenAI Playground by giving the AI a writing prompt and then evaluating the strengths and weaknesses of its output, and even giving it a grade. 11/
They turned out some fascinating stuff, and I could go on all day about it. Those who asked for form letters or input standard essay prompts on common topics were pretty impressed with what it produced. A-level work. 12/
Those with prompts on niche subjects noted that the AI seemed to lack expertise and produce a lot of fluff rather than substantive ideas. Some caught factual inaccuracies. C work at best. 13/
Those with prompts that asked the AI to write a personal reflection noted its lack of specificity and the generic nature of its narratives. Almost everyone felt that the AI text was repetitive and unoriginal. 14/
Their main takeaways were that AI was useful for formulaic or highly organized texts. Its output was, however, “too perfect” and lacked authenticity. At the same time, the AI dealt imperfectly with highly specialized topics and personal or creative writing. 15/
I then asked them to generate some guidelines for AI use in our class, with learning and academic integrity in mind. They came up with a list of “dos” and “don’ts.” Their main recommendations: 16/
DO use AI to overcome writer’s block, get ideas for your work, create potential outlines or organizations, and proofread your sentences. DON’T take what it says at face value, copy and paste large chunks of AI text into your work, or use AI text without making it your own. 17/
I also asked them about attribution: we agreed that including ChatGPT as an author or citing it as a source both presented problems. Instead, we thought it would be more useful for readers if authors disclosed their use of AI and explained how they used it. 18/
One option for their first paper is to use ChatGPT to generate an argument for a particular audience and then substantially rewrite and revise that argument to make it their own (an idea I stole from @karenraycosta). We’ll see how many choose this assignment and how they do. 19/
My main takeaway: students have pretty nuanced views of AI text generators, their capabilities, and their limitations. Talking frankly with them about how to use AI to support their learning, and how AI might get in the way of their learning, is the way forward. 20/
Will some use it to cheat? Probably. But I’ve had many students who admit to cheating on essays anyway (in mostly undetectable ways). And when you engage them in real conversations about AI use and give them worthwhile assignments, they’re much less likely to do so. 21/
I’m lucky to have learned from @Marc__Watkins, @tnbob, and others on this topic. Let’s keep talking! I’m interested to hear how others are engaging their students in conversations about AI.
This is blowing up! A few other notes: we should be clear with students about the data privacy issues these AI tools present and allow them to opt out if they want. This is something I’m still learning more about, but we did discuss it briefly in class.
Some folks have wondered about the choice not to cite ChatGPT. What we had in mind was writing a short note at the end of submitted writing that indicated where and how ChatGPT was used, rather than a citation on a references page.
Citation presents a problem because AI is neither an author nor a source of information (at least it shouldn’t be) but a writing aid. We thought it would be more useful for readers to know how the tool was used in the writing process—information a citation alone doesn’t provide.
Some folks have noted that you can ask ChatGPT to refine its output, and its initial text can be substantially improved according to user specifications. Definitely true! I haven’t asked students to do this, but it would certainly be an interesting exercise.
For those who are worried students could use this function to cheat, even on assignments that ask them to speak to their own experience in a personal tone: that’s certainly a possibility!
But they would have to have quite a bit of writing skill already to know *how* to ask it to improve. And I think that asking them to speak about and reflect on their writing process would help prevent this kind of use.
Anyway, I think it’s important to start from a place of trust and transparency; otherwise we risk ruining our relationship with students as @Marc__Watkins and others have pointed out: marcwatkins.substack.com/p/our-obsession-with-cheating-is-ruining?utm_source=direct&utm_campaign=post...
Finally, some have noted that while ChatGPT currently fails at a lot of writing tasks, it’s likely to get much better in the future. That's crossed my mind as well.
The only way to deal with this, imo, is to be very clear ourselves, and make very clear to students, the value of what they learn in writing-focused courses. And to give students writing tasks that are authentic and worthwhile.
If students aren’t getting it or don’t find the work valuable, then we should work with them to find out why and adjust accordingly. Yes, some students will cheat, but that’s not new. Let’s use this as an opportunity to reevaluate and improve our pedagogy for us and our students.