by Rachel Azima, Writing Center Director and Associate Professor of Practice, English
To begin, a few words about what this is not: a wholesale condemnation of generative AI or an exhortation to avoid it at all costs. There are numerous ways in which we can and do harness AI tools to our individual and collective benefit: examining large datasets that are beyond a human’s ability to take in efficiently or comprehensively; accessibility tools such as automatic captions created by AI; the list goes on. But I/we want to encourage caution around employing generative AI tools for tasks that humans are best at: processes that involve creativity and critical thinking. Writing is one of those tasks.
The voices proclaiming the benefits of genAI are loud and numerous. But trade-offs are inevitable with any new technology, and our students need to understand the downsides and costs of turning over their cognitive processes to these automated tools. It is incumbent upon us as educators to consider what generative AI and LLMs in particular aren’t (yet) good at and to teach our students critical digital literacies around AI. And as writing and learning change around us, it’s well worth taking the time to reflect upon our own values so we can instantiate them in every aspect of our own working lives.
I (Rachel) would like to point out that I don’t speak for all writing center directors, nor for writing studies scholars more broadly, though some are as critical of genAI for writing as I am. But I am writing out of a place of deep concern. Universities run on writing: virtually all of us use it to disseminate knowledge, irrespective of our field of study, and nearly every discipline asks students to demonstrate their learning in writing at some point in their educational careers. Consequently, we all must think deeply about how these new technologies will figure into our instruction around that writing. My role puts me into contact with writing from all across the university, and while there are obvious differences in conventions and expectations in different fields, there are a number of commonalities that I would encourage instructors to keep at the front of their minds when assigning and evaluating writing in our ever-changing technological landscape.
- Writing is a process.
This statement likely seems incredibly obvious, but it is crucial to emphasize this fact to students and to foreground it when designing and implementing writing assignments in our courses. It has long been a truism in writing center studies that “our job is to produce better writers, not better writing”—that is, we maintain a focus on process over product; on the learning that takes place while revising (and over the long term) rather than on a single, isolated piece of writing (North, 1984). While the clarity and effectiveness of a given piece of writing certainly matters, the journey is as important as the destination.
So one thing we can do is emphasize to students that writing is a process, and that if they don’t engage in the (whole) process, they’re not actually writing. That means everything from initial ideation down to crafting the sentences that communicate those ideas. If what we hope for is for students to develop certain habits of mind—ways of engaging in inquiry and performing analyses and communicating ideas that are appropriate to our fields—the onus is on us to demonstrate what they will and won’t learn, depending on how they go about completing their assignments and how much of that process they complete themselves.
- Writing is a tool for discovery.
This is so often news to students! The idea that they might write their way into their insights is something they may not yet have internalized by the time we see them in our classrooms. But writing is indeed a tool for discovery, not merely a means of reporting information or ideas that are already known to the writer. Incorporating a drafting process into assigned writing—that is, asking for rough drafts and evidence of the early stages of their thinking—can be helpful not only to normalize the sharing of works in progress but also to alleviate concerns about whether students are doing the writing themselves. Incorporating multiple drafts and revisions also helps combat the notion that writing should be strictly goal-oriented, in the sense of producing enough words to meet a minimum and calling it done. Students frequently need reminding that writing may well mean discarding ideas or interpretations that don’t work. Deleting and reframing what one has already written are not just okay; they are positive parts of the process that get us to stronger, more defensible end points. If we, as instructors, discuss how we work through ideas ourselves—that sometimes we don’t know what we think until we try writing it down; that doing so is, in and of itself, a key step in the process—we can help students reach a more nuanced and productive understanding of what writing means and what it can do. No machine learning tool can take the place of this exploration. GenAI tools are more likely to foreclose possibilities, confidently presenting one option as the only option. And if students don’t wrestle with their own ideas from the get-go, it becomes far more difficult to recognize such limitations when a text is presented as definitive.
- Writing is hard.
This is not a comfortable truth for many, but it is about as close to a universal truth as we can get (barring the presence of divine inspiration). Some folks find writing easier than others, it’s true. But for many, putting thoughts into writing is difficult. Simply starting the process is a challenge for many people, including me (Rachel). How tempting, then, to go with whatever makes it easier! But figuring out what to do with a blank page is part of the important cognitive work and development that happens when we write. We have to help students understand that this struggle is one of the places where they learn. That it’s okay for things to be hard; that it’s a good thing, not a bad thing, to slow some processes down. We don’t need to bend to capitalist logics that dictate that efficiency and productivity must be the values we place above all others. Learning can’t be rushed. And if writing is a tool for learning, it’s worth students’ time to slow this down, too.
And, as I will reiterate below, there’s no need to engage in this struggle alone. Sitting down to brainstorm with a Writing Center consultant means students need not face that blinking cursor solo. And writing alongside a consultant who can ask prompting questions, build a writer’s confidence, or merely write beside them in solidarity provides intellectual and emotional supports that are uniquely human in an increasingly digital world.
So, if we accept these basic principles around writing, what else can we do to incorporate writing into our courses effectively, given the array of technological tools at students’ disposal? Here are a few ideas.
- Engage candidly and openly with AI tools in our courses.
All of us will have different understandings of the relative usefulness or harmfulness of AI tools in our disciplines. But as far as using genAI tools for writing (and image generation) goes, not all students know how readily these tools replicate the biases built into the datasets used to train the models. We can talk directly about those biases, and about how, if students do use these tools, they need to be on the lookout for them, as well as for the ways genAI tools tend to confidently present singular answers to complex problems. It’s not only students who are using these tools as search engines, spreading misinformation to us all (Lopatto, 2024).
Furthermore, it’s important to help students understand that “Intelligence” is a misnomer in the name. AI tools can’t think; they can’t reason. They have no ethics. All genAI can do is regurgitate existing language in a probabilistic way that a) isn’t thought, and b) is nothing new. But we can tap quite easily into students’ own desire to innovate. Many are creative writers who see value in expressing their thoughts and feelings in the written word; even if they aren’t, or if the discipline they’re writing into seems far removed from such creative ventures, most students hope to contribute something new to the world. Helping them recognize themselves—their own human brains—as an irreplaceable source of creativity and innovation can help motivate them to engage in the intellectual work we’re asking for.
- Rethink assignment design.
Realistically, some assignment types simply no longer hold the value they used to. Discussion board posts, for instance, are incredibly tempting for students to offload to technological tools. And there are strong temptations for instructors as well as students to take advantage of time-saving measures, including the siren call of tools that evaluate student writing for you. But I/we would urge anyone reading this not to assign writing you don’t want to read yourself. Why should a student be invested in the writing if we aren’t? We don’t have to keep assigning the same kind of writing just because it’s what we experienced, unless it’s a genre we know they’ll have to employ later in their working lives. And it’s worth finding ways students can feel genuinely invested in their assignments. It’s true that some disciplines are highly constrained by conventions and expectations for what kinds of information gets shared and how. But in others, there’s room for students to make connections to their own lives. If they care about a topic, if they can see the relevance to their own experiences and future, they’re more likely to want to put time into doing the writing themselves.
Moreover: it is imperative to revisit our disciplinary and pedagogical values, and what, specifically, we want students to learn. What do we need students to be able to demonstrate they can do? Is writing the best way to do this? When it is, how can we shape the assignment to be both useful and approachable? The next, crucial step is to make these purposes crystal clear to our students. Why are we asking them to do this kind of writing? What learning does it demonstrate? What does it accomplish? If students understand why they’re doing something—what they gain from going through the process themselves—they’re far more likely to put the effort into doing the kind of work that a) rewards our own time spent reading it, and b) helps them learn to think in the analytical and/or creative ways we hope they will.
- Rethink evaluation.
Refocusing on our core values—what, ultimately, we hope students take away from our courses—can thus help guide assignment design. But it is also on us as instructors to think very carefully about what values around writing we’re communicating to students, and again, what our end goals are for their learning, when we evaluate their written work. If students conclude that an instructor prioritizes clean, flawless prose over the ideas they’re trying to express, that creates a huge temptation to rely excessively on genAI tools to produce writing that meets that standard. Undoubtedly, clear and effective writing is valuable. At the Writing Center, we’re here to help writers get there (over time). But sentence-level issues affecting clarity can always be dealt with later in the process (and indeed, it’s the more superficial aspects of writing that machines are good at addressing). Personally, as an instructor, I would rather see messy prose that reveals the writer has wrestled with complex, nuanced ideas than something that looks tidy but does not actually reflect any substantive thought.
For those who don’t want students to use genAI tools for writing, it can be tempting to turn to (purported) AI detectors. But at the end of the day, operating punitively isn’t the answer. There’s no foolproof way to discern whether a student has employed AI tools in a way we might consider inappropriate; false positives are incredibly common, and detectors tend to flag prose written by non-native speakers of English more often (Liang et al., 2023). More to the point, relying on detectors creates an unhelpful atmosphere of mistrust in our classrooms.
Even for instructors who hope students will make minimal or no use of genAI tools as they write, it’s a topic that needs to be addressed directly: we can’t ignore the ubiquity of generative AI in our everyday lives. Some of us will embrace it; some will resist. But it’s worth considering that “well, it’s here to stay; we have to teach students how to use it” isn’t the only principled or reasonable position to take (Sano-Franchini et al., 2024). Certainly, we can teach students how to use these tools to obtain their desired results—or, we can continue to teach students how to engage directly in activities that promote their learning. I was talking with a research collaborator about how one of her colleagues spent a class period teaching students how to generate prompts that would solicit helpful feedback on writing, in preparation for a peer review session. But one could also spend that class time having students learn by doing as they engage in conversation around their writing. Even if said conversation is imperfect, students will arguably learn more about writing, oral communication, and interpersonal skills by engaging in dialogue with one another, rather than by practicing how to have technology mediate this process. If nothing else, the environmental costs (which are not evenly distributed) need to be factored in when considering the costs and benefits of infusing AI into our curricula.
And research is beginning to show the downsides of cognitive offloading (Zhai et al., 2024), as well as correlations between higher AI usage and decreases in critical thinking (Gerlich, 2025). That second study found that age and higher education levels correlated with more critical thinking and less dependence on AI. So in some ways, it is more crucial than ever to keep doing what we have always done in higher education, albeit in new and better ways. Students must leave our institutions with an ability to discern between facts and mis/disinformation, with the knowledge of how to ask probing questions and interrogate the world around them, and with the ability to innovate solutions to the many challenges in front of us. Our engagement as individuals and teachers with emerging technologies can and will vary, but holding fast to our core values around education in a world marked by uncertainty won’t steer us wrong.
And I’d be remiss if I didn’t stress that at the Writing Center, we’re here to support all of it: instructors as they design and implement writing assignments (including through the Writing Fellows Program, in collaboration with the CTT); students at every stage of the writing process, including the difficult stage of starting a piece of writing. Students at every level can come in with an assignment, and we can ask probing questions to help them brainstorm. They can write alongside us during accountability appointments and have support for generating and organizing ideas as well as polishing how they articulate them. We can build writers’ confidence as we engage in dialogue with them—human interactions that can’t be replaced by machines. We are delighted to bring our expertise around the writing process to your own efforts to assign meaningful writing in your courses. All of us are facing myriad challenges right now, inside and outside the classroom. When it comes to writing, we at the Writing Center are here to provide solidarity and encouragement as well as concrete support around every step of the process.
References
North, S. M. (1984). The idea of a writing center. College English, 46(5), 433-446.
Gerlich, M. (2025). AI tools in society: Impacts on cognitive offloading and the future of critical thinking. Societies, 15(1), 6. https://doi.org/10.3390/soc15010006
Liang, W., Yuksekgonul, M., Mao, Y., Wu, E., & Zou, J. (2023). GPT detectors are biased against non-native English writers. Patterns, 4(7), 100779. https://doi.org/10.1016/j.patter.2023.100779
Lopatto, E. (2024, December 5). Stop using generative AI as a search engine. The Verge. https://www.theverge.com/2024/12/5/24313222/chatgpt-pardon-biden-bush-esquire
Sano-Franchini, J., McIntyre, M., & Fernandes, M. (2024). Refusing genAI in writing studies: A quickstart guide. Refusing Generative AI in Writing Studies. https://refusinggenai.wordpress.com/
Zhai, C., Wibowo, S., & Li, L. D. (2024). The effects of over-reliance on AI dialogue systems on students' cognitive abilities: A systematic review. Smart Learning Environments, 11(1), 28. https://doi.org/10.1186/s40561-024-00316-7