AI, Writing Pedagogy, and the Offshoring of Education

In a recent article, Kennesaw State University’s Jeanne Law argues that generative artificial intelligence shouldn’t be viewed as a cheating machine, but instead greeted as a technological liberator that frees us from the painful “busywork” of writing—all the planning and researching and drafting and revising that composing requires. Law, who directs the composition program at KSU, contends that when AI “automat[es] routine cognitive tasks” such as these, students are freed to work on what she calls the “deeper processes” of writing. We should therefore see AI as “a useful tool that enhances, rather than hampers, the writing process.”

I couldn’t disagree more.1

Law is eager to establish that students use AI for “education-related” rather than education-defeating purposes. In a section titled “Helping With the Busywork,” she (rather uncritically) cites a report from the world’s foremost AI tech company to help make that case. The document confidently boasts that “college-aged young adults in the US are embracing ChatGPT, and they’re doing so to learn.” The report also discloses that students overwhelmingly use ChatGPT to produce writing. In fact, “the top five uses for students were writing-centered: starting papers and projects (49%); summarizing long texts (48%); brainstorming creative projects (45%); exploring new topics (44%); and revising writing (44%).” Law recommends that we view such use of AI positively, as data that can “challenge the assumption that students use AI merely to cheat or write entire papers.”

Although Law clearly thinks the wholesale adoption of a computer-generated essay is wrong, she also believes that students are not cheating or undermining their educations when they use an online bot to do things for them like generate ideas, read, summarize long (presumably boring) texts, be “creative,” engage in inquiry, retrieve information, and revise writing.2 Law thinks that AI is useful because it can free us from such drudgery—the “trivial tasks,” the “busywork” of the writing process. To her mind, when students “leverage” AI to perform this work, they “free up more time to engage in deeper processes and metacognitive behaviors”: the more important work of “honing,” “organizing,” and “refining” ideas and language.

I want to argue that what Law endorses here is not writing, but something else.

There is this odd sense in Law’s reasoning that students already know how to write perfectly well; but now, with AI, they are just liberated from a series of supposedly menial tasks that are beneath them. We might liken this to a renowned chef who now only plates the final result prepared by her sous-chef, adding a final garnish or subtly altering seasonings before serving the meal to her patrons and accepting their praise.3 In the same way that this is not cooking, what Law describes is not writing. The things that Law describes as “busywork” or as “trivial tasks” are not separable from the writing process—they are what writing is, and how it comes to be. The golden age of “metacognition” Law looks forward to is utterly empty and artificial, is it not? This isn’t the student thinking about their own thinking; the thinking is being performed on ideas the student didn’t have, sources they didn’t find, questions they didn’t ponder, words they didn’t choose. There’s no meta in this cognition.

Law’s language here is revealing—all that honing and refining and automating and leveraging makes writing sound like some kind of industrial process in an advanced economy, one where offshoring the dirty work of mining raw materials allows us to realize dramatic reductions in labor costs. We outsource all the mindless, manual work to some faraway place we never have to acknowledge or confront, then all of that invisible labor magically materializes in the form of imported components ready for final assembly in our domestic factory. Then: profit! However, these are not appropriate analogies for understanding the processes of writing and learning; the metaphorical framing Law uses contributes to fundamental misunderstandings about the nature and purpose of the writing process.

I think one explanation for why teachers like Law think the way they do is that they imagine that the point of writing is merely the production of some beautiful content for an audience who requests it, such as a professor or employer. To this mindset, education is a sort of economy where students manufacture educational products and receive compensation in the form of grades and credentials. If education is a kind of transaction, then it makes perfect sense to cut corners during the manufacturing process with an AI tool. But this is an utterly impoverished understanding of the purpose of education and writing. Learning isn’t like manufacturing. In learning, it’s not the product that matters, but the process.

Let me tell you something important that I have learned after 20 years of teaching college writing at some pretty decent schools: student essays suck. Like, hardcore. Even the really good ones suck. And that’s fine, because they are supposed to suck. The point is not that students publish something in the New Yorker at the end of the term, but that they learned to inquire. The finished product doesn’t really matter so much; what matters is that they looked into things, they researched, they asked questions. And not by outsourcing it. They read deeply, intently, critically—with their own unique, sovereign brains. They learned how to find things in the library that would help them with their questions and they took time to understand the importance of arcane processes like peer review. They learned how to critically evaluate source materials. They viewed the whole experience as an attempt to discover and then articulate something that they found meaningful.

During the writing process they encountered ideas they’d never thought of before: some that challenged their own thinking; some that changed their minds completely; some that fairly blew their minds. More than once the arguments and evidence they encountered so radically altered the trajectory of their thinking that they had to completely start over: they began with one idea, but ended up somewhere completely different, someplace totally unexpected. And since they wanted to get it right, they followed their thinking wherever it went, writing and revising along the way.

Sometimes, and this is the most important thing I want to say, the change of mind occurred not because they read someone else’s idea, but because they stumbled onto a new idea in the process of trying to articulate their old one. The writing process generated some idea they couldn’t have imagined beforehand, like some form of magic. As a result, the student had to burn an acre of hard-earned paragraphs they had delicately cultivated over days or weeks.4 And although that experience was painful and difficult and perhaps even a little bit scary, the writers later reflected that their project would never have become what it did without it.

At some point in this process, students began to dimly realize that the fundamentally aleatory element of research and writing I am describing here cannot be reproduced by an algorithm designed to select the most statistically likely next word drawn from a large, static database of words that have already been written. “That’s merely the appearance of thought and expression,” they reasoned, “not its substance.” They began to understand that the writing process is fundamentally mysterious, an embodied process of discovery, where it seems that the writing somehow informs the mind, rather than the other way around.

As they drafted, these writers imagined an audience—actual people whose views and values and experiences matter. People we share our world with. People we owe something to. They considered how best to approach these individuals in order to make a good case, because these students understood writing and communication as profoundly social and human activities. They also felt a duty to get things right, to honor the subject they wrote about, to speak truthfully, to contribute in some way to the store of knowledge we have about the world. The point wasn’t to primp and smooth and dress up some bullshit that fit the bill for the assignment; the students believed that what they said actually mattered—to themselves and to others. They cared.

But, above all, these students took this experience as an opportunity to discover something about themselves. In a word, they submitted to a process of becoming: they transcended themselves; they explored their humanity; they reached for something beyond their grasp; they grew. There is no substitute for that. You can’t offshore your thinking, your humanity. You just have to do the work. You have to want to do the work. And if you don’t want to do that, please visit footnote #4.


  1. I think our disagreement is rooted in the fact that Law and I have fundamentally different ideas about the nature of the writing process and the purpose of writing, particularly in an academic context. My suspicions were confirmed when I encountered another of Law’s posts about her efforts to help students use genAI effectively and ethically. The result is her “Rhetorical Prompt Engineering Method” and something she calls the “Ethical Wheel of Prompting.” As a writing teacher, I find these ideas make for exceedingly grim reading. The “Rhetorical Prompt Engineering Method” essentially shows students how to prompt an AI bot through a series of rhetorical considerations related to their writing task: purpose, audience, tone, genre, style, etc. And the “Ethical Wheel of Prompting” supposedly ensures that students use AI ethically by asking them to reflect on a series of questions as they go through the iterative process with the AI tool. Students ask themselves: “1. Did I read my own input? 2. Did I read the output? 3. Did I edit the output? 4. Did I check the output for: usefulness, relevance, accuracy, harmlessness?” That’s it. That’s ethics now. What are we doing here, people? ↩︎

  2. I personally don’t really care about students “cheating” since I don’t grade students in my writing classes—my students grade themselves. But I really care about students who choose not to think or read or inquire and therefore dehumanize themselves. ↩︎

  3. Oh hell, they went and made AI Chef Pro. ↩︎

  4. You might be saying to yourself right now: “But I don’t want to start over and do all that work and revise and think and struggle and read—that’s so boring and so long and so hard. I just don’t care.” And I say truly and sincerely in response: “Fuck you.” ↩︎