Prof Greene

ChatGPT and the Possibilities of Generative Identity

In the most recent episode of his podcast, Ezra Klein interviews Kelsey Piper, a senior writer at Vox with deep connections to the AI arms race currently taking place in Silicon Valley. The episode hits many of the key tropes of ChatGPT discourse circulating across other media (white-collar job automation, changes to our writing practices, policy responses, etc.). But I think the most interesting segment of this podcast was the conversation surrounding ChatGPT and its similarities to human cognition and expression.


In response to the idea that generative AI is simply an “advanced autocomplete,” Piper states:


So it sort of calls into question what intelligence is, right? Like, what is going on in our brains when we think? I think there is a sense in which it’s just a glorified autocorrect, but I think that’s also like a sense in which most of the time, we are also just a glorified autocorrect. Like, what humans do is we read lots of things, we talk to lots of people, and then we put it together into our own sort of framing that articulates something that we wanted to articulate. And it turns out that most of that, you can just feed something the entire internet and then make it really, really good at pattern recognition and then prompt it, and it does that.

For me, the key phrase here is that humans use language as a way to “articulate something that we wanted to articulate,” with wanting being the operative word in that statement. I think this gets at the core distinction between the kind of writing that ChatGPT is capable of generating and the kind of writing that humans produce on a daily basis.


Writing is both an effort to construct a motive and evidence of a prior one, a motive that extends beyond the compositional task at hand (writing an op-ed, posting a comment on Twitter, etc.). I’m thinking of motive here in a Burkean sense, as an act of writerly identification with/against a particular set of values, perspectives, and opinions associated with a given symbolic order. Because of this, writing is not only goal-directed, or rhetorical, in that it uses symbolic means to initiate change; it is also constitutive, meaning that the compositional process (“articulating something that we wanted to articulate”) affords an identity-construction mechanism through which we can better see/understand ourselves and express/share that identity with others.


Rhet/comp scholars in the audience might recognize elements of expressivism in this argument. Expressivist pedagogies emerged within the wider “process” movement in composition studies, but with a more specific focus on teaching writing as a self-directed, personal activity through which writers create, manipulate, and express their own sense of identity. Expressivism is still present in various ways in composition pedagogy (freewriting, response essays, etc.) but seems to have been eclipsed by a growing emphasis on teaching academic writing through particular genres and disciplines (Writing across the Curriculum, professional/technical writing, etc.).


So what does this have to do with ChatGPT and generative AI? Well, I think the main takeaway from the public conversations I have seen so far is that generative AI is really, really (really) good at producing texts that have clearly defined rhetorical goals (i.e., motives) and a large sample of established genre models to draw on. Ironically, something like the personal statement, a common genre taught in professional writing courses, would be an ideal candidate for a generative AI system, considering that the goal is clear (present a professional narrative of my accomplishments) and the genre conventions are well established. Imagine a generative AI system (which may already exist) that can produce personal statements for different audiences and job positions based on your CV. As someone who had to craft over thirty different cover letters on the academic job market, I find such a program highly appealing.
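Such a system is not hard to imagine in code. Here is a minimal sketch, assuming the OpenAI Python client (openai>=1.0); the model name, file names, and job descriptions are placeholders of my own, not a reference to any existing product:

```python
# Hypothetical sketch (mine, not an existing product): tailor a personal
# statement to different job ads from a single CV. Assumes the OpenAI
# Python client (openai>=1.0) and an OPENAI_API_KEY in the environment;
# the model name, file names, and job descriptions are placeholders.
from openai import OpenAI

client = OpenAI()

with open("cv.txt") as f:
    cv_text = f.read()

jobs = {
    "teaching_college": "Assistant Professor of Writing at a teaching-focused college",
    "community_college": "Full-time English instructor at a community college",
}

for label, description in jobs.items():
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {"role": "system",
             "content": "You write concise, professional personal statements."},
            {"role": "user",
             "content": (f"Using this CV:\n{cv_text}\n\n"
                         f"Write a one-page personal statement tailored to: {description}.")},
        ],
    )
    # Save one tailored statement per job description.
    with open(f"statement_{label}.txt", "w") as out:
        out.write(response.choices[0].message.content)
```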


However, I think writers would be missing out on an important identity-building moment in this scenario. Writing is not only a way of representing ourselves through symbolic means, our professional identities in this case, but also a way of participating in the very construction of those identities.


But maybe I’m overreacting. So to back up a bit from my humanist manifesto here, this is not to say that “identity building” cannot also be an integral component of interacting with generative AI systems. A recent article I came across on the work of “prompt engineering” demonstrates how writers can still participate in the construction of AI-generated texts in meaningful ways.


Prompt engineers (a somewhat misleading title, considering that no one knows what’s actually happening behind the scenes) are tasked with getting AI systems to produce the results a query is after. For example, if you were trying to create a cover letter for a teaching job that emphasized your past experience as a camp counselor, a prompt engineer would know how to give the AI the precise set of instructions needed to produce the most relevant result, with the desired tone, length, focus, examples from your resume, etc. From a rhetorical perspective, prompt engineering sounds like a great strategy for helping writers to “articulate something they wanted to articulate” by collapsing the usual boundaries between the rhetorical goals of the text (the prompt) and the production of the text itself (the generated output).
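To make that concrete, here is a rough illustration of the difference an engineered prompt makes. The prompts are my own invention, not examples from the article; either string could be sent to a chat model like the one sketched above.

```python
# Illustrative only: the same request stated vaguely, then with the
# rhetorical constraints (tone, length, focus, source material) spelled out.

vague_prompt = "Write me a cover letter for a teaching job."

engineered_prompt = """\
Write a cover letter for a middle-school teaching position.

Constraints:
- Tone: warm but professional; no buzzwords.
- Length: three paragraphs, under 350 words.
- Focus: foreground my two summers as a camp counselor (resume item below)
  as evidence of classroom-management and mentoring experience.
- Close by inviting a follow-up conversation.

Resume item: "Camp counselor, Pine Ridge Summer Camp, 2021-2022:
led daily activities for groups of 15 campers aged 10-13."
"""

# The engineered prompt folds the rhetorical goals of the text into the
# text that generates it, which is the collapse described above.
```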


So maybe the silver lining of generative AI, at least for those of us who teach writing, is that it can encourage writers to become more critical readers of the texts that they produce (or co-produce) and that circulate as their representatives. Counterintuitively, injecting greater speed and manipulability into the composing process might just lead to slower, more reflective engagement with the end result.

