Pure Techno-optimism, or What the Hell are We Even Doing?
- Prof Greene
- May 15
- 5 min read
What is the role of generative AI in the future of education? This seems to be the central question facing every teacher, administrator, and researcher at the moment. And I know you’re sick of reading about AI. I am too (even if I’m the person who keeps writing blog posts about it). But I think it’s a question that we need to keep asking, especially considering that we are entering a phase of the AI hype cycle that we might call Pure Techno-optimism.
Pure Techno-optimism occurs when our overly enthusiastic embrace of a new technology threatens to erode the very reason for its existence. When this happens, the only justification for using that technology (in this case, generative AI as an education tool) is the continued perpetuation of the technology itself.
We can already see this somewhat in advice to “try out” AI or “experiment” with its use in various teaching and learning contexts. And not to neglect the plank in my own eye, I’ll admit that I’ve even been one of those voices, especially considering that I see direct engagement with new technologies as an effective way to cultivate critique, analysis, and reflection. But when done poorly, we are left with a pursuit of the technology for its own sake, utterly devoid of a value system beyond the usual suspects (change, innovation, efficiency).
Enter Socrat.ai, just one small example in the latest impulse to integrate AI into every facet of educational life. And I don’t mean to pick on them specifically; they just happened to be the company I stumbled upon while writing up this post.

Socrat.ai claims to be an “effective” and “responsible” way for students to use AI in their learning process. Teachers can create “custom AI chatbots,” including things like a “debate-a-bot” feature to enhance argumentation skills or bots that converse in the voice of characters from popular literary texts. Oh boy! Can’t wait to prompt engineer my way into a conversation with Gregor Samsa!
All right, all right, I’ll show my cards here. I’m actually not this anti-AI. I’ve been fascinated by this technology since it first came out. And how could I not? I spent almost a decade in grad school studying the ways in which language is not a stable representation of thought but rather a malleable, external entity that mediates meaning through a constant swirl of relational differentiation. And then computer scientists developed a technology that can actually do this (or at least pretend to do it). I think it’s incredible, and like many others have pointed out, I agree it’s not going anywhere and scholars in writing studies will need to grapple with its evolving role in the writing process.
But that’s not what comes to mind when I see tools like Socrat.ai. The moment I saw this, my first thought was: “Who wants this for their classroom, their school, and most of all, for their own kids?” The answer is nobody. Nobody wants this.
Our son is about to turn 4 years old, so we’re in the midst of considering potential elementary schools. And you know the kind of things that we’re looking for? The same things that every person working in tech, or probably peddling this kind of software, wants for their kids: hands-on, experiential classrooms that prioritize authentic engagement in a project-based learning environment. You know what we’re not looking for? Schools that promise to plunk our kids down in front of an iPad where they can interact with a literary simulacrum. (Ok, maybe if they rename it Baudrillard.ai I would be in).
Granted, early elementary school may not be the target market for this kind of software. I could actually envision interesting classroom activities utilizing AI tools like this that encourage students to reflect on the role of AI as a textual entity, how it constructs responses, how it aligns/misaligns with an original text, etc.
But based on the marketing rhetoric for these technologies, that’s not what they are looking for. That would be treating their software as a site of critique, treating it as a text requiring interpretation and, if necessary, rejection. Rather, the point seems to be, as it often is with AI ed-tech, to simply optimize (at best) or completely circumvent (more likely) the very modes of learning that they claim to be supporting.
And what are these modes of learning? Let’s take the main function of these “chatbot” style learning tools like Socrat.ai. What do they do? They are designed to offer students a way of engaging with an author’s ideas through a conversational interaction. Sounds an awful lot like a good ol’ fashioned classroom discussion. Or (I finally get it!), a Socratic dialogue! How clever…
I don’t mean to be too cheeky (ok, maybe I do a little bit). Facilitating authentic student engagement through classroom discussion is extremely difficult. But it seems like rather than trying to improve this existing pedagogical tool through (oh I don’t know, just spitballing here) things like funding, smaller class sizes, and better professionalization opportunities for teachers, many of these AI education tools threaten to replace them entirely.
And that, to me, is precisely the problem. In rhetorical terms, I actually think these tech companies have identified the right exigence: the modern education system often does a bad job creating authentic and engaging learning environments, especially in relation to humanities subjects. They’ve just chosen the wrong solution for trying to fix it.
I’m sure the fine people at Socrat.ai will tell you this is not the purpose of the tool. It is a “collaborator,” a “support,” an “enhancement” of inquiry-based learning practices like Socratic dialogue. I doubt they pitch this tool as a way to get kids to stare at a computer all class. And look, I get it. As someone who loves to hold class discussions, I know it can be really hard. When you think you’ve asked a great question only to be met with a roomful of blank stares and 75 minutes left in a 90-minute class, you would kill for a “supportive collaborator” to come in and save the day.
But that doesn’t mean we have to look for a techno-savior. The problem with Pure Techno-optimism is that it normalizes an affect of abandonment: abandonment of pedagogical protocols, tools, strategies, and practices that are seen as broken because they have failed to live up to their own aspirations. Kids don’t want to read a whole book? How about a chatbot interaction. Struggling to develop an outline for a paper? Here’s three AI-generated versions.
Which brings me to my final point: what the hell are we even doing? Have we completely forgotten that the point of both of these activities is not for the student to become an expert on the subject matter or to produce an award-winning essay? The very point is to struggle! And sorry to all AI ed-techers out there with the latest and greatest custom student chatbots, but I honestly cannot see how AI plays a role in any of this without posing the very serious risk of circumventing the messy, mistake-riddled path that learning must take in order to count as “effective” and “responsible.”