I’m relieved to not have any teaching obligations this term. I’ve been doing weekly homework problems/quizzes using the university standard Canvas tool, and I’ve always been pretty liberal with that: if students want to work together on the problems, that’s all to the good. Communicating and helping each other is useful for learning.
But I’m getting all these emails now about a new feature: AI. There’s a box on the screen to invoke Google Lens and Homework Helper, so I could put all the effort into composing a problem set, and the students could solve it by pushing a button. The university has been installing something called Honorlock to disable AI access in problem sets, but it seems to work inconsistently.
I’m not alone in resenting all these shortcuts that are being placed in our teaching.
It’s a sentiment that pervades listservs, Reddit forums and other places where classroom professionals vent their frustrations. “I’m not some sort of sorcerer, I cannot magically force my students to put the effort in,” complains one Reddit user in the r/professor subreddit. “Not when the crack-cocaine of LLMs is just right next to them on the table.” And for the most part, professors are on their own; most institutions have not established blanket policies about AI use, which means that teachers create and enforce their own. Becca Andrews, a writer who teaches journalism at Western Kentucky State University, had “a wake-up call” when she had to fail a student who used an LLM to write a significant amount of a final project. She’s since reworked classes to include more in-person writing and workshopping, and notes that her students — most of whom have jobs — seem grateful to have that time to complete assignments. Andrews also talks to her students about AI’s drawbacks, like its documented impact on critical-thinking faculties: “I tell them that their brains are still cooking, so it’s doubly important to think of their minds as a muscle and work on developing it.”
Last spring’s bleakest read on the landscape was New York Magazine’s article, “Everyone Is Cheating Their Way Through College,” which included a number of deeply unsettling revelations from reporter James D. Walsh — not just about how widespread AI dependence has already become, but about the speed with which it is changing what education means on an empirical level. (One example Walsh cites: a professor who “caught students in her Ethics and Technology class using AI to respond to the prompt ‘Briefly introduce yourself and say what you’re hoping to get out of this class.’”) The piece is bookended with the story of a Columbia student who invented a tool that allowed engineers to cheat on coding interviews, who recorded himself using the tool in interviews with companies, and was subsequently put on academic leave. During that time, he invented another app that makes it easy to cheat on everything. He raised $5.3 million in venture capital.
I’m left wondering: who is asking for these widgets to be installed in our classes? Are there salespeople for software like Canvas who enthusiastically pitch these cheating features to university administrators who think more AI slop benefits learning? Why, if I’m trying to teach genetics, do I have to wrestle with garbage shortcuts, imposed on me by the university, that short-circuit learning?
Several years ago, I was happy to embrace these new tools, and found it freeing to do exams and homework online — it meant four lecture hours in the semester that weren’t dedicated to proctoring students hunched over exams. No more. When I get back into a classroom in the spring, I’m going to be resurrecting blue books.
Oh, and since I was wondering who kept shoveling this counterproductive crap into my classes, I’ve got one answer.
It’s not coincidental that the biggest booster of LLMs as a blanket good is a man who, like many a Silicon Valley wunderkind who preceded him, dropped out of college, invented an app and hopped aboard the venture-capital train. Sam Altman has been particularly vocal in encouraging students to adopt AI tools and prioritize “the meta ability to learn” over sustained study of any one subject. If that sounds like a line of bull, that’s because it is. And it’s galling that the opinion of someone who dropped out of college — because why would you keep learning when there’s money to be made and businesses to found? — is constantly sought out for comment on what tools students should and shouldn’t be using. Altman has brushed off educators’ concerns about the drawbacks of AI use in academia and has even suggested that the definition of cheating needs to evolve.