“I think the possibilities for ChatGPT to remove rote work from the classroom and empower deep learning experiences are exciting.” –Charlotte Dungan, COO of the AI Education Project
I confess that I’ve put off writing this post for months. During those months, I’ve done a lot of reading, thinking, and handwringing. And I confess that I’d never heard of the AI Education Project before I read about it in The Hill, but it doesn’t surprise me that such a project exists. I suspect it’s heavily funded and populated with technological and educational experts eager to push new tools and strategies into our classrooms. Because this is the way of things in education. For the past few decades, we’ve jumped on every new technological bandwagon, confident that this one will be the silver bullet, the means by which we’ll raise test scores and compete with educational rivals like China, Korea, and Finland. From Promethean boards (which, all too frequently, I’ve seen used as expensive screens for projecting videos) to one-to-one computer initiatives to artificial intelligence platforms like ChatGPT, educators have been sold a pretty consistent bill of goods: The technology is out there, folks. We need to use it intelligently and responsibly.
I confess this, too: Artificial intelligence is here to stay. That barn door has been opened and won’t be closed soon, if ever. But I’d like us to carefully consider what “responsible” use of this technology actually means. I’d like us to come to a consensus about it, because right now it’s a Wild West of policies and practices out there. This leaves administrators, teachers, students, and parents in an ethical No Man’s Land. Most importantly, it should force us to confront crucial questions: What constitutes “rote work,” and should we expect it from our students? Do we truly value the skills identified in national and state standards, skills like critical reasoning, problem-solving, close reading, drawing conclusions, and using strong evidence to support our claims? Should we expect our students, and our citizenry, to think or not?
If the answer to these questions is no, then we can farm out problem-solving, critical reasoning, researching, summarizing, interpreting, writing, and creating to AI. And we can call this “intelligent, responsible use.” If the answer is yes, the problem that educators now face seems insurmountable. In a recent BuzzFeed article, “Teachers Are Revealing Gen Alpha’s ‘Missing Life Skills’ That Were Second Nature To Anyone Who Grew Up In The ’90s, And I’m Honestly Shocked,” staff writers compiled the skills teachers lament their students lack. These included organization, problem-solving, reading, and critical thinking. They also included soft skills, such as being able to ask good questions, advocate and think for oneself, persevere in challenging situations, self-regulate, and simply try. Yet the “rote work” that AI Education Project COO Charlotte Dungan insists AI should remove from classrooms includes, as Damon Beres reports, “researching a topic by seeking out multiple (human) viewpoints. Strengthening literacy skills by scouring resources for new information. Formulating conclusions by using logic to decipher the resources. Using rhetoric to create persuasive arguments to advance said conclusion.” Herein lies the problem. We apparently don’t agree on what students should be expected to do independently. And we don’t agree on what is “rote work” and what is “essential work.”
In his article “AI Has Broken High School and College” (Aug. 2025), Atlantic writer Damon Beres writes, “AI has been widely adopted by students and faculty alike, yet the technology has also turned school into a kind of free-for-all.” For the article, he interviewed his colleagues Ian Bogost and Lila Shroff, who admitted they were struck by the “normalization” of AI use. Shroff noted that in an article she’d written about K-12 education, she’d reported that three in ten teachers said they used AI weekly. In a recent Forbes article, Dr. Aviva Legatt cites a survey of college students (two-year, four-year, and graduate) that “found that 90% of students have used AI academically, with nearly three-quarters saying their usage increased over the past year.” She notes that these findings echo those of a global study by the Digital Education Council, which reports that “86% of students across 16 countries already incorporate AI into their studies, with more than half using it weekly.” A July Newsweek article reports similar findings:
Quizlet’s 2025 How America Learns report revealed that 85 percent of teachers and students (age 14-22) now use AI in some capacity, marking a substantial increase from 66 percent in 2024. Among students, 89 percent reported using AI for schoolwork, compared to just 77 percent in the previous year.
Clearly, AI use has been “normalized” by a majority of students and an increasing number of educators. I’ve watched this normalization in the high schools I visit as a student teacher supervisor. Math homework? No problem! Just give the problem to AI, and it will solve it quickly. Reading homework? No problem! Just give the text to AI, and it will return a brief, efficient summary. Writing homework? No problem! Just give the writing prompt to AI, and it will generate an essay in the blink of an eye. Research assignment? No problem! Just give the research topic to AI, and it will produce a research essay with multiple sources and accurate citations.
Except that AI search engines fail to cite sources accurately more than half the time. In a March 2025 Fortune article, “AI search engines are confidently wrong more than half the time when they cite sources, study finds,” Beatrice Nolan cites a Columbia University study:
That’s according to a study from The Tow Center for Digital Journalism at Columbia University, which tested AI products like OpenAI’s ChatGPT Search and Google’s Gemini to assess their ability to accurately cite news articles. The analysis probed eight AI systems and found that, collectively, the bots provided incorrect answers to more than 60% of queries.
Researchers also reported that “despite poor results, the AI bots answered queries with ‘alarming confidence’” and “rarely used any qualifying phrases, such as ‘it appears,’ ‘it’s possible,’ or ‘might.’” Award-winning teacher and former civil rights attorney Joseph R. Murray asks, “Will students think to double-check the AI? We already gave them a green light to coast through their classes.” He worries that giving AI to students who woefully lack content knowledge is like leaving toddlers “unattended in the candy store” (“Think our education system is bad now? Wait ’til AI takes over,” The Hill, 4/05/2025). As for his question of whether students will fact-check their AI citations and content, the answer is unequivocally no, at least for most students, high school and college alike, who will neither take the time nor possess the research and critical thinking skills to fact-check anything.
For the past few years, I’ve felt the anguish of teachers who’ve resigned themselves to the fact that almost any work assigned outside the classroom will be completed with AI. I’ve heard the legitimate questions they’re asking: Why teach writing or reading, problem-solving or reasoning, when students will just use AI? Why should I be held accountable for proficiency in the skills identified in my state standards when my students aren’t independently learning and practicing them? How do I prove that my students have done their own work, and will this even matter to parents and administrators? What is my role as a teacher now?
As a retired English teacher, I’ve tried to imagine what my career would look like now. If I still valued what I’ve always valued (critical thinking, close reading, thoughtful, well-researched writing), here’s what I’ve concluded: 1) All student writing would have to be completed during class time, on paper, as I constantly monitored student progress; 2) All important reading would have to be completed during class time with paper, not digital, texts, again as I walked around the room, continually monitoring “real vs. fake reading”; 3) Even with in-class writing and reading, every evaluation would be fraught with the challenge of determining whether the work was genuinely the student’s; 4) Consequently, my grading practices would invite constant challenges from students, parents, and administrators. In short, if I wanted to uphold the educational values to which I’d devoted my life and to teach as I’d done for 41 years, I would have to change my practices and policies, resigning myself to the reality of students constantly working to “get around” them. All of this makes me sadder than I can say. And it makes me fearful, for it doesn’t require any real prophetic skills to predict how a nation of non-thinkers will ultimately fare.
For years, I’ve read and heard that many tech creators and gurus deliberately send their own children to schools that don’t rely on technology. Such schools exist, but they are the exception, not the rule. For most parents and students, school relies heavily, sometimes exclusively, on technology. Students read, compute, write, and create on digital devices. With the normalization of AI, they can do even more. Actually, they can do it all. I’ve read of teachers who attempted to outwit AI by creating writing prompts and problems they believed were foolproof. These are valiant, well-intentioned efforts that AI will answer with “alarming confidence.” And given the amount of student work most teachers must grade, they’ll inevitably be worn down by AI-generated responses that may not directly address the writing prompt, or may not show evidence of the math procedures taught in class, but are close enough. They’ll be worn down by challenges to the authenticity of student work, by mandatory parent phone calls, and by persistent defenses of their grading practices. They’ll just be worn down, exhausted to the point of submission.
Again, I’m painfully aware that AI isn’t going anywhere. Even schools that have adopted stricter cell phone policies can’t truly meet the challenges of AI. Some may argue that these policies are a step in the right direction, and they may be right. Still, although teachers and researchers may lament students’ lack of perseverance and problem-solving skills, I’ve seen just how diligent and resourceful many students can be when working around school policies and computer firewalls. Some have shown amazing grit and ingenuity. A former colleague once joked, “I only wish these students would apply the same skills to their coursework.” Indeed, if only.
In closing, I’m making a plea for our communities and school boards to address the real challenges AI poses. For the sake of our teachers and students, and for the greater good of our future citizenry, we’re ethically responsible for carefully studying AI’s impact. And if we want to retain the good teachers we have and recruit quality ones, we’re responsible for creating clear AI policies and support, addressing the real challenges of widespread student use, and reconsidering whether teachers and schools can fairly be held accountable to standards and practices that current AI usage makes virtually impossible to uphold. Above all, we’re responsible for deciding whether we truly want our students to think, and think well, or whether we’re willing to raise the white flag and concede thinking to artificial intelligence.