
Years ago, as a fledgling assistant professor at the University of Houston, I received a book review from a doctoral student that stunned me with its style and insight—more impressive than any I had encountered during my graduate studies at Yale, where my peers included future Pulitzer and Bancroft Prize winners.

I rushed to the library—remember, this was pre-internet—to check if it was plagiarized, but it wasn’t. That moment marked my first glimpse into the work of a remarkable writer, someone who is able to reach not only a scholarly audience but a broader public as well.

This incident came to mind when I ran the opening paragraphs of one of my recent “Higher Ed Gamma” posts through an AI detector. In that post, I described an undergraduate musical theater concert—a performance witnessed only by me and about 50 parents and classmates and never formally reviewed. Yet, the detector labeled the text as 100 percent AI-generated, leaving me both shocked and bewildered.

Yes, AI can contribute to plagiarism, but before jumping to conclusions, I try to remember my own experience. Academia is built on trust, and, as Ronald Reagan advised, we must trust but verify. Above all, we must approach these issues with humility.

A Crisis of Trust

The rise of AI in academic writing has sparked a profound crisis of trust within the academy. Faculty members now find themselves questioning whether a student’s work is genuinely their own or the product of a large language model. This places educators and learners in an adversarial position, which is good for no one.

This erosion of trust stifles the collaborative, exploratory spirit that higher education is meant to nurture.

Academia relies on a foundation of trust—trust that students will produce original work and that faculty will guide and assess it fairly. When AI-generated content becomes increasingly sophisticated and accessible, it blurs the line between authentic student effort and machine assistance. This uncertainty has fostered an atmosphere of suspicion rather than support.

Students, feeling constantly scrutinized, become defensive or disengaged, turning the educational process into a game of cat and mouse rather than a genuine learning experience.

This adversarial dynamic harms both parties. Faculty members consciously or unconsciously begin to see their job as policing academic integrity rather than enriching the learning environment. Meanwhile, students experience heightened anxiety, reduced creativity and a reluctance to experiment with their writing—fearing that any assistance might be misconstrued as cheating.

The result is an atmosphere where skepticism and caution replace trust and openness, eroding the very essence of scholarly inquiry.

The crisis of trust provoked by AI is not just about the authenticity of written work—it reflects broader concerns about accountability, fairness and the evolving nature of knowledge in a digital age. To address this challenge, institutions must invest in robust, transparent policies and foster an environment where both students and faculty can embrace technological advances without compromising the integrity of the academic process.

The essential problem is that faculty-student interactions are scant. I, for one, know little about my students’ ideas, perspectives or ability to express themselves in writing. With only a few papers to go on, and brief written comments as my main form of response, it is difficult to build an atmosphere of trust or provide truly constructive guidance.

Only by rebuilding trust can academia ensure that it remains a space for genuine learning, creativity and collaboration in an era defined by both unprecedented innovation and deep uncertainty.

My Life as a Bot

Distrust not only strains faculty-student relationships; it also casts doubt on the authenticity and integrity of scholarly writing.

I was recently surprised—and more than a little dismayed—to discover that several of my “Higher Ed Gamma” blog posts had been flagged by an AI detection tool as likely machine generated.

Over the past decade, I’ve written more than 800 pieces for Course Strat, the vast majority well before the launch of ChatGPT in late November 2022. To my eye, the tone, style and approach of these essays have remained remarkably consistent over time.

Yet according to ZeroGPT, one AI-detection platform, 27.5 percent of a January 2019 piece titled “The Sociology of Today’s Classroom” was deemed likely to contain AI-generated text. Even more baffling, a post on “The Death of Comedy,” published eight days before ChatGPT’s release, scored 30.58 percent.

I take great pride in the originality of my ideas, which is why any suggestion that questions my authorship feels deeply unsettling. In an era when authorship is increasingly suspect and trust is easily shaken, it’s difficult to know how best to respond—or how to reassure readers and editors of the integrity of one’s work.

As a longtime academic and public writer, I take great care in crafting my essays and books. My goal has always been to read widely, engage deeply with emerging scholarship and bring meaningful insights to a broader audience. I try to synthesize ideas, analyze trends and place new developments in historical context.

To have that kind of work flagged as machine-generated triggers a mixture of frustration and irony.

We have entered a strange moment when fluency itself raises suspicion. The clearer, more structured and more coherent the writing, the more likely it is to be labeled artificial. The very qualities we’ve long celebrated as signs of strong writing—clarity, concision, logic, polish—are now treated as liabilities by algorithms trained to detect what “looks” like AI.

In an unexpected twist, the more clearly one writes, the more likely one is to be doubted.

Let me offer a few unnerving examples.

According to ZeroGPT’s AI detector, George Orwell’s classic 1936 essay “Shooting an Elephant” is mostly AI-generated—earning a suspicious 53.97 percent score. Apparently, Orwell was ahead of his time, cranking out reflections on colonial guilt with the help of a large language model in pre–World War II Burma.

But even that pales in comparison to Abraham Lincoln, whose Gettysburg Address registers as 100 percent AI-generated. Who knew ChatGPT had time-traveled to 1863 to help draft one of the most iconic speeches in American history?

To test the tool further, I asked ChatGPT to describe what AI detectors are. Its fully machine-generated answer scored 24.67 percent machine generated—which is somehow less robotic than Graham Allison’s highly influential 2014 essay “The Thucydides Trap,” which clocked in at 26.31 percent. Or take the opening paragraphs of Richard Hofstadter’s elegant “The Paranoid Style in American Politics”—they scored 49.14 percent.

In case that irony isn’t rich enough, I then asked ChatGPT to respond to the prompt “Be wary before accusing someone of using AI.” Its reply—entirely written by AI—warned of the unreliability of detectors, the dangers of false accusations and the blurry boundary between appropriate and inappropriate AI use. That response? Just 17.86 percent AI-generated, according to ZeroGPT.

Which raises a key question: What exactly are these detectors measuring?

AI detectors flag text based on patterns—not provenance. They look for surface features often associated with machine-generated prose: highly formal, grammatically polished language, repetitive sentence structures, abstract or academic phrasing and the use of passive voice.

They also penalize content that resembles common online topics—especially those frequently used to train large language models.
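To make the point concrete, here is a toy sketch of the kind of shallow stylistic measurement described above. It is entirely illustrative: the features, thresholds and scoring are my own invention, not the method of ZeroGPT or any real detector, which typically rely on model-based measures such as perplexity. The point is that such features describe how prose looks, not who wrote it.

```python
import re
import statistics

def surface_features(text):
    """Compute shallow stylistic features of a passage.

    Toy illustration only: real detectors use more sophisticated,
    model-based signals, but the principle is the same -- they
    measure style, not authorship.
    """
    sentences = [s for s in re.split(r"[.!?]+\s*", text) if s]
    lengths = [len(s.split()) for s in sentences]
    # "Burstiness": variation in sentence length. Very uniform
    # sentence lengths (low variation) are often read as machine-like.
    burstiness = statistics.pstdev(lengths) if len(lengths) > 1 else 0.0
    words = re.findall(r"[A-Za-z']+", text.lower())
    # Lexical variety: a low ratio of unique words suggests
    # repetitive, formulaic phrasing.
    variety = len(set(words)) / len(words) if words else 0.0
    return {
        "mean_sentence_len": statistics.mean(lengths) if lengths else 0.0,
        "burstiness": burstiness,
        "lexical_variety": round(variety, 3),
    }

sample = ("The report is clear. The findings are sound. "
          "The method is robust. The results are strong.")
print(surface_features(sample))
```

Run on the deliberately uniform sample above, the sketch reports zero burstiness: exactly the kind of polished regularity that gets flagged, whether it came from a chatbot or a careful human editor.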

But these markers don’t prove anything about authorship. They merely suggest that a piece of writing looks like something an AI text generator might have written. That’s not detection—it’s guesswork dressed up as statistical certainty.

In fact, these tools often end up punishing precisely the kind of writing we should value: clear, structured, thoughtful prose.

And to bypass them? That’s easy. For a fee, ZeroGPT promises to “humanize” your writing and render it undetectable.

How does it do that? Apparently, by intentionally making the writing less polished—breaking up sentence flow, inserting minor errors, swapping out sophisticated vocabulary for simpler words and replacing formal transitions like “moreover” with more casual phrasing. In other words, you can pay to sound less like a good writer.

The result is perverse. We now live in a world where Abraham Lincoln sounds too much like a robot and ChatGPT sounds like a human when it defines a concept or develops an idea.

When AI detectors wrongly flag classic essays, student work or carefully edited prose as suspicious, they don’t just make mistakes—they chip away at trust. In classrooms, teachers begin to doubt their best writers. In workplaces, clarity is confused with dishonesty. In public discourse, precision becomes a liability.

We risk punishing exactly the kind of communication we should be encouraging.

And here’s the kicker: Most AI detectors are trained on samples of writing, some AI-generated and some not. They aren’t detecting academic dishonesty. They’re just scanning for patterns. They don’t know whether a paragraph came from a chatbot, a college sophomore or a Pulitzer Prize winner. It’s like judging whether a meal was home-cooked based on how evenly the vegetables are chopped.

That’s not literacy. It’s algorithmic paranoia.

Is it inherently wrong to use AI to polish prose—for clarity, concision, coherence or flow? I don’t think so. You might disagree, but to me, there’s nothing unethical about using a tool to make your writing sharper and more readable. The same goes for using AI to refine your ideas or tighten your logic.

That’s not cheating—that’s editing.

What is wrong is misrepresenting authorship—passing off AI-generated content or ideas as entirely your own. The line isn’t about whether you use AI; it’s about honesty, transparency and ownership.

Steven Mintz is professor of history at the University of Texas at Austin and recipient of the AAC&U’s 2025 President’s Award for Outstanding Contributions to Liberal Education.
