Caught in the Machine: How AI Is Upending the Classroom and What It Means for the Caribbean

When Ailsa Ostovitz sat down to write a personal essay about the music she loves, she wasn’t thinking about artificial intelligence. She was thinking about the songs. But when her Maryland high school teacher ran the assignment through an AI detection program, the software flagged it as possibly machine-generated and docked her grade. Ostovitz had never used AI on the assignment. “I write about music. I love music,” she told NPR. “Why would I use AI to write something that I like talking about?”

Her experience is not an outlier. It is, increasingly, the norm. Across the United States, the United Kingdom, and universities worldwide, the arrival of AI writing tools like ChatGPT has triggered an institutional panic, and a deeply flawed technological response. And while the headlines have largely focused on American and British campuses, the same forces are reshaping classrooms in the Caribbean, including here in Belize, where students preparing CSEC and CAPE School-Based Assessments are now navigating a landscape of rules, detectors, and consequences that didn’t exist two years ago.

The Cheating Is Real…And Growing Fast

Let’s start with what is undeniably happening: a significant number of students are using AI to cheat, and the numbers are climbing sharply.

A sweeping investigation by The Guardian, drawing on data from 131 UK universities, found nearly 7,000 proven cases of AI-related academic misconduct in the 2023–24 academic year, equivalent to 5.1 cases per 1,000 students, up from 1.6 per 1,000 the year before. A survey by the Higher Education Policy Institute found that 88% of students are now using generative AI for assessments, up from 53% the previous year. Researchers at the University of Reading went further, testing their own systems and finding that AI-generated work slipped through undetected 94% of the time.

Traditional plagiarism, by contrast, is in freefall. Before widespread AI availability, it accounted for nearly two-thirds of all academic misconduct. Now it is being displaced by something harder to detect and harder to define. The cheating hasn’t disappeared. It has transformed.

Casey Cuny, an English teacher in California who has taught for 23 years, put it bluntly in a September 2025 Associated Press interview: “The cheating is off the charts. It’s the worst I’ve seen in my entire career.” Another professor, Stephen Cicirelli at St. Peter’s University in New Jersey, captured the absurdity of the moment in a social media post that went viral: one of his students submitted an AI-written paper, then apologized with an email that also appeared to have been written by ChatGPT.

The Detectors Don’t Work, But Schools Keep Buying Them

The institutional response has been to fight AI with AI. Turnitin, used by over 16,000 academic institutions globally, introduced an AI detection feature in 2023. Competing tools like GPTZero and Copyleaks followed. Broward County Public Schools in Florida is spending more than $550,000 on a three-year Turnitin contract. A school district in Ohio is paying GPTZero about $5,600 annually for 27 teachers.

The problem: researchers say these tools are not reliable enough to be trusted with consequential decisions about students’ futures.

“It’s now fairly well established in the academic integrity field that these tools are not fit for purpose,” says Mike Perkins, a leading researcher on academic integrity and AI at British University Vietnam, notably the same researcher whose published work the Caribbean Examinations Council itself cites in framing its own AI assessment scale. Perkins found that popular detectors including Turnitin, GPTZero and Copyleaks flagged genuine student work as AI-generated, with accuracy rates dropping further when AI text was manipulated to appear human.

Turnitin acknowledges this on its own website, stating that its AI detection “may not always be accurate… so it should not be used as the sole basis for adverse actions against a student.” GPTZero’s own CEO, Edward Tian, is equally direct: “We definitely don’t believe this is a punishment tool.”

Despite all of this, more than 40% of surveyed 6th- to 12th-grade teachers in the US used AI detection tools last school year, according to a nationally representative poll by the Center for Democracy and Technology, even as many privately acknowledged the tools’ limitations. Some major universities, including UCLA and UC San Diego, went further and deactivated AI detectors entirely in 2024–2025, citing unacceptable false positive rates.

International Students Are at Particular Risk

Among the most alarming findings, and among the most relevant for the Caribbean, is that AI detectors appear to be systematically biased against students who write in English as a second or additional language.

A group of Stanford computer scientists found that seven AI detectors flagged writing by non-native English speakers as AI-generated 61% of the time. On about 20% of those papers, the incorrect assessment was unanimous across multiple tools. Meanwhile, the detectors almost never made such mistakes when assessing native English speakers.

The reason is structural: AI detectors are programmed to flag writing as machine-generated when word choice is predictable and sentences are syntactically simple. Writing by non-native English speakers often fits this pattern. Co-author Weixin Liang put it plainly: “The design of many GPT detectors inherently discriminates against non-native authors, particularly those exhibiting restricted linguistic diversity and word choice.”
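The mechanism described above can be made concrete with a deliberately simplified sketch. This is not any vendor's actual algorithm; real detectors use statistical language models to measure how "predictable" text is, but even a crude proxy like vocabulary variety shows how plain, repetitive phrasing (common in second-language writing) can trip a threshold. The function names and the 0.5 cutoff here are illustrative assumptions, not anything from Turnitin or GPTZero.

```python
# Toy illustration of threshold-based "AI likelihood" flagging.
# Real detectors estimate predictability with language models; this
# stand-in uses type-token ratio (unique words / total words).

def lexical_diversity(text: str) -> float:
    """Ratio of unique words to total words; lower = more repetitive."""
    words = text.lower().split()
    return len(set(words)) / len(words) if words else 0.0

def naive_flag(text: str, threshold: float = 0.5) -> bool:
    """Flag text whose vocabulary variety falls below the threshold."""
    return lexical_diversity(text) < threshold

# A repetitive sentence gets flagged; a varied one does not.
print(naive_flag("i like music i like songs i like music"))        # flagged
print(naive_flag("the cat sat on the mat and the dog slept near the rug"))
```

The point of the sketch is the failure mode: nothing in it measures authorship, only surface predictability, which is exactly why straightforward prose from a careful human writer can score as "machine-like."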

For Caribbean students, this is not an abstract concern. Consider that across the CSEC and CAPE cohort, English is the medium of instruction for many students who speak Kriol, Garifuna, Spanish, or other languages at home, including a significant portion of Belize’s student population. A student who writes with straightforward sentence structure because they are translating thought from a first language could, in theory, be flagged by an automated tool in exactly the same way that a non-native speaker at an American university might be.

Taylor Hahn, a Johns Hopkins University teacher who noticed a pattern of Turnitin flagging international students’ work, called one student in for a meeting. The student immediately produced drafts, annotated PDFs, and handwritten notes: clear evidence of original work. The software had simply been wrong.

The Arms Race Escalates: Enter the ‘Humanizers’

As detectors proliferated, a counter-industry emerged almost instantly. Over 150 so-called “humanizer” tools now exist: software that scans AI-generated text and rewrites it to defeat detection algorithms. Some are free; others cost around $20 a month. Together they drew 33.9 million website visits in a single month.

Some users rely on humanizers to hide actual cheating. Others, and this is the part that should give pause, say they’ve never used AI at all, but run their own genuine work through humanizers simply to avoid being falsely accused.

Brittany Carr, a student at Liberty University, was flagged after writing a personal essay about her cancer diagnosis. “How could AI make any of that up?” she wrote to her professor. “I spoke about my cancer diagnosis and being depressed and my journey and you believe that is AI?” Worried that further accusations could cost her VA financial aid, she began running everything through Grammarly’s AI detector and rewriting whatever it flagged. “But it does feel like my writing isn’t giving insight into anything. I’m writing just so that I don’t flag those AI detectors.” After the semester ended, she left the university.

Back in Maryland, Ailsa Ostovitz now runs every assignment through multiple detection tools before submitting, adding about half an hour to every piece of work she writes entirely herself. Turnitin has since launched “bypasser detection” targeting humanizer alterations. Some humanizer tools have responded by simulating keystroke patterns to defeat browser-based tracking. As one student put it: “So it’s like, how far do you want to go down the rabbit hole? I’m making myself crazy.”

The Caribbean’s Different Approach

While the detection arms race spirals in North America and Britain, the Caribbean Examinations Council has been working toward something different: an approach that puts human judgment at the center and treats AI as a resource rather than a threat to be defeated.

In a video address released this month titled “Who You Choose to Be,” CXC Director of Operations Dr. Nicole Manning spoke directly to students and teachers across the region. Her message on AI detection tools was unambiguous: “AI checkers are one input. They are not the verdict. There will be human interventions right through the process to ensure fairness.”

The context behind that statement matters. A December 2024 CXC study found that roughly 70% of Caribbean nations lack official AI policies or structures, even as the tools have become ubiquitous in classrooms. That gap prompted CXC to develop a comprehensive Standards and Guidelines framework, effective for the 2026 May-June examinations, the same sitting that students across Belize and the region are currently preparing for.

Under the framework, AI is permitted in SBAs, but with clear boundaries. Students may use it to brainstorm ideas, understand concepts, explain difficult terms, or generate structural suggestions. They may not submit work generated wholly or significantly by AI. Where AI has been used in any capacity, students are required to submit a Disclosure Form and an Originality Report. Where it has not been used, no such documentation is needed. The acceptable AI similarity threshold is set at 20%, with teachers required to provide rationale for submissions that exceed it.
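The disclosure rules described above amount to a simple decision procedure. The sketch below encodes them for clarity only; the function and field names are my own, not CXC's, and the 20% figure is taken directly from the framework as reported here.

```python
# Illustrative encoding of the CXC SBA documentation rules described
# in the article. Names are hypothetical; the 20% threshold is CXC's.

def sba_requirements(used_ai: bool, ai_similarity_pct: float) -> list[str]:
    """Return the documentation a submission would need under the rules."""
    required = []
    if used_ai:
        # Any use of AI triggers both disclosure documents.
        required += ["Disclosure Form", "Originality Report"]
    if ai_similarity_pct > 20:
        # Above the acceptable threshold, the teacher must justify it.
        required.append("Teacher rationale for exceeding 20% threshold")
    return required

print(sba_requirements(False, 0.0))   # no AI use: no paperwork
print(sba_requirements(True, 25.0))   # AI use above threshold: all three
```

Note what the procedure does not do: it never auto-rejects a submission. Consistent with Dr. Manning's framing, the output is a list of inputs for human review, not a verdict.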

CXC’s framework draws explicitly on the AI Assessment Scale developed by researcher Mike Perkins—the same academic who has publicly argued that commercial AI detectors are “not fit for purpose.” The Council appears to have absorbed that lesson. Rather than outsourcing judgment to software, it has built a system centered on teacher knowledge of their students over time. “The teacher-student relationship built over months of observation, drafts, conversations, and guidance remains central to how SBAs are moderated and assessed,” Dr. Manning stated.

CXC Registrar and CEO Wayne Wesley has been equally direct about the pedagogical implications: “You have to engage students in more one-on-one conversations to appreciate whether the work they are presenting is truly their own. It also requires us to re-think how assessment is done from a summative and formative standpoint.”

The University Level: UWI Moves Toward Systemic Change

At the tertiary level, The University of the West Indies, whose Open Campus serves students across Belize and the broader Caribbean, has been grappling with the same pressures.

In late April 2026, UWI entered a partnership with the University of the West of Scotland to participate in the IntegraGuard Project, a platform designed to reimagine academic integrity systems as “fair, data-driven, transparent and future-ready.” The collaboration positions UWI alongside a select group of international universities working to build integrity frameworks that integrate AI-assisted detection with human investigation workflows, not replace the latter with the former.

UWI has also recently completed its own Artificial Intelligence Policy Framework and established a dedicated AI Institute at its St. Augustine Campus. The institute is specifically designed to address the Caribbean’s unique development challenges through AI, a recognition that the region cannot simply import policy frameworks designed for contexts with different languages, histories, and educational traditions.

In the words of CXC’s Director of Technological Innovation Rodney Payne, reflecting on the broader policy push: “For us to benefit as a region, we need harmonious development, utilising the technologies across the board. It’s not going to help us if one state moves ahead quickly and the others are struggling to follow.”

No Easy Fixes

Most experts agree that punishing students based on unreliable algorithmic verdicts is not the answer. Carrie Cofer, a high school English teacher in Cleveland, tested GPTZero by uploading a chapter of her own PhD dissertation and it came back 89 to 91 percent AI-written. “I don’t think it’s an efficacious use of their money,” she said of AI detection spending. “The kids are going to get around it one way or the other.”

Erin Ramirez, an associate professor at California State University Monterey Bay, offered a summary that carries particular weight in a Caribbean context: “Students now are trying to prove that they’re human, even though they might have never touched AI ever.” For students already navigating the linguistic and economic pressures many Caribbean learners face, the additional burden of proving their own humanity to a machine should be a concern for everyone in the education system.

The University of Pittsburgh has already moved away from AI detection entirely, concluding that false positives “carry the risk of loss of student trust, confidence and motivation, bad publicity, and potential legal sanctions.” Most institutions, globally, haven’t followed — yet.

CXC’s Dr. Manning offered the clearest articulation of the alternative. “Integrity is not about whether a machine can detect what you did. It is about who you choose to be.”
