
Challenges with AI Detection Products and their Use

Plagiarism detection began with professors manually reading every assignment, looking for passages where a student had copied material without citing its source. The next step was a digital solution: detection software that looks for matching content between a database and what the student turns in. This raised serious ethical issues, such as student privacy and students' rights to their own intellectual property, issues that academia has not yet adequately addressed (Leong & Zhang, 2025).

AI Detection Limitations

Today, artificial intelligence tools are trained to detect copying, paraphrasing, translated text, and AI-generated content, which makes the situation even more problematic. AI detection can lead to a variety of undesirable outcomes.

False positives

False positives can arise in several scenarios. In one, a student's work uses terms and other jargon introduced by the professor; because many students repeat those same terms, AI detectors may flag the work as plagiarism. Another source of false positives is multiple students submitting work based on similar research, writing, or problem-solving; when the AI detector compares newly submitted assignments to older ones, it may classify the new ones as AI-generated.

Bias

A person familiar with technology might assume that AI bias comes only from the data an AI tool is trained on, which is certainly a well-established problem. But the more consequential problem in education has to do with how large language models (LLMs) actually work. AI detection algorithms may inadvertently exhibit bias because they rely on language datasets that reflect a single language pattern and academic style (Leong & Zhang, 2025). They cannot recognize other linguistic patterns or other ways of expressing ideas, which can disadvantage students who do not write in standard academic English.

AI Evasion Techniques

Perkins et al. (2024) examined six widely used generative AI text detectors to assess their sensitivity to simple adversarial tricks for evading detection. Straightforward manipulations—such as deliberately introducing spelling mistakes, varying sentence lengths (a technique known as “burstiness”), or increasing syntactic complexity—substantially reduced detection rates.
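The "burstiness" idea is straightforward to quantify: it is essentially the variation in sentence length across a passage, which some detectors treat as a signal of human writing. As a rough illustration only (this is a simplified sketch, not the algorithm used by any commercial detector), one could measure it as the variance of sentence word counts using Python's standard library:

```python
import statistics

def burstiness(text: str) -> float:
    """Return the population variance of sentence lengths (in words).

    Low variance means uniformly sized sentences, a pattern some
    detectors associate with AI-generated text; high variance
    ("bursty" writing) is often read as more human-like.
    """
    # Naive sentence split: treat '.', '!', and '?' as sentence ends.
    normalized = text.replace("!", ".").replace("?", ".")
    sentences = [s.strip() for s in normalized.split(".") if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    return statistics.pvariance(lengths)

# Uniform sentence lengths yield zero variance.
uniform = "The cat sat here. The dog ran there. The bird flew away."
# Mixing very short and very long sentences raises the score.
varied = ("Stop. The quick brown fox jumped over the extraordinarily "
          "lazy dog while everyone watched. Wow.")
```

This illustrates why the manipulation Perkins et al. (2024) describe works: deliberately alternating short and long sentences inflates exactly the kind of simple statistic a detector might rely on, without changing who wrote the text.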

Recommendations for AI Detection

AI detection tools differ significantly in their design and capabilities. Leading tools typically rely on either Natural Language Processing (NLP), which analyzes syntax, semantics, and contextual meaning, or Machine Learning (ML), which detects patterns across large datasets of human- and AI-generated content and continuously improves through retraining. Beyond technical function, data handling practices also vary, raising concerns about data privacy and institutional control. For example, Microsoft’s Copilot integrates with campus security systems, offering more institutional oversight compared to web-based solutions.

Emerging scholarly consensus supports a multi-layered approach to academic integrity: combining traditional plagiarism detection with AI-specific tools. However, automated tools should not replace human judgment. Faculty must interpret AI-generated reports carefully and avoid making accusations based solely on algorithmic flags. Overreliance on detection software can undermine student trust and discourage engagement. Human oversight remains essential to fair and pedagogically sound academic practice.

AI detection tools must be evaluated within the broader context of teaching, learning, and institutional responsibility. Institutions must resist the urge to implement top-down mandates without student input. A holistic, participatory approach increases both the legitimacy and effectiveness of academic integrity efforts, and student-centered policy and pedagogy are essential.

In short, AI detection is not only unreliable but may erode trust between faculty and students even more than traditional plagiarism detection tools. If faculty or departments decide to use AI detection tools, they must be integrated into teaching that is dialogic, ethical, and attentive to student experience. Our professional responsibility is not just to detect misconduct, but to guide students in using AI wisely and well.

License

Icon for the Creative Commons Attribution-NonCommercial 4.0 International License

AI in Action: A SUNY FACT2 Guide to Optimizing AI in Higher Education Copyright © 2025 by SUNY FACT2 Task Group on AI in Action; Kati Ahern; Nicola Marae Allain; Abigail Bechtel; Angie Chung; Billie Franchini; Meghanne Freivald; Ken Fujiuchi; Dana Gavin; Jack Harris; Keith Landa; Alla Myzelev; Victoria Pilato; Ahmad Pratama; Russell V. Rittenhouse; Carrie Solomon; Angela C. Thering; and Shyam Sharma is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License, except where otherwise noted.