The fear was palpable at the end of 2022, with the introduction of ChatGPT supposedly foretelling the end of content creation as we know it. The education sector was especially petrified – “Would students even write anything on their own?”
But a solution was at hand, in the form of tools that could check whether your content was written by Artificial Intelligence (AI). Bells rang across the globe as professors and writers breathed a sigh of relief.
That is, until errors started cropping up with these AI detectors, raising the question in everyone’s mind – do they even work?
How do AI detectors work?
AI content detectors, or AI writing detectors, use language models similar to those used by AI generators. These models examine the submitted content and estimate whether it could have been written by an AI tool.
In this process, the models generally look at two factors, amongst many others:
Perplexity –
This is a measure of how unpredictable the text is, i.e., how surprising its word choices are to a language model. Human writing tends to have higher perplexity: more creative word choices, and more mistakes. Low perplexity typically signals AI-generated content; it reads more smoothly but relies on more predictable word choices.
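To make the idea concrete, here is a minimal sketch of how a perplexity score could be computed, using the open-source GPT-2 model through the Hugging Face transformers library. The model choice and the interpretation (lower score means more predictable, more “AI-like” text) are our own illustration; commercial detectors use their own models and thresholds.

```python
# Minimal perplexity sketch using GPT-2 via Hugging Face transformers.
# Assumes `torch` and `transformers` are installed; the model and the
# interpretation of the score are illustrative, not a real detector's setup.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Return GPT-2's perplexity for `text` (lower = more predictable)."""
    encodings = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # Scoring the text against itself gives the average cross-entropy
        # loss per token; exponentiating it yields the perplexity.
        outputs = model(encodings.input_ids, labels=encodings.input_ids)
    return torch.exp(outputs.loss).item()

print(perplexity("The quick brown fox jumps over the lazy dog."))
```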
Burstiness –
This measures the variation in sentence structure – similar to perplexity, but concerned with sentences rather than words. Low burstiness is indicative of AI generation. Language models predict the most likely word to follow in a sentence, which makes sentences smoother but shorter. The sentences also display a more conventional structure, a factor that lends AI writing a rather monotonous tone.
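Burstiness can be approximated just as simply. The sketch below is a toy proxy of our own (not any detector’s actual formula) that measures the spread of sentence lengths: uniform lengths score low, varied lengths score high.

```python
# Toy burstiness proxy: the standard deviation of sentence lengths in words.
# The regex-based sentence splitting is a simplifying assumption.
import re
import statistics

def burstiness(text: str) -> float:
    """Higher values mean more varied (more 'human-like') sentence lengths."""
    sentences = [s.strip() for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    return statistics.stdev(lengths)

print(burstiness("Short one. Then a much longer, winding sentence that "
                 "meanders before it finally stops. Okay."))
```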
Word frequency, sentence length, n-gram patterns, unusual semantic structures, redundancies, and overall lack of context can signal AI-generated content to the detectors.
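A few of these surface-level signals are easy to compute yourself. The snippet below is purely illustrative (the features and the example sentence are ours, not what any specific detector runs): it counts the most frequent words and flags repeated trigrams, two simple kinds of redundancy.

```python
# Toy extraction of surface signals: word frequency and repeated trigrams.
from collections import Counter

def surface_signals(text: str, n: int = 3) -> dict:
    words = text.lower().split()
    ngrams = [" ".join(words[i:i + n]) for i in range(len(words) - n + 1)]
    return {
        "top_words": Counter(words).most_common(5),
        # n-grams that occur more than once hint at repetitive phrasing.
        "repeated_ngrams": {g: c for g, c in Counter(ngrams).items() if c > 1},
    }

print(surface_signals("It is important to note that it is important to plan ahead."))
```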
What’s the problem with content detectors?
The main limitation of AI detectors is that they aren’t foolproof.
Similar to how AI-generated content has not yet reached a level of efficiency and creativity that makes it palatable, AI detectors are falling short of their intended purpose. Most detectors can only estimate what percentage of the content was written by a human or an AI, which does not solve the problem: you could change a couple of words and see the percentage swing in either direction.
Researchers have concluded that available detection tools are neither accurate nor reliable and may have a bias towards classifying the output as human-written rather than detecting AI-generated text.
Reasons for this failure:
Rapid advances in AI - Development of AI is progressing at a swift pace and content checkers are falling behind. As language models improve or become more complex, their output becomes even more realistic and therefore harder for these tools to detect.
Combining human and AI-generated content - Information nowadays is an amalgamation of both human and AI-generated content, and detection tools find it hard to identify which parts are human and which are AI. This is especially true for detection tools designed to catch specific AI patterns and structures.
Data Diversity and Generalization - AI content checkers heavily rely on training data to identify patterns associated with AI-generated content, but data diversity can make this challenging. Further, generalizing the patterns across different AI architectures and applications can be difficult, leading to potential false positives or false negatives in content detection.
Evasive content - This involves crafting content that is deliberately designed to evade AI content detectors. These “attacks” are often hard to counter or defend against, as they target and exploit vulnerabilities in the detection algorithms.
Contextual Understanding - Writing is layered with context and intent that is often hard for AI detection tools to comprehend. When this happens, content can be mislabeled as AI-generated.
Some examples of content detectors failing
Most of these scenarios have come up in the education sector, as it’s a pressing concern for educators. However, the learnings apply to anyone interested in using AI content.
Texas A&M – Students were temporarily denied their diplomas when a professor flunked the class after using ChatGPT itself as an AI detector on their final submissions. He asked ChatGPT whether the software had been used to write the papers, and the tool claimed it had authored every single one.
However, ChatGPT isn’t designed to detect material created by AI. Students offered exonerating evidence, including timestamps on Google Docs, but the professor initially ignored this. The university investigated the matter and the students were cleared of any academic dishonesty, with their grades assessed offline. The university is now developing policies on the use of AI in classrooms.
Turnitin - This plagiarism tool added a feature to detect the use of ChatGPT in content, but ran into trouble when the feature produced skewed results. In an experiment where Turnitin tested 16 samples of essays written by 5 high-school students, it incorrectly tagged 50% of the samples as AI-generated.
Since the launch of ChatGPT, Turnitin’s error rate has leaped from 1% to 4%. Even their Chief Product Officer, Annie Chechitelli, said - “We cannot mitigate the risk of false positives completely given the nature of AI writing and analysis, so it is important that educators use the AI score to start a meaningful and impactful dialogue with their students in such instances.”
TOEFL - In a recent study, AI content detectors were tested on a dataset of essays written for the TOEFL test, obtained from a Chinese education forum. Another set of essays, penned by American eighth graders, was used for comparison. The results revealed dramatically lower accuracy on the TOEFL essays.
While these essays were sourced from only one country, the study found evidence that content from non-native English writers tends to display lower perplexity than content from native writers. As a result, the TOEFL essays, and any other non-native English writer’s content, seemed more “AI-like” to these detection tools. The detectors not only showed a bias but also failed in their intended function.
How to bypass AI detection tools
The failure of AI detection tools is demonstrated by the ease with which you can bypass them. While we don’t recommend doing this for assignments, when handling sensitive company data, or in scenarios where you’ve been explicitly told not to use AI, you can use this quick cheat-sheet to get past the detectors. Remember to always add your human insight to avoid creating generic and predictable content.
Use AI for creating your first draft, not the final one.
Introduce your own sentences or paragraphs into the AI-generated content.
Put your AI-generated content through a paraphrasing tool.
Check the content for repetitive phrases or words.
Test it yourself – keep modifying till it passes the free AI detection tools.
Introduce randomness into the content, with colloquialisms and idioms.
Our recommendations
For people checking content – Don’t use AI detectors yet
For people creating content – Be smart when using AI
AI generators and AI content detectors will eventually play their role in maintaining the authenticity of digital content. We expect enhanced training data, robust defenses, better explainability, and context-aware techniques to eventually bring AI detectors on par with AI generators.
It’s an important piece of the puzzle and can hopefully allow us to embrace transparency, fairness, and robustness in digital content.
In case you’re wondering, here are some popular AI detection tools:
GPTZero
Writer
Botometer
Copyleaks
OpenAI’s Text Classifier
Use them at your own discretion!
If you’ve had an interesting experience with detectors or think you know which is the best AI detection tool, write to us here.
Want to cut the clutter and get information directly in your mailbox?