Why even AI faces difficulties detecting AI-generated texts

05 January 2026 06:57

People and institutions are increasingly confronting the fallout from AI-generated writing. As demand grows to identify machine-produced text — particularly in schools and universities — researchers have examined whether AI tools can reliably spot AI authorship, often with unexpected findings.

Although some heavy users of AI tools show a stronger ability to recognise such material, this skill is far from common. Individual human assessments are often inconsistent, making human judgment too unreliable for large-scale use. As a result, AI-powered detection systems remain the preferred option for screening texts. Yet, as an article published by LiveScience explains, even AI systems struggle when tasked with identifying content produced by other AI models.

At a basic level, AI text detection follows a straightforward process. A piece of writing of uncertain origin is fed into a detection tool — typically another AI system — which analyses the text and returns a score, usually a probability, estimating how likely it is that the content was machine-generated. That score is then used to guide decisions, such as whether a rule has been violated and penalties should apply.
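
In code, that workflow reduces to a score-and-threshold check. The sketch below is purely illustrative: `detect_ai_probability` is a hypothetical stand-in for whatever detection tool is in use, and the 0.9 threshold is an assumed policy value, not one taken from the article.

```python
# Minimal sketch of the screening workflow described above:
# score a text, apply a policy threshold, return a decision.
THRESHOLD = 0.9  # assumed cut-off; real policies vary

def screen(text: str, detect_ai_probability) -> str:
    score = detect_ai_probability(text)  # estimated P(machine-generated)
    if score >= THRESHOLD:
        return f"flagged for review (p = {score:.2f})"
    return f"no action (p = {score:.2f})"
```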

Behind this simple outline, however, lies significant complexity. Several underlying assumptions must be addressed. Which AI systems might realistically have produced the text? Do you have access to those tools? Can they be run directly, or can their internal mechanisms be examined?

One further factor is especially important: whether the AI system that generated the text deliberately embedded signals designed to make later identification easier.

These signals are known as watermarks. Watermarked text appears normal to readers, but contains subtle embedded markers that are invisible to casual inspection. With the appropriate key, these markers can later be detected to confirm that the text originated from a watermarked AI system. This method, however, depends on cooperation from AI developers and is not universally implemented.
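
One well-known family of schemes, sometimes called "green list" watermarking (the article does not describe a specific scheme; this is an illustrative example), uses a keyed hash to split the vocabulary at each generation step, and the generator quietly favours the "green" half. A toy Python sketch, with a placeholder key:

```python
import hashlib

SECRET_KEY = b"demo-key"  # illustrative; real keys are held by the AI vendor

def is_green(prev_token: str, candidate: str) -> bool:
    # A keyed hash of (previous token, candidate token) deterministically
    # assigns each candidate to a "green" or "red" half of the vocabulary.
    digest = hashlib.sha256(
        SECRET_KEY + prev_token.encode() + candidate.encode()
    ).digest()
    return digest[0] % 2 == 0
```

During generation the model nudges sampling toward green tokens, so the text still reads normally, but green tokens become statistically over-represented; only someone holding the key can recompute the lists.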

A common strategy is to use AI systems to detect AI-generated writing. In this approach, detection is treated as a classification problem, similar to filtering spam emails. After training, the detector evaluates new text and determines whether it more closely resembles examples of AI-generated or human-written content it has previously encountered.

This learning-based method can be effective even when little is known about which AI tools produced the text, as long as the training data includes outputs from a wide range of models.
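
A minimal sketch of such a classifier follows, using scikit-learn (a library chosen here for illustration, not named in the article) and placeholder training texts; a real detector would be trained on large corpora drawn from many models, as noted above.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Placeholder corpora; labels: 0 = human-written, 1 = AI-generated.
human_texts = ["Grabbed coffee, missed the bus, wrote this on my phone.",
               "The committee met twice before reaching a verdict."]
ai_texts = ["Certainly! Here is a concise summary of the key points.",
            "In conclusion, several factors contribute to this outcome."]

detector = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                         LogisticRegression())
detector.fit(human_texts + ai_texts,
             [0] * len(human_texts) + [1] * len(ai_texts))

# predict_proba returns [P(human), P(AI)] for each input text.
score = detector.predict_proba(["Text of uncertain origin."])[0][1]
print(f"estimated probability of AI authorship: {score:.2f}")
```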

If the relevant AI tools are known and accessible, another approach becomes viable. Rather than training a separate detector, this method searches for statistical patterns tied to how specific models generate language. Some techniques examine how likely a given AI model would be to produce the exact sequence of words in a text. An unusually high probability can indicate that the model itself generated the content.
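
One concrete way to measure this, assuming the candidate model is openly available (here GPT-2 via the Hugging Face transformers library, chosen purely for illustration), is to compute the average per-token log-probability the model assigns to the text:

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def avg_log_prob(text: str) -> float:
    # Passing input_ids as labels makes the model return the mean
    # cross-entropy over tokens; its negation is the average
    # log-probability the model assigns to the text.
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        out = model(**enc, labels=enc["input_ids"])
    return -out.loss.item()

# Text the model finds unusually "easy" (a high average log-probability)
# is more plausibly its own output than typical human writing.
print(avg_log_prob("The quick brown fox jumps over the lazy dog."))
```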

When text is produced by an AI system that includes watermarking, the challenge becomes one of verification rather than detection. Using a secret key supplied by the AI provider, verification tools can check whether the text aligns with outputs from a watermarked system. This method relies on external information, not just clues within the text.
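
Continuing the toy green-list scheme sketched earlier (reusing its `is_green` function and secret key), verification reduces to counting: unwatermarked text lands near the chance rate of one half, while watermarked text sits well above it over a long enough passage.

```python
# Requires is_green and SECRET_KEY from the embedding sketch above.
def green_fraction(tokens: list[str]) -> float:
    # Recompute each token's green/red assignment with the secret key
    # and measure how often the observed token landed in the green half.
    pairs = list(zip(tokens, tokens[1:]))
    hits = sum(is_green(prev, cur) for prev, cur in pairs)
    return hits / max(len(pairs), 1)

# Near 0.5 -> consistent with chance (likely not watermarked);
# well above 0.5 over a long text -> evidence of the watermark.
print(green_fraction("the model wrote this example sentence".split()))
```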

Limitations of detection tools

Each category of detection method has inherent weaknesses, making it difficult to identify a universally superior solution. Learning-based detectors, for instance, depend heavily on the similarity between new texts and their training data. Their effectiveness declines as language patterns shift or as newer AI models emerge.

Statistical approaches face different obstacles. Many depend on assumptions about how particular models generate text or require access to model-specific probabilities. When models are proprietary, frequently updated, or unknown, these assumptions can fail, limiting real-world usefulness.

Watermarking sidesteps some of these inference challenges, but it depends on cooperation from AI vendors and applies only to text generated with the watermark enabled.

More broadly, AI text detection has become part of an ongoing arms race. Detection tools must be accessible to be effective, yet that openness allows creators to design ways around them. As text-generation systems advance and evasion methods improve, detectors are unlikely to maintain a decisive advantage.

By Nazrin Sadigova

Caliber.Az