Make a Typo or You’re AI. A Linguist Explains How to Recognize Bot Text.

[Image: Human Creativity Meets AI Imperfection]

The Paradox of Perfection: Why Human Flaws Now Signal Authenticity in the Age of AI

In recent years, a worrying trend verging on paranoia has emerged in the digital world. We’ve reached a point where, due to the rapid advancement of artificial intelligence (AI), basic linguistic correctness is almost seen as incriminating evidence in online ‘investigations’ into AI-generated content. This phenomenon forces human writers to grapple with a new dilemma: how to appear authentic when perfection is increasingly equated with artificiality.

Correctness Mistaken for Lack of Authenticity?

It started subtly. For me, AI inadvertently ‘stole’ the en dash. How can an algorithm take away a punctuation mark from a human? Let me explain. This seemingly minor detail perfectly encapsulates the absurdity of the generative AI era we now live in – and write in.

The en dash (–), functioning as a longer hyphen, is my preferred choice for clarity and aesthetic flow. While a comma is often effective, it can sometimes disrupt the rhythm or visual appeal of text. However, large language models (LLMs), trained on vast datasets of correct grammar and punctuation, began inserting en dashes almost everywhere. Consequently, this elegant punctuation mark has become a ‘red flag’ for discerning internet users, leading many to label material containing frequent en dashes as ‘AI-generated.’ Online, you now find guides advising, “Want to sound human? Remove your en dashes.” It makes one wonder: why should we deliberately avoid good writing practices?

Another noticeable pattern is the pervasive use of bullet points. Chatbots frequently present key information in bulleted lists. While often practical and easier for readers to digest, this has become problematic for me as an author. Sometimes, I intuitively choose continuous text, even when a list would be clearer and more helpful, simply to avoid the ‘bot’ label. I find myself rejecting a superior method of conveying information, driven by the subconscious fear of being perceived as an AI.

This apprehension stems primarily from years of observing online discussions across forums and social media platforms. There’s a prevailing belief that the aforementioned elements – the precise en dash and ubiquitous bullet points – are hallmarks of AI-created content.
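The forum heuristics described above can be made concrete. Below is a minimal, purely illustrative sketch (in Python, chosen only for illustration) of the kind of naive ‘AI detector’ that counts en dashes and bulleted lines. The function name, thresholds, and scoring scheme are all invented here; real detection tools rely on statistical language models, not simple punctuation counts.

```python
# Naive "AI text" heuristic based on the surface features discussed above.
# The thresholds and scoring scheme are invented for illustration only.

def naive_ai_score(text: str) -> float:
    """Return a crude 0..1 'AI-likeness' score from punctuation habits."""
    if not text.strip():
        return 0.0
    lines = text.splitlines()
    words = max(len(text.split()), 1)
    en_dashes = text.count("\u2013")  # the en dash this article defends
    bullets = sum(1 for l in lines
                  if l.lstrip().startswith(("-", "*", "\u2022")))
    # Density of "suspicious" features per 100 words, capped at 1.0.
    score = (en_dashes + bullets) / words * 100 / 10
    return min(score, 1.0)

human = "I wrote this quickly, no lists, no dashes, just one long sentence."
bot = "Key points \u2013 summarized:\n- clarity\n- structure\n- polish"
print(naive_ai_score(human) < naive_ai_score(bot))  # prints: True
```

The sketch shows the core absurdity: any sufficiently tidy human text, full of en dashes and well-organized lists, scores ‘more artificial’ than sloppy prose, which is precisely why such surface heuristics misflag careful writers.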

When a Tool Becomes a Burden

It’s disheartening when correctness itself becomes circumstantial evidence in an accusation of deception. ‘Deceiver’ is a strong word. Yet, should the mere act of utilizing readily available tools make one feel as guilty as a fraudster? Even though the AI boom began years ago and these tools have become indispensable for many professionals, AI usage often remains a hushed topic. While hiding its use from employers is one aspect, a deeper concern lies in the personal perception of guilt.

According to a recent European labor market report, a significant percentage of workers – as high as 34% in some studies – admit to feeling as though they are ‘cheating’ when using AI tools for their professional duties. The use of technology is sometimes associated with a perceived detriment to one’s own skills, creating a sense of falsifying reality because the final outcome without AI would undoubtedly be different.

This feeling may dissipate over time, as future generations embrace AI as an everyday reality rather than a revolutionary novelty. The pace of work enabled by today’s algorithms was physically impossible just a few years ago. But for now, this anxiety can be paralyzing, at least for me.

During this transitional period, critical questions arise: Must we intentionally create content that is inferior or deviates from our ideal standards merely to pass an imposed ‘authenticity filter’? Should we attribute beauty and perfection to AI, while humans, as inherently imperfect beings, are expected to make mistakes? Are we forced to choose between prioritizing quality and risking accusations, or deliberately ‘flawing’ our text to sound more human?

Dr. Mikołaj Borkowski, a linguist from Jagiellonian University, offers a different perspective on this dilemma. He suggests, “Perhaps it’s better to weave in a joke or demonstrate a distinctive style. Errors are, of course, human, but I wouldn’t assume that LLMs write better than humans. They certainly write much, much more and more efficiently, so weaker text often displaces better but more expensive and time-consuming human-written content. The imperfection of authentic, human-written text also differs from the strange awkwardness or hallucinations that appear in automatically generated texts.”

Who Is the Real Teacher?

Humans are intelligent beings with a boundless capacity for continuous learning. Herein lies a paradox: as the internet is flooded with algorithmically generated content, we inadvertently begin to absorb it. Whether professional writers or casual communicators, we become steeped in this ‘smooth’ machine style and start adopting its principles ourselves.

So, who is teaching whom? Is it humans teaching AI? AI teaching humans? Or are humans teaching other humans through the intermediary of machines? This complex interplay creates intricate feedback loops.

According to research by Samuel Greengard published in Communications of the ACM (CACM), people, to some extent, mimic artificial intelligence systems by incorporating characteristic phrases into their content and daily conversations. Greengard identifies complex feedback loops that blur the line between human thought and the reasoning of large language models (LLMs). He also notes that the mutual influence between AI and its human users isn’t always negative. In some cases, humans can become more competent, expressing themselves more clearly, concisely, and courteously, which also benefits their writing skills.

However, this doesn’t imply that AI style entirely replaces our own, or that it can simply be ‘copied.’ As Dr. Borkowski further explains, “The style of LLMs comprises several elements: a specific, somewhat elevated register; specialized vocabulary; smooth but often generic sentences; and content that is ‘diffused’ across an entire paragraph. Models imitate existing texts and maximize the probability of correctness, whereas our human linguistic production proceeds differently, based on the use of internalized semantic structures. Therefore, it would be difficult to adopt all the stylistic features of AI chatbots.”

The problem, however, broadens when we recognize that these characteristic features begin to ‘stifle’ the technology itself. Dr. Borkowski highlights the phenomenon of ‘slop,’ noting that it also applies to texts that are created rapidly, flood the internet, are not verified, and lead to a negative feedback loop. “Models are increasingly trained on lower-quality linguistic material, which appears to be a bottleneck for the further development of current-generation language models,” he states.

For instance, some analyses projected that AI-generated articles would account for nearly 52% of all online content by May 2025, contributing to this ‘slop’ effect.

My Experience as an AI Trainer

I confess that for several months, I actively trained artificial intelligence for compensation. It’s quite probable that you, too, are doing this (for free) while using various platforms or chatbots. For example, many popular AI chatbots utilize user conversations to train their models. While this option can usually be disabled, it’s often set as a default, meaning many users may not have taken this simple step.

Returning to my experience: I corrected errors made by LLMs and created extensive instructions on what the machine should avoid. Although this was primarily driven by the pursuit of linguistic correctness, in aspects such as text readability and clarity, I inadvertently imposed my own style – my conviction of what looks good and what doesn’t. One could say that when I now use certain AI tools, I am, in a very small fraction, utilizing my own previously imparted knowledge.

The Battle for Authenticity: Beyond Words

This issue extends far beyond written words, mirroring similar trends in static and video imagery. If you use social media, you’ve undoubtedly noticed that content presenting something ‘perfect’ or unusual often garners comments like ‘100% AI.’ Of course, with generative AI so advanced by 2026 that it frequently blends seamlessly with reality, vigilance is certainly warranted. However, I can only imagine the pain for a creator who has dedicated years to honing their craft, only to have their source of pride diminished by someone suggesting they attempted to deceive the audience through AI.

The ‘Imperfect by Design’ Trend

The battle for authenticity is ongoing, and unfortunately, it sometimes comes at the cost of perceived quality. A compelling example is the analysis conducted by Canva – a popular graphic design platform – on design trends for 2026. Among their findings, a recurring theme was ‘Imperfect by Design,’ suggesting that many creators are moving away from striving for perfection and instead ‘embracing human imperfections’ to make their work feel personal, raw, and honest. Virtually the same forecast emerges from Adobe, a leader in creative tools, where elements of imperfection are often deliberately incorporated into projects emphasizing authenticity.

Passion in the Era of Generation

The same mechanism impacts my greatest (to date) passion: music. For years, I pursued perfection in production, believing there was always more to learn and master. In the past year, AI tools in this field – like Suno (a music generation platform) – have experienced a renaissance, and platforms such as YouTube and Spotify are awash with AI-generated content. This ‘ideal’ is now within immediate reach. And for me, it killed my motivation.

It’s not the same process, not the same joy of creating from scratch and continuous learning. When my own diligently crafted work must compete with content generated in mere seconds, the desire to pursue this path further diminishes. There is talk of labeling such content as AI-generated, but believe me, such labels are incredibly easy to omit or conceal. And if the aforementioned trend seen in writing and design also becomes prevalent here, then truly, I feel I have no place left.

When the Dust Settles, Only History Remains

A product that is ‘too good’ sometimes loses credibility in the eyes of the audience. As years pass and generative AI reaches new levels, accelerating its development, we may no longer be able to distinguish human from machine-made content. Perhaps we should accept that these distinctions might not ultimately matter and stop obsessing over how a task was accomplished?

In the end, it should always be the story that guides us – without it, it doesn’t matter who created something or with what assistance. But as long as there is room for this kind of discussion, and it holds significance for us, I will continue to defend my en dash. And if I ever stumble and create content I’m not entirely satisfied with? Well, I’ll always have a convenient excuse: ‘I did it on purpose, so you’d have no doubt that I remain human.’

Frequently Asked Questions (FAQ)


Why is linguistic correctness sometimes seen as a sign of AI generation?

AI language models are trained on vast amounts of text and are highly proficient in grammar, punctuation, and consistent style. This often leads to text that is “too perfect” or exhibits specific patterns (like overuse of en dashes or bullet points) that human writers might vary. Consequently, some readers have begun to associate such perfection with AI generation, creating a paradox where human-like imperfections are now perceived as markers of authenticity.


How does the “Imperfect by Design” trend relate to AI content creation?

The “Imperfect by Design” trend, identified by creative platforms like Canva and Adobe, suggests a deliberate move by creators to embrace human flaws, rawness, and personal touches in their work. This trend is a direct response to the increasing perfection and seamlessness of AI-generated content. By intentionally incorporating imperfections, human creators aim to differentiate their work, emphasize authenticity, and resonate more deeply with audiences looking for genuine human expression.


Is it true that humans are starting to mimic AI’s writing style?

Yes, research suggests a phenomenon where humans can inadvertently begin to mimic the “smooth” and consistent style of AI-generated content after prolonged exposure. This creates a feedback loop where AI influences human communication, and human-modified inputs then feed back into AI training. While this can sometimes lead to clearer or more concise communication, it also raises concerns about the erosion of distinct human writing styles and the potential for a homogenization of linguistic expression.


What are the long-term implications if AI-generated content becomes indistinguishable from human-created content?

If AI-generated content becomes truly indistinguishable from human-created content, it could fundamentally alter our perception of authenticity, value, and authorship. While it might democratize content creation and increase efficiency, it could also lead to a loss of appreciation for human craft, emotional investment, and unique perspectives. The article suggests that ultimately, the compelling ‘story’ itself should be the primary guide, rather than the method of its creation, but the transition period presents significant challenges regarding trust and the definition of creative worth.

Source: Original work

Opening photo: sea sae, Lyda / Adobe Stock / own montage
