ZDNET’s key takeaways
- AI models follow certain structural rules when generating text.
- This can make it easier to identify their writing.
- They tend towards contrasts, for example: “It’s not X — it’s Y.”
The past few years have seen a flood of AI-generated text wash over the internet. As the models behind this text improve, so too does their ability to imitate the intricacies of human speech; at the same time, our methods for detecting it have been improving, and there’s been an active online dialogue about some of the most common quirks of AI-generated text.
Historically, one of the more well-known tells of ChatGPT, for example, has been the chatbot’s fondness for em dashes. It would often punctuate its sentences with em dash-bounded breaks to emphasize a point — as if a longer, more breathless sentence would have a more potent effect on the reader — peppering in supportive arguments mid-sentence in a way that to some users feels antiquated and mechanical — but to a computer trained on a vast quantity of training data littered with em dashes is totally normal…you get the idea.
Also: I’ve been testing AI content detectors for years – these are your best options in 2025
Following complaints about ChatGPT’s em dash proclivity, and in line with a commitment to build models that can be more easily customized to the preferences of individual users, OpenAI CEO Sam Altman announced in an X post last month that ChatGPT would stop using those punctuation marks in its outputs if prompted to do so. While many users probably celebrated the news, it also meant that writing generated by the chatbot would be that much more difficult to detect: bad news for teachers, many employers, and anyone else for whom it’s important to have a reliable means of distinguishing human- from AI-generated text.
(Disclosure: Ziff Davis, ZDNET’s parent company, filed an April 2025 lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)
Thankfully, there are plenty of websites that provide exactly that: ZeroGPT and Grammarly’s AI Detector, for example, both allow you to simply paste in a piece of writing (a suspicious text, for example), click a button, and have the tool automatically scan the text for signs of an AI origin and report the results. They’re not completely foolproof, but they’re a generally reliable way of catching some of the more conspicuous giveaways.
Five red flags to look out for
If you’re not interested in adding even more websites to your daily routine, though, or if you’re just looking to sharpen your ability to quickly ID a piece of AI-generated writing (a skill definitely worth honing in this day and age), then there are several linguistic patterns you can keep an eye out for.
Here are five telltale indicators of AI-generated writing (with a rough script sketch for spotting them after the list):
1. Rule of threes: Human writers often present arguments by citing three examples, which seems to scratch a deep psychological itch in our brains: once is a fluke, twice a coincidence, thrice a pattern, to paraphrase an old saying. AI does the same thing, but to a ridiculous extent. Triplets show up often in its outputs. For example, consider this opening paragraph from ChatGPT after I asked it to present its best possible argument that the Earth is a flat disc (to be fair, it first clarified that its response was purely “a rhetorical exercise” and that it wasn’t explicitly endorsing the Flat Earth Theory):
If the world were a globe spinning at thousands of miles per hour, its curvature should be directly observable in everyday life. Yet long-distance observations across lakes, deserts, and oceans repeatedly show objects—ships, shorelines, buildings—appearing exactly where they should if the surface were flat. Engineers routinely account for curvature in theory, but in practice, surveyors, pilots, and construction crews rely on level measurements that behave as though Earth were an extended plane. The simplest explanation for this consistency is that the ground beneath us is not curving away at all.
That’s three sets of three in a paragraph of about 90 words.
2. Arguments framed in contrasting language: Chatbots will also often try to drive home their points by first offering a counterpoint. Consider this part of a response from ChatGPT when I asked it for a sales pitch from a company that offers commercial flights to Mars: “Mars isn’t just a planet — it’s your next unforgettable destination.” I can’t imagine any human writer ever even thinking of writing the phrase “Mars isn’t just a planet…”
3. Monotonous sentence structure: AI-generated writing also tends to be uniform. While there is, of course, some variation, sentences are often roughly equal in length, producing paragraphs that feel a little too clipped. Human authors usually add variety by mixing up their sentence and paragraph lengths. Try reading a suspicious piece of writing out loud: if it sounds robotic in its cadence, it very well could be.
4. Short and unnecessary rhetorical questions: AI-generated sentences aren’t always the same length, though. Chatbots will often, for some reason, sprinkle in very short (one- or two-word) questions. Think: “And honestly?” This showed up when I recently asked ChatGPT for a cheeky summary of my personality based on my conversations with it over the past year. And when I requested a funny description of the Rocky Mountains, part of its response was: “Wildlife? Oh, they’re just casually judging your snack choices from the sidelines — moose with disdain, marmots with sass.” It wouldn’t make sense for a human writer to start that sentence with a question, since no one had asked about wildlife. It’d be much more straightforward to simply write: “The wildlife is just casually judging…”
5. Constant hedging: Whereas human writers tend to home in on a specific point, chatbots often lean on indirect, hedging language and qualifiers (“This could mean…” or “maybe…”), which can give the impression of a nuanced, balanced assessment but usually ends up reading as vague and meandering.
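If you want a sense of how these tells can be checked mechanically, here is a rough Python sketch that scans a passage for all five. To be clear, the phrase lists, regular expressions, and word-count cutoffs are illustrative assumptions on my part, not how ZeroGPT, Grammarly, or any commercial detector actually works, and a quick heuristic like this will happily flag plenty of human writing too.

```python
import re
import statistics

# Illustrative hedge phrases and contrast patterns; these are guesses, not a vetted lexicon.
HEDGES = ["could mean", "might", "maybe", "perhaps", "arguably"]
CONTRAST = re.compile(r"\bisn[’']t just\b|\bnot just\b.*\bit[’']s\b", re.IGNORECASE)

def split_sentences(text: str) -> list[str]:
    # Very rough sentence splitter; good enough for a heuristic pass.
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]

def red_flags(text: str) -> dict:
    sentences = split_sentences(text)
    words_per_sentence = [len(s.split()) for s in sentences]

    # 1. Rule of threes: "a, b, and c" style lists of exactly three items.
    triplets = re.findall(r"\b\w[\w-]*, \w[\w-]*,? (?:and|or) \w[\w-]*\b", text)

    # 2. Contrast framing: "X isn't just ... it's Y."
    contrasts = CONTRAST.findall(text)

    # 3. Monotonous rhythm: low spread in sentence length (needs 2+ sentences).
    spread = statistics.pstdev(words_per_sentence) if len(words_per_sentence) > 1 else 0.0

    # 4. Short rhetorical questions: one- or two-word questions, e.g. "And honestly?"
    short_questions = [s for s in sentences if s.endswith("?") and len(s.split()) <= 2]

    # 5. Hedging: crude count of hedge phrases anywhere in the text.
    lowered = text.lower()
    hedges = sum(lowered.count(h) for h in HEDGES)

    return {
        "triplets": len(triplets),
        "contrasts": len(contrasts),
        "sentence_length_stdev": round(spread, 1),
        "short_questions": len(short_questions),
        "hedges": hedges,
    }

if __name__ == "__main__":
    sample = ("Mars isn’t just a planet — it’s your next unforgettable destination. "
              "And honestly? Surveyors, pilots, and construction crews might agree.")
    print(red_flags(sample))
```

Run on that small sample, the sketch counts one contrast framing, one triplet, one two-word question, and one hedge. A human essay will trip some of these checks as well, which is why the patterns above are best treated as clues to weigh together rather than as proof on their own.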