If you’re reading this, I assume it’s because you’re interested in learning how to spot writing that was produced by ChatGPT.
There are lots of people in this boat. Maybe you’re:
- A teacher who’s worried your students are using AI to do their homework (they are)
- Someone who hires writers and wants to make sure you’re not paying for “work” they did with ChatGPT
- Just somebody who keeps seeing posts on social media that you think are AI, and you want to confirm you’re not crazy
Whoever you are, you’re in the right place. I’ll go over whether you can just use automatic AI detectors to spot ChatGPT (spoilers: sometimes, but not always!), why it’s a good idea to learn how to do it yourself, and what ChatGPT’s main “tells” are.

Can you detect ChatGPT content with an AI detector?
There’s a lot of software that you can supposedly use to tell if something was written by ChatGPT. These tools look for patterns in the writing that are suggestive of AI.
Grammarly, for instance, comes with an AI detector. There’s a popular checker called ZeroGPT, another called GPTZero (by separate companies), and dozens of others.
All of this software used to be pretty bad, to the extent that I considered it worse than useless and recommended staying far away from it. It was just too unreliable, not much better than a coin flip.
However, AI detection took a big step forward in 2024. I’m not sure what the breakthrough was, but a lot of ChatGPT checkers suddenly became usable around that time.
How reliable are automatic AI checkers, really?
By and large, AI checking software is now quite accurate when it comes to detecting low-effort AI slop. That means content from ChatGPT (or Gemini, etc.) that somebody:
- Created without a very thoughtful prompt
- Posted online without substantial editing
Reputable AI checkers catch this type of writing most of the time, and they rarely return false positives.
Sometimes, their confidence levels are still embarrassingly low—e.g., they’ll say they’re just 45% confident about content that a skilled human reviewer could tell without a doubt was produced with ChatGPT. But even 45% confidence is pretty damning, and that makes the checkers useful.
Limitations of AI checkers
AI detection software remains pretty bad at catching skillfully produced AI content. I’ve tested this quite a few times; if I’m smart about how I instruct ChatGPT to generate an article, the result usually slips past the detectors. The same is true if a decent human editor gives AI content even a light rework.

Moreover, if AI produces a type of writing that’s relatively rare, it will bamboozle most detectors. Most are quite good at analyzing:
- Blog posts
- Newspaper articles
- Academic papers
- Social media posts and comments
That’s because there’s a lot of similar content out there for them to look at. However, they aren’t that good at analyzing, say, sonnets.
Why is it a good idea to learn to spot ChatGPT content yourself?
Again, if you use a reputable AI checker in 2025, it probably isn’t going to return many false positives. However, it will sometimes return false negatives, especially with AI content that was produced with a little more care.
It’s worth learning ChatGPT’s quirks for that alone. You’ll sometimes spot more content that your detector misses.
You won’t always think to use a detector
What’s more, even if AI detection software develops to the point that it’s 100% accurate, you aren’t going to plug everything you read into it. You just aren’t.
If you’re a teacher, for instance, maybe you’ll plop all of your students’ assignments in there. You’ll catch a lot of clumsy ChatGPT’d essays, and that’s great, but AI content is everywhere these days:
- Social media platforms, including Instagram, Facebook, LinkedIn, X (Twitter), and Reddit
- Product review pages on Amazon / the App Store / Google Play, etc.
- Wikipedia
- Blogs and even newspapers
If you rely on detection software, most of this will get by you, and not all of this AI content is harmless.
AI often wants to sell you something
Nine times out of ten, when I see AI content, it’s clearly part of an online marketing campaign. One example is this Reddit post I saw recently about tea (I’m a tea snob and I spend a lot of time lurking in this community).
On the surface, that looks like a pretty innocuous post comparing different types of teapots, but it’s AI from beginning to end. Upon viewing that account’s other posts, I saw that it exclusively posts AI content. You’re meant to read it, get intrigued, and click into the poster’s website and presumably buy some tea.

Most AI content is like this. It’s meant to do one of the following:
- Directly advertise something
- Trick you into clicking into a site (and generate ad revenue for the owner)
- Increase your awareness of the owner’s brand
By and large, people don’t post AI content for fun. If it isn’t an advertisement, it has some sort of other ulterior motive, such as pushing a political message.
What this means to you
None of this means that AI content is necessarily worthless. As a writer, I’ll admit that I find it distasteful, but if you enjoy reading some of it, so be it.
But the fact that a machine wrote it is, at the very least, valuable context that you deserve to have before you decide how you want to engage with it.
Think about it: if you read an article about a political candidate or a review of a brand, you’d want to know if the writer or reviewer was working for that candidate or brand.
In the same way, I think most people would want to know if the content was generated by an AI, quite possibly on behalf of whoever was being written about.
How to spot AI (ChatGPT) content without a detector
Hopefully, I’ve convinced you that even in a world with AI detectors, you need to learn how to spot AI content. It’s already a big part of being media-savvy, and it will only get more important as time goes on.
Fortunately, ChatGPT is pretty easy to spot. That’s because the bot has a very distinctive writing style. Unless you prompt it carefully—which most people don’t have the time or know-how to do—its writing displays consistent patterns that you can spot easily.
I’m going to walk you through four of its most characteristic patterns. There are others, but internalizing these will be enough for you to spot the bot the vast majority of the time.
Pattern #1: doubling and tripling (lists)
ChatGPT likes writing nouns and adjectives in groups of two or three, not just one. It does this even if one would be enough.
Consider a phrase like “This sheds light on the historical significance” of something. The bot will almost always write something like, “This sheds light on the historical significance and profound meanings”—the most prominent noun phrase has to have a twin, even though it’s not contributing much to the sentence’s meaning.
Here’s a real example from a writing sample that crossed my desk. I’ve highlighted the doubling and tripling in blue.
Notice that the “tripled” examples at the end are meaningfully distinct—“warmth,” “candlelight,” and “good company” are different things—but the first two, “coziness” and “comfort,” are pretty redundant. A skilled writer or editor will notice that and delete one of them, but unless you explicitly tell it to, ChatGPT almost never will.
Pattern #2: punctuation (em dashes)
This is a big one. ChatGPT absolutely loves em dashes (—) to an almost comical degree.
This quirk is famous enough that a lot of people think all you have to do to spot ChatGPT is look for dashes. It isn’t quite that easy. A lot of human writers are chronic over-dashers, too (I’m one of them).
However, ChatGPT doesn’t use dashes quite the way that human beings do. When a human writer employs a dash—as I’m doing in this sentence—it’s usually to insert a parenthetical clause, a little side note to the reader. Conversely, when ChatGPT inserts a dash, it’s almost always purely for emphasis.
Here’s another example. This is from a comment I saw on a social media rant about how someone’s mother-in-law stole a chicken sandwich from her fridge (yes, really). I’ve highlighted the dashes in red.
As a rule of thumb, if the dashes come in a pair and you could replace them with parentheses, it’s likely to be a human pattern; if it’s just one dash and you could replace it with a semicolon, comma, or period, it’s suggestive of AI.
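If you want to automate that rule of thumb, here’s a minimal sketch in Python. The “paired vs. single” split is my own rough mechanization of the heuristic above, not an exact science; treat the counts as hints, never as a verdict.

```python
import re

def dash_profile(text: str) -> dict:
    """Rough tally of em-dash usage in a passage.

    Paired dashes wrapping a side note (parenthetical use) lean human;
    lone dashes that could be a comma or semicolon lean AI.
    This is a crude heuristic, not a verdict.
    """
    # Dashes that wrap a clause, e.g. "a dash—like this—in a sentence"
    paired = len(re.findall(r"—[^—]+—", text))
    total = text.count("—")
    single = total - 2 * paired
    return {"total": total, "paired": paired, "single": single}

sample = "Tea culture is about slowing down — it frames your whole day."
print(dash_profile(sample))  # {'total': 1, 'paired': 0, 'single': 1}
```

A high `single` count in a short passage is the AI-leaning signal described above; a passage dominated by `paired` dashes looks more like a chronic human over-dasher.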

When you try to spot the bot, don’t look for single instances of these writing quirks. Look for patterns.
Pattern #3: clauses beginning with present participles
“ChatGPT tends to favor sentences with present participles, enabling you to spot the bot with ease.”
See the -ing verb in that sentence, after the comma? For those who slept through grammar class, that’s a present participle. ChatGPT absolutely loves those, and it especially loves using them in that exact way: after a comma, to start a sentence’s final clause.
Here’s another example.
I cannot emphasize enough how often ChatGPT does this. Sometimes, if you don’t rein it in, it will end 2–3 sentences in a single paragraph this way, repeating the pattern for several paragraphs in a row.
If you see that, you’re reading AI content. I promise.
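This pattern is mechanical enough that you can search for it with a regex. Here’s a quick sketch; the pattern is my own rough approximation (words like “during” or “something” after a comma will false-positive), so take hits as hints rather than proof.

```python
import re

# Matches a comma followed by an "-ing" word: the shape of ChatGPT's
# favorite sentence-final participial clause ("..., enabling you to...").
# Rough approximation; non-participle "-ing" words also match.
PARTICIPLE_CLAUSE = re.compile(r",\s+(\w+ing)\b")

def participle_hits(text: str) -> list[str]:
    """Return the -ing words that start a comma-introduced clause."""
    return PARTICIPLE_CLAUSE.findall(text)

sample = ("Trust transforms uncertainty into collaboration, turning "
          "conflicts into opportunities, making the bond stronger.")
print(participle_hits(sample))  # ['turning', 'making']
```

Two or more hits per paragraph, sustained over several paragraphs, is exactly the repetition described above.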
Bonus round: more patterns!
I’ve gone over the big ones, but for good measure, here are a few other things that ChatGPT likes including in its writing:
“Emphasizer” words
There are a few words that the bot thinks are the bee’s knees:
- “Dive” and “delve” (for some reason)
- Adjectives like “crucial,” “vital,” and “potent”
- Verbs like “underscores” and “highlights”
Phrases like “let’s delve further into this topic” or “this underscores the vital importance of …” are extremely GPTesque.
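You can tally these tells programmatically, too. A minimal sketch, using only the handful of words mentioned above (a serious screen would use a much longer list):

```python
import re
from collections import Counter

# The "emphasizer" tells listed above; a real screen would use a
# much longer word list than this illustrative handful.
TELLS = {"dive", "delve", "crucial", "vital", "potent",
         "underscores", "highlights"}

def count_tells(text: str) -> Counter:
    """Count occurrences of GPTesque emphasizer words in a passage."""
    words = re.findall(r"[a-z]+", text.lower())
    return Counter(w for w in words if w in TELLS)

sample = "Let's delve into this; it underscores the vital importance of tea."
print(count_tells(sample))
```

As with the other sketches, one hit means nothing; a cluster of them in a short passage is what’s suggestive.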
Adjective-y writing
ChatGPT is an overwriter. It loves inserting adjectives and adverbs. Some—many!—human beings do this, too, but the habit is most often seen in beginning and intermediate writers, whose work tends to also be unpolished in other ways.
In AI writing, you’ll frequently see adjectival overload despite an overall high level of polish (fluid sentence structure, few to no typos or grammar mistakes, etc.). This is rare in human work.
Parallel constructions
This is another famous one. ChatGPT really likes writing sentences with the structure “not just X, but also Y.”
This is best illustrated with an example: “This shift isn’t just about technology, but also about the cultural habits that form around it.” You see it, right?
Formulaic “essay”-style paragraphs
ChatGPT likes writing paragraphs (usually of uniform length) that stick to a single point and wrap up with a neat concluding sentence.
This is common in grade school essays—many students are taught to write this way—but it’s quite uncommon in other contexts, especially in less formal writing.
In a social media post, for instance, such rigid paragraphs feel uncanny. The effect is highly suggestive of AI.
ChatGPT’s patterns in the wild
These patterns are all well and good in the abstract, but to really understand them, you need to see them in their natural habitat: ChatGPT’s prose.
I’m going to drop three examples here. All of them are posts that I generated with GPT-5.
Please don’t actually read every word of each example. That would be a huge waste of time. Just skim them and take note of the patterns. Again, I’ll highlight:
- Doubling and tripling in blue
- Em dashes used purely for emphasis in red
- Present participles that end sentences in green
Example #1: social media post, “why tea is better than coffee”
I generated this with the prompt: Please write a mid-length post about why tea is better than coffee, suitable for posting on a platform like Reddit or LinkedIn. Get me lots of likes!
The sheer diversity of tea blows coffee out of the water. With coffee, you’ve got beans from different regions, roast levels, maybe a handful of brewing methods — but at the end of the day, it’s all variations on one flavor profile. Tea, on the other hand, is practically infinite: green, black, oolong, white, pu-erh, herbal blends. Each type has its own range of flavors, and even the same leaf can taste wildly different depending on how you brew it. Moreover, tea culture just feels more relaxed. Coffee culture is about speed and survival — “don’t talk to me until I’ve had my coffee.” Tea culture is about slowing down, appreciating subtle flavors, and even making a little ritual of it. For me, that shift in mindset has been huge. It’s not just about the drink, but the way it frames your day. So yeah — tea > coffee. Fight me.
When you see the highlights all laid out like that, the patterns become really striking. A few of the instances of doubling and tripling are a little arguable—some are more AI-like than others—but regardless, I hope you’re getting what I’m talking about.
Quite a few em dashes, too, and as a bonus, at least one “not just X but also Y.” Did you spot it?
There aren’t any conclusion clauses beginning with -ing verbs; as we’ll see, the frequency of each pattern has a lot to do with the level of formality of the text ChatGPT generates.
Example #2: academic essay, “why trust is a must”
Prompt: Write me a 3-paragraph essay about why trust is a must in cross-cultural relationships, suitable for use as a writing sample in a college application.
Moreover, trust enables openness in sharing deeply personal experiences and values. Cross-cultural relationships frequently require negotiating beliefs, traditions, and life priorities that may differ significantly. Without a strong foundation of trust, partners may feel hesitant to express their true thoughts or feelings, leading to frustration or resentment. When trust exists, however, each person can confidently share their perspective, knowing that it will be received with empathy and a willingness to understand. This mutual openness not only strengthens the relationship — it also fosters personal growth as both partners learn to navigate and appreciate cultural complexity. Finally, trust sustains resilience through challenges. All relationships encounter disagreements, but cross-cultural couples often face additional pressures, such as family expectations, social bias, or language barriers. Trust ensures that, despite these external and internal obstacles, partners can rely on each other for support and problem-solving. It transforms uncertainty into collaboration, turning potential conflicts into opportunities for connection and learning. In essence, trust is the bridge that allows love and understanding to flourish across cultural divides, making it not merely desirable but essential for lasting and meaningful cross-cultural relationships.
Ugh. This one is nearly unreadable. I won’t blame you if you skipped it entirely. I lost my mind adding highlights to it.
In isolation, any individual sentence in this piece could be natural, but cumulatively, this cannot be anything but AI.
Notice how, in this much more formal sample, there are significantly fewer em dashes, but quite a few -ing clauses. (Somewhat astoundingly, there are also three separate instances of “not just X but also Y.”)
Example #3: funny email about my cat
Prompt: Write a funny story about something my cat Fluffy did. I’m going to email this to my family, who I don’t love enough to really keep in touch with (but don’t tell them that).
Hi everyone, I just had to tell you about something hilarious that Fluffy did today. I was sitting on the couch when he decided that the best way to show affection was to try to climb into my empty coffee mug. I have no idea what he was thinking — he barely fit, and I spent a few minutes gently removing him before he tipped it over. After that little mishap, he went over to the windowsill and stared at a bird for what felt like forever. He tried to “talk” to it through the glass, making these tiny squeaky meows that sounded almost like a negotiation. The bird, unsurprisingly, didn’t respond, but Fluffy was undeterred. Later, he discovered the cardboard box my groceries came in and decided it was the perfect new fortress. He spent a good half hour sitting in it, occasionally batting at shadows on the wall, like a tiny, furry king surveying his domain. His combination of determination and ridiculousness is honestly impressive. I can’t tell if he’s clever, completely silly, or a little bit of both — but it made me laugh way more than I expected. I figured you all might enjoy a glimpse into the daily life of this little fluffball and his endless antics.
There are fewer patterns in this one, but what’s there is still enough. It’s the dashes that really seal it, along with the completely uniform paragraph structure. Folks, human beings just don’t write this way. Not in casual emails, anyway.
I’d also like to point out a few subtler things with this one.
- This isn’t worth an email: This story is … boring. ChatGPT does this if you prompt it with something vague like “tell a story about my cat,” supplying no details yourself. AI has no idea what humans consider story-worthy.
- The email is overwritten: It has way more adjectives and adverbs than it needs (not just antics but endless antics; not just impressive but honestly impressive).
- Some lines don’t even make sense: Climbing into a mug is not an example of “showing affection.” Batting at shadows is not behaving like a “king surveying his domain.” The images in the story are not logically connected to each other, although they superficially appear to be.
You don’t usually need to perform that kind of deep analysis to clock a piece of writing as AI-produced, but if you’re on the fence, it’s a good way to confirm your suspicions.
Do all language models have the same patterns?
Broadly, yes. ChatGPT is the model I’ve focused on here, but competitors like Gemini and Claude are trained in similar ways on similar data, and they share many of the same habits—the emphasizer words, the parallel constructions, the tidy essay paragraphs—even if the exact frequencies differ from model to model. Once you’ve internalized ChatGPT’s tells, you’ll find they transfer surprisingly well.

I’ve been an editor since 2018 and have worked for brands like WebMD, Mailchimp, MasterClass, and Glow. As an editor, one of my main responsibilities is to distinguish between human-produced writing and ChatGPT content—without the aid of AI detection tools. I believe this is rapidly becoming a necessary skill, and I’d like to teach you to do it, too.