GenAI and the Tyranny of Big Hyphen

Rant time.

If you’ve been on the internet at all recently, you’ve probably noticed a lot of Generative A.I. (GenAI) guff.

Once, when I were a lad on the internet, all you could see were lush green fields of poorly-written clickbait, lovingly churned out by a human being. One whose soul and spirit died long ago, reducing them to an inane observation-spouting automaton, admittedly, but a fellow meaty organic, at least.

Now those fields are gone. At best, they’re withered wastelands. At worst, they’ve been replaced by the sheer faces of the grim, grey tower blocks that are poorly-written clickbait written by machines. Bland drivel and semantic noise are no longer the honoured preserve of human endeavour: TwitteX, Facebook, Reddit, tabloid news, and so on. Man-made brainrot risks extinction.

This has made a lot of people really miffed.

Thankfully, an unnamed hero – the John Connor of our time, I suppose – has discovered a 100% infallible way to identify GenAI written content. Obviously there’s then no attached impetus to do anything about it, because that would be absurd. If we all stopped visiting those sites or started creating our own lovingly crafted content for our fellow man, that’d just help solve the problem. What this is all about is complaining incessantly about the problem, doing nothing to fix it because you’re too busy harvesting social validation by excitedly denouncing it.

As it turns out, these stupid idiot robots are very easy to spot and will remain so until they figure out how to use the literary equivalent of that liquid metal. ChatGPT-1000, if you will. Until then, we have the upper hand, thanks to this One Weird Trick Transcendent Techno-Singularities Hate.

It turns out that you can always, without fail, most definitely spot GenAI content if it uses one of – and most especially both of – the following things:

  • Good grammar
  • A completely new and invented symbol known as the Em Dash

If you’re an angry illiterate who resents anyone more coherent than the bog-dwelling misanthropes whose weird punctuation-less, Original Research rants constitute the only non-advertising content on Facebook, now is probably a safe jumping-off point.

Know your enemy (because it’s probably you)

I like to think that my readership are generally well-educated, rational types, who drop in for the vicarious thrill of seeing someone else have a vituperative tantrum on their behalf. Possibly – but not definitely – with a smattering of amusing hyperbole.

Which means many of you may be thinking something along the lines of “but isn’t GenAI trained on human-created content and instructed to follow grammatical rules?”

Yes. That’s exactly how it works. But, for those with a slightly less firm grasp on this, a quick explanation.

GenAI writes how it does because it has learned from how we write, and it is typically more diligent about good grammar because it is hard-coded to be. It is (usually) forced to follow good grammatical form, and it uses the Em Dash because the real-world data sets it was trained on included many uses of the Em Dash by actual humans; this is where it learned how to write.

It’s exactly this trick that makes GenAI – or, more specifically, Large Language Models (LLMs) – so impressive, because they seem intelligent. They aren’t. They never have been. They’re just uncannily good at predicting what we would do in a given situation, so long as that situation is “add symbols or words to something that will make sense to people who know how to read.”

But it’s only impressive because we’re basically vain monkeys who equate “looking like us” with “being impressive”. Or, in this case, with being intelligent.

The reason it looks like us is that we made it look like us and, when doing so, decided to make it look like us at our Graduate Dissertation best, rather than at our Nazi Tweets worst. But, in many senses, GenAI is just fancy autocorrect. It’s predictive text on steroids. It isn’t magic, it isn’t sentient, and it doesn’t understand or know anything.
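If that “predictive text on steroids” claim sounds glib, here’s a deliberately crude sketch of the underlying idea: a bigram model that counts which word follows which in some training text and always predicts the most common successor. Everything here is illustrative (the corpus, the function names, all of it); real LLMs use neural networks and probability distributions over enormous token vocabularies, but the core task is the same one: guess what comes next.

```python
from collections import Counter, defaultdict

# Toy training corpus: the "human-created content" our predictor learns from.
corpus = "the cat sat on the mat and the cat saw the cat".split()

# Count, for each word, which words follow it and how often.
successors = defaultdict(Counter)
for word, next_word in zip(corpus, corpus[1:]):
    successors[word][next_word] += 1

def predict(word):
    """Return the word most often seen after `word` in the corpus."""
    return successors[word].most_common(1)[0][0]

# "the" is followed by "cat" three times and "mat" once, so:
print(predict("the"))  # prints "cat"
```

It never understood anything about cats or mats; it just mimicked the statistics of its training data. Scale that idea up by several trillion parameters and you get something that mimics Graduate Dissertation prose, Em Dashes included.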

That might sound like I’m suggesting it isn’t a really clever bit of technology. I’m not. It’s an innovative evolution of technology that has been around for years, but that is all it is.

I’m also not saying that GenAI content isn’t an issue. It’s filling the internet with mediocre, stagnant, uninspiring blather in the hope of driving more clicks and therefore more advertising revenue for soulless corporations that view humanity and its sum creative output as nothing more than a commodity to be monetised. That’s our job.

And it’s undermining democracy and opening the world up to violent oppression by sociopathic technocrats, making it impossible for any individual to have the slightest clue what is going on beyond the widely-broadcast fact they should be fucking livid about what They are up to and that only the Grind The Electorate Into Nutrient Paste For Our Insectoid Overlords Party can do anything about Them. Which… is also our job.

But you get the point: if the world is going to be ruined, then we should be the ones that get to ruin it. It isn’t some disposable plaything for Clippy with rizz.

How to influence yourself and not make friends with people

Being able to tell fact from fiction, separate human creation from GenAI autofill, and generally navigate an increasingly abstracted, post-consensus world is really important. But pretending that we’ve found some sort of at-a-glance diagnostic when we very much haven’t is dangerous. It’s like stuffing flowers in our pockets to ward off the miasma in the middle of an outbreak of plague. Sure, it might be nice to feel less helpless, but in pretending we’re now actually protected from deadly buboes, we make ourselves more likely to get deadly buboes.

If you think you have an infallible tool for spotting AI bullshit, fake news, misinformation, or anything else, you’re far more likely to do the exact opposite of that. Being suspicious, learning nuance, and maintaining a healthy self-awareness of our own fallibility are far better tools than “my god, that Reddit post contains two semi-colons, six em dashes and an Oxford comma! No mere human could possibly grammar that well. Burn the witch!”

And when the filter you’re using also stops you listening to anything said by intelligent people with a good command of written English, things get even worse. Inconveniently, these tend to be the sort of people you want to be explaining complex situations to the masses in the first place. As opposed to, say, some ethno-nationalist incel with 18,422 posts on Quora. All of which are 48-paragraph screeds on how, actually, the secret academics don’t want you to know is that the CIA put nanites in aftershave to make men liberal and the Holocaust didn’t happen.

I would say that though, wouldn’t I? After all, I have repeatedly used Em Dashes, so am definitively an incorporeal synthetic intelligence trying to mislead you into clicking on yet another link about 17 Times Sexy Pot Plants Shocked Boomers On TikTok. Then, when your brain is fully mush, I can sup it through my cybernetic proboscis to use as bioreactor fuel.

But I can also suggest that Marjorie Taylor Greene is a shambling fuckglob of pseudo-sentient tumours. Could GenAI do that? Probably not, because everyone’s frantically trying to flog it to corporations on the grounds it won’t cause HR departments to pause their ceaseless gnawing and launch another Meatling Hunt.

Perhaps that’s the real secret sauce; we should just all be wildly abusive to one another at all times, with the maximum amount of creative swearing. But should GenAI ever learn to call a public figure a festering piss mouse, the war is already lost. Prepare yourselves for a future where the password to every enclave of the human resistance is “badger cunt”.

The truth is, these ‘tells’ aren’t actually useful at all. What they are is an easy cognitive shortcut around the real issues central to the whole discussion: paranoia and insecurity.

Fear & Self-Loathing

It’s no secret that the world has gone to absolute shrew shit over the past decade or so. It’s impossible and pointless to follow politics because everyone is lying about everything all of the time. Now that toasters can churn out Substack articles about why we should privatise bones or how the use of uppercase letters is a form of class warfare, we can’t even be sure we’re being lied to by a human.

It’s entirely rational and justified to be paranoid about all this. We should have been a lot more paranoid about it twenty years ago, when we handed the keys to the noösphere to social media and assumed they’d use them responsibly.

However, it isn’t entirely rational and justified to act as an amplifier for blithering gibberish, sans evidence of any kind, simply because, when you heard it from someone with really good engagement rates on LinkedIn, it made a vague sort of sense.

The fact it made you feel better equipped to navigate the dizzying terror of modern life has nothing to do with whether it’s true or not. Something isn’t true because it makes you feel better about the world. Or yourself. Just because it isn’t how you do things, doesn’t mean it is bad. That’s not a personal failing, by the way – I’m not pointing the finger and laughing at the morons. We are all neurotic tangles of cognitive biases and questionable evolutionary quirks, including one which makes us more disposed to believe things that make us feel good.

That’s why self-awareness is such a valuable trait to foster.

But the people who’re spending altogether too much time online accusing anything more than half-literate of being GenAI content presumably don’t think their own writing seems like that. That’s how differentiating between things works. For example:

  • I can tell my chair isn’t sentient because it doesn’t show any of the signs of sentience I’d associate with myself, who I know to be sentient because a French transvestite told me so.
  • I can recognise an American by their accent, which sounds different to my British accent.
  • If I’m assaulted by a wobbling fleshsack of hungry excrement from the howling nightmare realm beyond reality, I can tell it isn’t a Tory MP because it has ventured into this reality.

And so on. I can only assume these GenAI Detectorists don’t frequently use the Em Dash and their posts aren’t exemplars of grammatical purity. Their observation boils down to the accusation “content that doesn’t seem like mine is either written by a literate person or a GenAI.” And they somehow do this whilst remaining apparently unaware of the implication of their reasoning.

Which brings us on to the other, less legitimate issue: insecurity.

A lot of people seem to intensely dislike anything that makes them feel inferior in any way. They don’t want there to be a point of comparison that they might have to live up to. It’s fairly understandable, but again: self-awareness. Important. Have some.

However, many, many people do not, so we end up with the current climate of rising insanity. A world where NASA is a hoax, because I’m a moron who thinks the Earth is flat and if NASA’s achievements were real I’d have to deal with a lot of very inconvenient evidence to the contrary. I want my paragraph-less raving about how 5G is melting the Great Ice Wall and releasing the demons trapped within to be on equal footing with, say, actual footage of the Earth being not flat.

We all have these narratives in some form or another, although typically on a far lesser, more personal scale. Maybe a mild resentment at a more successful friend, avoidance of spending time with a more socially gracious sibling, or jealousy of someone who you perceive as possessing greater talent in a domain you care about. Like if they can fit two whole scotch eggs in their mouth at once or something. These are normal.

What isn’t normal is latching onto any old bollocks you hear out in the wild, just because it makes you feel better about yourself. Be that the internet, the pub, the workplace, the Oval Office, or wherever: just hearing it and having a feeling that maybe the world is a bit easier to understand, and that your relative place in it just went up a notch, is not a good reason to believe it.

Pharmacology is difficult. People study for years to become moderately well-versed in one tiny subsection of it. You shouldn’t expect to understand it all from 20 minutes on Wikipedia and the fact you don’t isn’t evidence that vaccinations are a lie. Physics, engineering, and mathematics are vast, incredibly complex fields of study; it’s totally fine to not understand how the hell we put people in space, but that lack of understanding doesn’t mean NASA is a front for the Illuminati.

And so on. Which makes it all the more depressing that we’re now at a stage where people are saying good grammar and the use of punctuation are a sign of something being written by an artificial intelligence. Because the alternative is to recognise that perhaps written communication isn’t their forte. That they are not necessarily the blazing supernova of eloquence and repartee they’d like to imagine themselves to be. Which, just to be absolutely clear, is fine.

Technology has consistently been developed to do things better than we can. From cars to calculators, we’ve filled the world with tools that are better at A Thing than we are. That doesn’t mean people can’t still be good at whatever That Thing is, even if you personally have caved to laziness and become increasingly useless at it.

But, that off my chest, I’d just like to reiterate the core point of all this: we can’t reliably identify GenAI simply by looking out for anything more exotic than a comma.

And, as a bonus thought: perhaps if we spent less time hate-scrolling social media in the hopes of defrauding our dopamine system, and a bit more time actually speaking to real people – with brains and passions and interests and feelings – we might find it easier to differentiate between human and robot.