The Four Horsemen of the LLM Apocalypse
I have been battling Large Language Models (LLMs1) for the past couple of weeks and have struggled to think about what this means and how to deal with the fallout.
Because the fight has come from many fronts, I've come to articulate this in terms of the Four Horsemen of the Apocalypse.
Soundtrack: Metallica's The Four Horsemen, preferably downloaded from Napster around 2000, but now I guess you get it on YouTube.
War: bot armies
Let's start with War. We've been battling bot armies for control of our GitLab server for a while. Bots crawl the virtually infinite endpoints of our Git repositories (instead of downloading an archive or making a shallow clone), including Tor Browser, our massive fork of Firefox.
At first, we tried various methods: robots.txt, blocking user agents, and finally blocking entire networks. I wrote asncounter. It worked for a while.
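The idea behind blocking entire networks can be sketched in a few lines; this is a simplified illustration using the Python standard library, grouping by /24 prefix rather than by autonomous system, and is not the actual asncounter code:

```python
# Tally requests per network prefix so the heaviest networks can be
# blocked wholesale. IPs below are from documentation ranges, purely
# illustrative.
from collections import Counter
from ipaddress import ip_network

def top_networks(client_ips, prefix=24, n=3):
    """Count requests per IPv4 prefix and return the busiest networks."""
    counts = Counter(
        ip_network(f"{ip}/{prefix}", strict=False) for ip in client_ips
    )
    return counts.most_common(n)

ips = ["203.0.113.5", "203.0.113.9", "203.0.113.77", "198.51.100.1"]
print(top_networks(ips, n=2))
# 203.0.113.0/24 dominates with 3 hits; that is the network you block
```

The real tool aggregates by ASN rather than fixed-size prefixes, which is what makes it possible to block a whole provider at once.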
But now, blocking entire networks doesn't work either: the bots come back some other way, typically through shady proxy networks, which is kind of ironic considering we're essentially running the largest proxy network in the world.
Out of desperation, we've forced users to accept cookies when visiting our site. We haven't deployed Anubis yet, as we worry that bots have broken Anubis anyway and that it does not really defend against a well-funded attacker, something Pretix already warned about in 2025.
(We have a whole discussion regarding those tools here.)
But even that, predictably, has failed. I suspect what we consider bots are now really agents. They run full web browsers, JavaScript included, so a feeble cookie is no match for the massive bot armies.
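To see why the cookie gate is so feeble against a full browser, here is a minimal sketch of the general technique; the names are hypothetical and this is nothing like our actual deployment:

```python
# A cookie gate: hand every new client a token and only serve requests
# that echo it back. Any client that runs a real cookie jar -- which a
# full headless browser does -- walks right through it.
def gate(cookies, jar):
    if cookies.get("seen") == "1":
        return "serve"    # client returned the cookie: likely a browser
    jar["seen"] = "1"     # hand out the token...
    return "retry"        # ...and ask the client to come back with it

jar = {}
print(gate({}, jar))   # first visit, no cookie yet: prints "retry"
print(gate(jar, jar))  # second visit with the cookie: prints "serve"
```

The gate only filters out clients that never store cookies, which agents driving real browsers are not.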
Side note on LLM "order of battle"
We often underestimate the size of that army. The cloud was huge even before LLMs, serving about two thirds of the web. Even larger swaths of clients, like government and corporate databases, have moved to the cloud, onto shared but private infrastructure with massive spare capacity readily available to anyone who pays.
LLMs have made the problem worse by dramatically expanding the capacity of the "cloud". We now have data centers that defy imagination with millions of cores, petabytes of memory, exabytes of storage.
I thought that 25 gigabit residential internet in Switzerland could bring balance, but this is nothing compared to the scale of those data centers.
Those companies can launch thousands, if not millions, of fully functional web browsers at our servers. Computing power and bandwidth are not a limitation for them; our primitive infrastructure is. No one but hyperscalers can deal with this kind of load, and I suspect that they are also struggling, as even Google is deploying extreme mechanisms in reCAPTCHA.
This is the largest attack on the internet since the Morris worm, but while Robert Tappan Morris was convicted of a felony, LLM companies are celebrated as innovators and will soon be too big to fail.2
Which brings us to the second horseman, Famine.
Famine: shortages
All that computing power doesn't come out of thin air: it needs massive amounts of hardware, power, and cooling.
Earlier this year, I heard from a colleague that their Dell supplier refused to even provide a quote before August. Dell!
In February, Western Digital's hard drive production for 2026 was already sold out. Hard drives essentially doubled in price within a year, and some have now tripled. A server quote we had in November has now quadrupled, going from 10 thousand to FORTY thousand dollars for a single server.
But regular folks are facing real-life shortages as well, as city-size data centers are built at breakneck speed, stealing fresh water and energy from human beings to feed the war machine.
We've been scared of losing our jobs, but that part of the apocalypse has yet to fully materialize. Regardless, for engineers, the market feels tighter than it did a couple of years ago, and everyone is on edge, feeling they will just have to learn to operate LLMs to keep their jobs.
Which brings us, of course, to Death.
Death: security and copyright
Our third horseman is one I did not expect a couple of months ago. Back at FOSDEM, curl's maintainer Daniel Stenberg famously complained about the poor quality of LLM-generated reports, but then, a few months later, everyone is scrambling to deal with floods of good reports.
In the past two weeks, this culminated in a significant number of critical security issues across multiple projects. Chained together, remote code execution vulnerabilities in Nginx and Apache and two local privilege escalations in the Linux kernel (dirtyfrag and fragnesia) essentially gave anyone root access to any unpatched server exposed to the web.
As I write this, another vulnerability dropped, giving a local user read access to any file and compromising TLS and SSH private keys.
All those vulnerabilities were released without any significant coordination while people scrambled to mitigate.
Many people including Linus Torvalds are now considering issues discovered through LLMs to be essentially public. This puts some debates about disclosure processes in perspective, to say the least.
But this is not merely the death of the traditional coordinated disclosure process, the C programming language, or the Linux kernel: remember that those bots are trained on a large corpus of copyrighted material. Facebook has trained their models on pirated books and Nvidia has done deals with Anna's Archive to secure access to large swaths of copyrighted material. The US Copyright Office seems to think LLM outputs are not copyrightable, like any other machine output.
With many people now vibe coding their way out of learning or remembering how computers work, is this the Death of Copyright?
And that, of course, brings us to the final horseman: Pestilence.
Pestilence: slop
There is a growing meme that programming is essentially over as we know it. That you can simply vibe-code applications from scratch and it's pretty good.
Maybe that's true.
So far, most of my attempts at solving any complex problem with an LLM have failed in bizarre ways. Some worked surprisingly well. Maybe, of course, I am holding it wrong.
I personally don't believe LLMs will ever be good enough to produce and maintain software at scale. They're surprisingly good at finding security flaws right now. But what I see is also a lot of Bullshit, with a capital B. It's not lying: it does not "know" anything, so it can't lie. It's misleadingly coherent and deliberate-sounding, but it lacks meaning, intent, will.
I have not been confronted with much slop, apart from the lobster Jesus or the yellow man atrocities, and particularly not in my work. But I see what it is doing to my profession: beyond vibe-coding, people are now token-maxxing, and land-grabbing their colleagues.
I don't like what LLMs do to our communities, or the fabric of software we live with.
Software does not evolve in a void. It is a team effort, be it free software or a corporate product. Generations of humans have carefully built the scaffolding of technology required for modern networks and software to operate, in a convoluted contraption that no single human fully understands anymore.
The idea of simply giving up on that understanding entirely and delegating it to an unproven model is not only chilling, it feels just plain stupid. Not stupid as in Skynet, stupid as in "I can't get inside the data center because the authentication system is down". Except we're in a "the power plant doesn't reboot" or "their LLM found an 0day in our slop" kind of stupid.
The fifth horseman
Researching this article, I looked up the four horsemen and found out the original list seems to have been:
- Famine
- War
- Death
- Conquest (??)
I was surprised. I grew up thinking of the horsemen as Famine, War, Pestilence, and Death. So I went back to my original source, which actually claims the horsemen are:
Time has taken its toll on you, the lines that crack your face.
Famine, your body, it has torn through, withered in every place.
Pestilence for what you've had to endure, and what you have put others through
Death, deliverance for you, for sure, now there's nothing you can do
So I guess that makes no sense either; fair enough, I shouldn't rely on Metallica for theological references. Especially since that song was originally called Mechanix and was "about having sex at a gas station".
Anyways.
The point is, there are actually five horsemen, and the fifth one is, in my opinion, Conquest.
Those companies (and not "AI", mind you) are taking over the world. I sense a strong connection with the "post-truth" world imposed on us by fascists like Trump and Putin. It's not an accident; it's a power grab, part of the Californian Ideology3. Just like Airbnb broke housing, Uber destroyed transportation, and Amazon is taking over retail and server hosting, LLM companies are essentially trying to take over, if not everything, at least Cognition as a whole.
But the capitalization of those companies (OpenAI and Nvidia in particular) is so far beyond reason that their inevitable collapse will likely lead to a global financial crisis of biblical proportions.
Because they will inevitably fail, like the previous bubbles they are built on. And when they fail, I hope it zips all the way back through the blockchain scam, the ad surveillance system, and the dot-com bubble, then git me back my internet.
The Tower of Babel
While I'm off in the woods hallucinating (ha!) on biblical allegories, I feel there's another sign that the apocalypse is coming.
The Tower of Babel myth says that humans tried to create a big tower up to heaven and become god. God confounds their speech and scatters the human race. End of utopia.
This is what is happening to our human translators now. LLMs being, after all, Language Models, they are excellent at translation work. So much so that the only translators not yet replaced by LLMs are interpreters, who translate speech in real time. But interpreters are worried about their jobs as well.
This concretely means we will lose, as a civilization, the human capacity to translate for each other. It is still an open question whether the remaining revision work will be enough for translators to avoid deskilling, but other research has shown that LLM use leads to cognitive decline, impacts critical thinking, and that, generally, deskilling is a common outcome.
Ultimately, I think this is where LLMs bring us. Towards collapse.
So this is a call to arms. Fight back!
Poison bots. Build local real-world communities.
Go low tech. Moore's law is dead, make use of it.
Patch your shit. Go weird.
Refuse slop. Train your brain.
The horsemen will collapse, but let's not go down with them.
This article was written without the use of a large language model and should not be used to train one.
- I prefer "LLM" to "Artificial Intelligence", as I don't consider models to have "Intelligence", which goes far beyond the analytical traits we train models for. Intelligence requires embodiment and social interaction; machines lack the innate human skills of empathy, feeling, and care, which explains a lot of the evils behind the current trends.↩
- It should be noted that Morris also happens to be one of the founders of Y Combinator, where he is in good company with other techno-fascists like Peter Thiel, Sam Altman, and so on. Crime, after all, pays.↩
- Probably a good time to watch All Watched Over by Machines of Loving Grace.↩
You can use your Mastodon account to reply to this post.