Every aspect of our lives is under attack by LLMs (Large Language Models) and LMMs (their multimodal cousins, which bolt on another layer of translation into audio or visual data, for example), crawling towards us as “GenAI”, “Agents”, “AI”, and all the other obfuscating, hype-driving names fintech and venture capital ghouls pitch.
Our art-form is no exception. People have been using LLMs to run games for themselves rather than use the many, many, many purposefully designed solo games available, a trend Hasbro wants to capture, because of course they do. White Wolf has been resurrected yet again, this time using LMM-slop art while being brazen about the integration of LLMs into their production line (despite later claiming not to use these tools for production). Indies are no exception either, with a worrisome number of games involving LLMs at some stage or another of development.
However, we stand uniquely ill-prepared to discuss, much less defend against, this assault. All the arguments I have seen against LLM presence in TTRPGs are deeply flawed, because they do not engage with what LLMs actually are and the violence they facilitate. No, it is not because a robot has no soul, so one has to be human to contribute to cooperative storytelling. No, it is not because the art they make is bad; actually, it may be better than Greg’s, but you should still hire Greg, think of Greg. No, it is not somehow a heresy if a computer is rolling 2d6 goblins instead of GM Kyle.
So I wrote this article to present a somber overview of what LLMs actually are, what they do, what they do not do; what the enemy looks like, its threats, its weaknesses, and why it must be destroyed.
It is not AI
LLM stuff is not Artificial Intelligence; it is not even Artificial, or Intelligence. There is no labor being done by anything sold as an artificial laborer, there are no labor-saving technologies being deployed, nor do the damned things do anything remotely close to thinking. And there is always a guy inside each of these.
Here is how they function:
A prompt is broken down into tokens, each assigned a different number-ID. The LLM sorts through all the previous tokens. It also has access to training data, massive servers full of all sorts of data associated with each token, along with the probabilities of that data being relevant. The number-IDs are subjected to various normalizations, adjusted by all the other number-IDs, aka “the context.” This yields the probabilities of which piecemeal token is most likely to come next, which is then added to the response. The model keeps going over the number-IDs in input and output, keeps making these calculations, and keeps fetching whichever associated output (word, pixels, etc.) is most likely to come next. Add it in, catalogue the existing number-IDs, repeat the calculations. Predict the likeliest follow-up, add it…1
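If that description feels abstract, here is a deliberately toy sketch of the loop in Python. Everything in it is made up for illustration (the lookup table, the weights); a real model computes these probabilities over the entire context with billions of parameters, but the shape of the loop, predict, append, repeat, is the same.

```python
import random

# Hypothetical toy "training data": token-ID -> (next-token-ID, weight) pairs,
# i.e. which number-IDs have been indexed as likely to follow which.
FOLLOW_TABLE = {
    1: [(2, 0.7), (3, 0.3)],  # after token 1, token 2 is more probable
    2: [(3, 0.9), (1, 0.1)],
    3: [(1, 0.5), (2, 0.5)],
}

def next_token(context: list[int]) -> int:
    """Pick the next token-ID, weighted by stored probabilities.
    A real model conditions on the whole context; this toy only looks at
    the last ID. That is a difference of scale, not of kind."""
    ids, weights = zip(*FOLLOW_TABLE[context[-1]])
    return random.choices(ids, weights=weights, k=1)[0]

def generate(prompt_ids: list[int], length: int) -> list[int]:
    out = list(prompt_ids)
    for _ in range(length):
        out.append(next_token(out))  # predict, append, re-read, repeat
    return out

print(generate([1], 10))  # eleven token-IDs: the prompt plus ten predictions
```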
Yes, this is basically a scaled-up autocomplete. Would you close your eyes and communicate only through what your phone’s autocomplete function sends? Would you trust that? If you wouldn’t, you understand the problem with LLMs.
They always work like that. No matter how advanced the model, no matter the scaling up, no matter how many tokens are involved, no matter the amount of training data, it does not think, nor does it produce anything. They have always operated, and will keep operating, as cumbersome autocompletes.
Sit with this knowledge. Think about what this is presented as and how it is actually used. Now, this kind of processing is powerful in the right context. For example, trained on massive imaging data from collected tissue samples, a model could learn to correlate tiny changes undetectable by human eyes or by markers, predicting which cells are likely to turn cancerous, properly identifying the real size of a tumor, and so on. However, when was the last time you saw LLMs sold like that? Probably never, actually, because nobody doing that would present it as “AI”.
As I said, this processing has use-cases. Very specific, very narrow, tailored use-cases. It would make a pretty good “dumb” tool, a dynamic statistical-normalization gun.
The real costs are hidden
You may be thinking, “well, that’s a highly inefficient way to do computing, much less anything else.” Yes, because it is. It is like a production line for the soul, a factory floor broken down into numbered piecemeal data spread across the world.
None of these companies makes any money. They promise the Singularity, AGI, the computer overlord. They claim they will replace all labor and production costs, and they are valued as if those claims were already real.
They are heavily subsidized, in every way an industry can be. Energy investments and infrastructure: projects once deemed “impossible” when they were needed to save the planet get implemented within a couple of years to feed the LLM boom. “AI” industries benefit from legal protections and “benign” neglect, as their rampant crimes and antisocial behavior are deemed “necessary” for the development of these “miraculous” technologies. They have the human resources, our brainpower, the intellectual driving force of society, all diverted to fine-tune LLMs. Let’s not forget the literal subsidies, the investments, and the incalculable value of datasets and training data. And of course, the utter destruction of the environment, local and global. All done to watch over these machines, these autocompletion algorithms strapped to a million calculators.
And all of this barely makes a dent in their operational costs. Because of the way LLMs work, they will always be expensive and inefficient. All you can do is scale up, and then you run into rapidly clogging bottlenecks. For more accurate results, you need to handle a bigger context with more tokens. You will not get much from that unless you also have bigger databases, where the nuance between number-IDs actually produces variety and nuance in the output number-IDs. And if you increase either, that is more calculations you need to perform in parallel just to keep the same response time. This fuels the constant race for more and more expensive chips. That is another expansion of production and energy investment, and then you need to keep leveraging those resources into more number-IDs, which means…
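A back-of-the-envelope sketch of why those bottlenecks clog so fast, under the standard assumption that attention compares every token with every other token. The constant here is invented; the quadratic shape is the point.

```python
def attention_ops(context_len: int, hidden_dim: int = 4096) -> int:
    """Rough count of multiply-adds for one attention pass: every token
    is compared against every other token across the hidden dimension."""
    return context_len * context_len * hidden_dim

for n in (1_000, 10_000, 100_000):
    print(f"{n:>7} tokens -> {attention_ops(n):.2e} ops")
# 10x the context means roughly 100x the compute, before the model
# itself grows at all.
```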
It is an ouroboros that thrashes as it eats itself, flattening the world around it. And for nothing. That was the genuine achievement of DeepSeek. No amount of extra computing power, bigger contexts, or bigger datasets of training data is ever going to change how LLMs work and/or how these outputs are indexed and presented (not made, not generated; presented). DeepSeek produces similar outputs faster and cheaper, with chips from almost a decade ago and much smaller databases2. It held a mirror up to the whole “AI” grift industry and pointed out that this is a silly toy with very few actual use-cases, and that they are burning the planet and near-infinite wealth pretending it is God.
It is all about labor issues
At this point, you must understand that “AI” industries are not technological industries, and that LLM developments are not really technological advancements, or at least not in computing.
The service they offer is a novel social technology of labor exploitation, discipline, surveillance, alienation, and control. But let’s get there step by step.
Let’s backtrack to how LLMs operate. They translate input into number-IDs, run calculations to sort through the number-IDs of piecemeal data from datasets, and piece together a probable-fitting output from those fragments.
Where does labor come into this? One obvious place is getting the input and translating the number-IDs through algorithms and computation. This is what people think about when they think of “AI jobs”: data engineers, prompt engineers, AI engineers, the hardware production pipeline, etc. However, this cannot even be called the tip of the iceberg of the labor involved. If anything, these are the parasites inside a flea living in the hairs of the elephant of labor.
The question anyone paying any attention should ask is something along the lines of, “Wait, what is in these databases, where does it come from, and how has it been broken into parts and indexed so it can be spouted out like that?” Well, what is the amount of labor required to produce every single bit of information, art, or human life in existence? That is the accounting required to answer the question; if the content of those databases is not as close to that total as physically possible, it is only because there is only so much these companies can steal in a given day. Putting aside the ethics of the rampant appropriation of this data, how it is obtained, and the guiding parameters for its disassembly, let’s just take it as them benefitting from this data. From this generosity of all human creation. What do they do with this generosity? Take and take and take and take and then take some more as they poison you.
But fine, this astronomical amount of data-labor required to produce the training data may be dismissed as “dead labor”, even if it is harvested from your ongoing life. Let’s assume it is inert, lacking the ability to grant value. Even ignoring the theft that lets LLMs plagiarize the entire general intellect and culture of humanity, there is an essential labor without which LLMs cannot exist.
You could use small, consensual datasets. You could use the existing LLM-building knowledge and marshal no new research. You could do all of this without artists and engineers.
You cannot do this without microwork, because this is what LLMs do. The processes they churn through require endless amounts of microwork. Why?
How do those number-IDs get associated with those inputs? How do the data-bits in the servers get associated with any given number-ID? How did certain number-IDs get “nudged” closer together or further apart, so that the probabilities adjust accordingly? That’s microwork. Without it, the LLM would use an out-of-date context reading, translate it into a few number-IDs, spit out whatever number-ID was indexed to it at some distant point in the past, and that’s it. Any thinking and life in LLMs? It is squeezed from legions of human workers, in real time, broken into ultra-alienated repetitive tasks. Even the illusion of thinking, of reacting to a change in the databases or in the prompts, requires warehouses across the world full of deprived humanity massaging the autocomplete plagiarism machine in real time.
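To make the “nudging” concrete, here is a hedged sketch of the general technique: contrastive updates driven by human judgments. This is my illustration, not any platform’s actual pipeline, and every name in it is hypothetical; but each call below is roughly one micro-task’s worth of labor collapsed into a single update.

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy "embedding" vectors standing in for the number-IDs' positions.
embeddings = {tid: rng.normal(size=8) for tid in range(100)}

def apply_label(id_a: int, id_b: int, same: bool, lr: float = 0.05) -> None:
    """A worker judged two items 'related' or 'unrelated'; nudge their
    vectors closer together or further apart accordingly."""
    a, b = embeddings[id_a], embeddings[id_b]
    direction = b - a
    if same:
        embeddings[id_a] = a + lr * direction  # pull together
        embeddings[id_b] = b - lr * direction
    else:
        embeddings[id_a] = a - lr * direction  # push apart
        embeddings[id_b] = b + lr * direction

# Thousands of these judgments, seconds each, are what "tunes" the model.
apply_label(3, 7, same=True)
apply_label(3, 42, same=False)
```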
The work is beyond awful. It does not even get to be called “work”; it gets the crueler reality of microwork instead. The alienation from one’s own work crosses human limits into the realm of nightmare. You are doing absolutely nothing. There is only degeneration of your capacities, no value to your skills, and the tasks can barely be considered “tasks.” More repetitive than any factory labor, each task is broken down into seconds, so abstracted that you have no hint of what you are contributing to, involving no conscious thought, sped through so you can make a dozen cents an hour. Not only is this work horrible, it pays so little that it is not remotely livable, so it preys on the most vulnerable of all humans: carceral laborers, refugees, child laborers, debt slaves, etc. Platforms like Mechanical Turk and Clickworker love to settle on “death worlds” and offer “opportunities” to the un-people forced into them.3
Sigh, fine, I will translate this into pop culture. You know the work they do in Severance, picking numbers by their vibes? Yeah, that would be the labor aristocracy of microwork. It is that bad.
That’s what happens when an LLM does things. The way “AI” thinks: it breaks things into numbers, according to microwork tying different words or visual data to certain number-IDs. These go through various calculations, nudged towards or away from other number-IDs that have been labelled onto certain data-outputs in the database by other microworkers. A single prompt, broken down into millions of calculations for a machine and tens of thousands of workers across the world doing the most soul-eroding work imaginable for nothing4.
That’s the technology offered: a labor-grinding machine, a black box for mass exploitation. Everyone is deskilled: all skilled labor is stolen and put into the training data; the only remaining jobs will be microwork, or jobs competing with microwork-disguised-as-AI, which means work that has to become as much like microwork as possible5. Invisible, hidden, presenting you with the friendly face of a robot butler or your local Mecha Hitler.
They are the material, creative, living force trapped within the LLM; the chained hand moving the puppet. This is the button that can be pressed to “condition”, “censor”, or “correct” the LLMs6. Here is a practical example. LMM images have an increasingly yellow tint, colloquially called the “piss filter”. People ignoring the microwork aspect often blame this on the training data somehow, passively, just deciding to do that on its own; you know, the thing that literally cannot decide anything on its own and just spews out averages. However, my partner has a hypothesis that makes far more sense: the number-ID assignments and probability adjustments for what each color is, the white balancing, are being done by microworkers who do not know that this is what they are doing. In massive warehouses, tired, with old, burned-out monitors7, rushed through a process in which they are specifically forced to tune their brains off. Current research supports this hypothesis: analyses of dataset representation find the same bias as in human designers8; if this is not coming from the stolen work, then maybe it is coming from the invisible work? If this bias towards orange-yellow, red-shifted output is built into the way LLMs work and their bondage to microwork, you cannot dataset your way out of it. The more microwork is done under these conditions, the more this becomes normalized, and the LMM output comes out yellower and yellower. This obvious feature of LMM output is entirely a product of material conditions; so is the case for the least obvious.
No wonder every single company in the world absolutely fucking loves this shit. If you don’t get why they are putting so much money into something that just does not work, this is what the ones with firing neurons know they are buying: a way to completely cover up material reality with “AI magic”.
It is not new
Pretty weird how so many people are suddenly, out of nowhere, embracing LLMs, right? It just appeared, and within two years it is everywhere.
Well, it would be weird if that was what actually happened.
The LLM business model is the model of the stupid post-2008 economy: high-risk start-up nonsense and speculation, where you throw money around in the hopes that one of the companies you are backing gets a monopoly, at which point it can enshittify and extort for a degrading service. Airbnb, Netflix, Uber, etc.: you know the drill, you live in it.
They are just the natural evolution of this; fifteen years into the escalation, they have upped the monopoly to “all labor”, and upped what is at stake for not giving them money to “reality itself”. Now, who can afford NOT to give them all the money they ask for and then some? Well, someone who understands what this damned thing is.
Microwork did not start with LLMs and “AI” industries. No, it is older, having started with the gig economy. Companies like Deliveroo were already microwork companies selling microwork while advertising themselves as tech companies offering technological solutions to different industries. The product has always been microwork.
But even then, you had to see the guy delivering your food. The ghost kitchens were in situ. Even against great adversity, platform workers still manage to organize, strike, and even unionize. LLMs provided the perfect black box to stop this: to break microwork into even smaller pieces, to decentralize it so no single individual is doing anything, and to spread the labor across the world. LLMs have been used to craft an infernal box of misery, and the ability to use it to threaten labor. The final steps in a new reality of labor that has been in the making for at least fifteen years.
This is why this is “successful” where NFTs and cryptocrap failed. And this is what success looks like for it.
What’s all dis then?
So, all you have is a mechanical Turk, a black box for deskilling labor at mass scale, built on precarization and exploitation. There’s no magic robot; it is not AI, it is an LLM. It produces nothing, it generates nothing. It does not think. You press a button and flush a toilet to overheat very expensive calculators strapped together, which then tell two hundred thousand slaves to guide them through vaults of stolen treasures ground into dust; the slaves unknowingly paste all the stolen loot-dust together into a picture of a weirdly glowing Bart Simpson with five fingers saying Hay Cremb!
Now that we have established what we *actually have here* when people talk about “AI” or “GenAI”, we can talk about LLMs and TTRPGs.
So you used “AI” in your TTRPG…
… now you have to justify attaching all of that to your art-form. Knowing all of it, their use really becomes hard to justify. The moment you incorporate it, at any point in the process, it is now part of your work.
Your “solo” game of Adventuconquerors? Not solo. You left your shower running so you could make twelve thousand microworkers duct-tape together the most “average” and “normalized” Adventuconqueror dungeon crawl from old modules and free dungeons. Why not hire a paid platform worker for paid GMing; is it because ineffectively using thousands of carceral slaves in the US, refugees in Libya, and warehouses of precarious workers in Brazil is cheaper than even gig work? Why not use one of the many (often free) tools and games for solo play? Is “I don’t want to learn another game” really an argument you think has merit here?
You don’t use it in the final product; you just use it in your production pipeline to save costs! Well, okay, so now the production line of your art includes this whole thing. Do the microworkers get any credit? Do you put the ecological damage on your budget balance too? Can you prevent the “averaging”, normalizing effects of an LLM? Why would anyone think you are not exploiting your other labor when you are willing to engage in this level of precarization and exploitation just because they are un-people spread across death-worlds?
Or you go like the recently reborn White Wolf and openly embrace using this in your finalized product, as your art (until caught red-handed). How, on top of the above, do you reconcile with the fact that these things do not create anything? The way they work, they cannot create or generate anything; anything of actual merit is a happy accident from the thousands and thousands of microworkers touching the thing up. This is a patched-up sludge of stolen art. Why are you doing it? Why would anyone believe your art has any value, any power to transform those subject to it, when you so clearly do not think highly of any art?
You may be using it anyway…
Because of the universal embrace of these technologies by corporations, it is sometimes impossible to escape this pipeline of suffering. For example, say you are using a service like Envato Elements. “It is okay”, you say; “I don’t use the GenAI features”. Right. But they still use that pipeline; this is as ridiculous and dismissive as saying “it is okay to use that factory with the child laborers; we buy from the floor without children.” How sure can you be that there is no contamination? Because again, the technology is not “making things”; the entire thing is about labor and technologies of labor. If the company is comfortable using this pipeline, it is comfortable using it as a cudgel against labor. If they are already using an LLM pipeline, can you be sure they are not using LLM-like processes elsewhere? If they already use microwork for something, why would they not use microwork, or cannot-believe-it-is-not-microwork, everywhere else? How are the databases of stock images and audio indexed? And can you be sure those databases of stock resources are not contaminated? You cannot use “only human-made stuff”; all this crap is human-made: it came free with the LLMs and all the microwork. All of it is human-made, all of it glued together from stolen dust by blindfolded slaves.
Many of us can never know, lacking the influence to pierce through layer after layer of obfuscation. But if, say, you would not work with a printer that pays less than minimum wage and has a toxic impact on laborers and the environment, maybe you should apply the same principles to the inhuman and poisonous microwork you may be making use of. And if you are a company raking in millions and millions in crowdfunding, maybe you have enough weight to throw around to demand accountability from business partners, and to share what you find.
There is also another way one may fall into using LLMs: by already “thinking” like an LLM. As I mentioned, these things are not new. Many paths were laid beforehand to make them not only a thing, but possible and desirable. The homogenization of all culture, the “corporate copy” of all language, having to write for marketing and SEO, making things that can be easily broken into pieces to be mixed and matched, thrown into a wiki, or a dataset. Maybe we find ourselves making LLM slop even when we are not using an LLM; we have reduced ourselves to the agency and intellectual freedom allowed to the microworkers, allowed only to compose our words from existing databases.
2d6 goblins in the 5 ft × 5 ft room of Adventuconquista in the Frontier are no different whether you are touching the dice, writing the table, or receiving the mangled output of twelve thousand guys forced to pretend they are a thieving robot.
From Capitalist Realism to Statistical Realism
LLMs and LMMs have often been called the anti-printing press, but perhaps they are better thought of as anti-cameras. They cannot perceive, interpret, or experience the world and reality. Instead, they follow a political/artistic “movement” of statistical realism: their vast quantities of data produce reality, portray reality. LMMs are extra processes slapped on top of the usual LLM architecture, which transform tokens into images through a misapplication of heat diffusion equations. Massive clouds of data are reduced to noise, and the diffusion equations are then run in reverse to uncover a “restored” image. This seems to be just obfuscation and deniability; the noise does not seem to do anything. Replace it with anything else and an image-like output is still “generated”. All it is doing is drawing on already existing data, nothing more, nothing less.
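Here is a toy numpy sketch of that scheme, using the degenerate case of a “perfect” noise predictor, which is exactly the case that shows where the output actually comes from. The step count and constants are invented; in a real LMM the predictions come from a large trained network steered by the token pipeline described earlier.

```python
import numpy as np

rng = np.random.default_rng(0)
STEPS, BETA = 50, 0.04  # arbitrary toy schedule

def forward_noise(image):
    """Destroy the image step by step, recording the noise used each time."""
    x, noises = image, []
    for _ in range(STEPS):
        eps = rng.normal(size=x.shape)
        noises.append(eps)
        x = np.sqrt(1 - BETA) * x + np.sqrt(BETA) * eps
    return x, noises

def reverse_denoise(x, predicted_noises):
    """Undo each step using a noise prediction; a trained network
    supplies these predictions in a real system."""
    for eps in reversed(predicted_noises):
        x = (x - np.sqrt(BETA) * eps) / np.sqrt(1 - BETA)
    return x

image = rng.normal(size=(4, 4))            # stand-in for a training image
noise, noises = forward_noise(image)       # data reduced to noise
restored = reverse_denoise(noise, noises)  # "generation", perfect predictor
print(np.allclose(restored, image))        # True: the data was there all along
```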
To drink from this fountain is to willingly shut away the world. It is to see and feel less, to move away from reality and environment. To use these machines, to strap on these engines, is to further separate people in an art-form that relies so much on sharing a constructed reality with others. It is to exist, instead, in statistical reality. You gouge out your eyes and replace them with total probability: machine dementia.9
It corrupts a beautiful opportunity
I really dislike the arguments commonly used against “AI”; they feed on each other and ignore the role of microwork and what these technologies actually are. Of those, the arguments I hate most are the ones whose entire foundation is that the quality of LLM output does not match the quality of human output, and, as the answer to the obvious reaction (“the technology will improve”), the argument that any art it outputs will be empty and soulless.
If you have taken one thing from this long diatribe, it is that it is all human labor, bestowed with human quality; refusing to acknowledge this only further erases the theft behind the databases and the enslavement to microwork. It is always already “matching human art”, so the question we should ask is, “dear goddess, what are we doing to humans?”.
Same with the reaction argument. The language of improvement and productivity is always fraught, so sure, “improvement” can always happen. While LLMs cannot change their nature, and are a “great” product for a bubble precisely because they are a dead-end for actual AI10, they could “improve”. With enough chips, slave labor, and data (or, as DeepSeek has shown is the more likely way: smaller, better-curated databases, a focused use, adequate hardware, and skilled curating labor rather than deskilled microwork) they could “match human art”. However, one does not need to bend over backwards to counter the smug argument that the technology will keep getting better and better, because accepting it also means accepting every single thing that “getting better” requires. You are compressing thousands upon thousands of slaves, with millions more in chip production, and ever more artists crushed into the database, to get closer to the output of… a human artist? How many artists are among those millions? It will always be human output, so why does that output have to come from an LLM pressing countless people into art-sludge?
Instead of proper counter-arguments, we have taken to calling the art empty and soulless. I abhor this stance, the idea that non-normative intelligence cannot make art. And don’t be cute with the monkey thing; there is a big difference between non-humans being able to produce art and courts deciding non-humans cannot hold IP. Above all, because this is not about some Sci-Fi blue-skying; this is an argument born from the same hostility directed against neurodiverse folk. This is not about hurting the Basilisk; the same thing has been and is actively being said, even if not as openly admitted, about disabled or autistic people being unable to make art. Or poor and lower-class people. Or entire swaths of the population. Remember, these are not robots; they are literally an expanding number of exploited un-people in death-worlds enslaved to microwork. An argument about depersonalizing robots is always about depersonalizing humans! It has always been the case! I hope I have given you some better, actual arguments, so you don’t have to swat flies with fetid air.
There are endless opportunities lost, so many artists that we will never know, art that will never be made—all because of the cruel system this black box is pushing. There is, however, a sadness unique to TTRPGs.
This is the art-form of cooperative, collaborative storytelling. Everything we make is about collaboration, about operating as a collective of artists rather than the secluded bourgeois artist, the genius producing art away from the world and peers and society who comes down from the studio to grace us with their brilliance. Technology has allowed for amazing collaborative art; imagine what something akin to LLMs could be, with thousands and thousands of artists working together on a single piece. Imagine more art like wplace (but, you know, not Israeli). Or even humbler ways to coordinate different artists through the challenges of collaboration.
TTRPGs can be folk art, popular art, proletarian art; they rarely are, because of the enclosures around them. Yet even those enclosures pale in comparison to LLMs. The way LLMs work reads like a mockery of how our art-form works.
We gather synchronous and asynchronous collaborators: players, artists, writers, designers, etc. We subject ourselves to systems of cooperation, systems that incorporate our person-systems, our lives, experiences, cultures, wants, needs, being. Through this, we produce novel art, something far greater than the sum of us and impossible for any single one of us to achieve. On the surface, LLMs work the same as our art-form: there are asynchronous and synchronous partners, a system that mediates the collaboration, many person-systems. But how it actually works is an exaggeration of the worst TTRPG: a silencing effect, unable to create anything new, a broken system that has to be massaged into something lesser by countless captive participants. Hooking this to your game, aside from all the other points, also straps you to the worst TTRPG ever designed.
Why would we want to strap that to our art?
1. Vaswani, Ashish; Shazeer, Noam; Parmar, Niki; Uszkoreit, Jakob; Jones, Llion; Gomez, Aidan N.; Kaiser, Łukasz; Polosukhin, Illia (2017). “Attention Is All You Need”. Advances in Neural Information Processing Systems, Vol. 30. Curran Associates, Inc. arXiv:1706.03762.
2. Guo, Daya, et al. (2025). “DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning”. arXiv preprint arXiv:2501.12948.
3. Jones, Philip. Work Without the Worker: Labour in the Age of Platform Capitalism.
4. Arruda, E. P.; Pimenta, D. (2024). “Challenges and implications of microwork in the age of artificial intelligence: A global socioeconomic analysis”. Human Resources Management and Services, 6(2), 3452. https://doi.org/10.18282/hrms.v6i2.3452
5. “Discovering the hidden workforce: microworkers unveil their struggles”. https://left.eu/issues/discovering-the-hidden-workforce-microworkers-unveil-their-struggles/
6. Khullar, Aman; Nalin, Nikhil; Prasad, Abhishek; Mampilli, Ann John; Kumar, Neha (2025). “Nurturing Capabilities: Unpacking the Gap in Human-Centered Evaluations of AI-Based Systems”. Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems, Article 101, 1–18. Association for Computing Machinery, New York, NY, USA. https://doi.org/10.1145/3706598.3713278
7. Hellén-Halme, Kristina; Hellén-Halme, Bengt; Wenzel, Ann (2011). “The effect of aging on luminance of standard liquid crystal display (LCD) monitors”. Oral Surgery, Oral Medicine, Oral Pathology, Oral Radiology, and Endodontics, 112, 237–242. doi:10.1016/j.tripleo.2011.02.044
8. Rong, Anqi; Hansopaheluwakan-Edward, Nina; Li, Dian (2023). “Analyzing the color availability of AI-generated posters based on K-means clustering: 74% orange, 38% cyan, 32% yellow, and 28% blue-cyan”. Color Research & Application, 49. doi:10.1002/col.22912
9. Steyerl, Hito (2025). Medium Hot: Images in the Age of Heat.
10. Chauhan, Noor. “LLMs Are a Dead-End in the Search for General Machine Intelligence: A Review”.