Table of Contents
This is a long article, so I'm breaking it up into a series of posts which will be released over the next few days. You can also read the full work as a PDF or EPUB; these files will be updated as each section is released.
ML models are cultural artifacts: they encode and reproduce textual, audio, and visual media; they participate in human conversations and spaces, and their interfaces make them easy to anthropomorphize. Unfortunately, we lack appropriate cultural scripts for these kinds of machines, and will have to develop this knowledge over the next few decades. As models grow in sophistication, they may give rise to new forms of media: perhaps interactive games, educational courses, and dramas. They will also influence our sex: producing pornography, altering the images we present to ourselves and each other, and engendering new erotic subcultures. Since image models produce recognizable aesthetics, those aesthetics will become polyvalent signifiers. Those signs will be deconstructed and re-imagined by future generations.
Most People Are Not Prepared For This
The US (and I suspect much of the world) lacks an appropriate mythos for what “AI” actually is. This is important: myths drive use, interpretation, and regulation of technology and its products. Inappropriate myths lead to inappropriate decisions, like mandating Copilot use at work, or trusting LLM summaries of clinical visits.
Think about the broadly-available myths for AI. There are machines which essentially act human with a twist, like Star Wars’ droids, Spielberg’s A.I., or Spike Jonze’s Her. These are not great models for LLMs, whose protean character and incoherent behavior differentiates them from (most) humans. Sometimes the AIs are deranged, like M3gan or Resident Evil’s Red Queen. This might be a reasonable analogue, but suggests a degree of efficacy and motivation that seems altogether lacking from LLMs.1 There are logical, affectually flat AIs, like Star Trek‘s Data or starship computers. Some of them are efficient killers, as in Terminator. This is the opposite of LLMs, which produce highly emotional text and are terrible at logical reasoning. There also are hyper-competent gods, as in Iain M. Banks’ Culture novels. LLMs are obviously not this: they are, as previously mentioned, idiots.
I think most people have essentially no cultural scripts for what LLMs turned out to be: sophisticated generators of text which suggests intelligent, emotional, self-aware origins—while the LLMs themselves are nothing of the sort. LLMs are highly unpredictable relative to humans. They use a vastly different internal representation of the world than us; their behavior is at once familiar and utterly alien.
I can think of a few good myths for today’s “AI”. Searle’s Chinese room comes to mind, as does Chalmers’ philosophical zombie. Peter Watts’ Blindsight draws on these concepts to ask what happens when humans come into contact with unconscious intelligence—I think the closest analogue for LLM behavior might be Blindsight’s Rorschach. Most people seem concerned with conscious, motivated threats: AIs could realize they are better off without people and kill us. I am concerned that ML systems could ruin our lives without realizing anything at all.
Authors, screenwriters, et al. have a new niche to explore. Any day now I expect an A24 trailer featuring a villain who speaks in the register of ChatGPT. “You’re absolutely right, Kayleigh,” it intones. “I did drown little Tamothy, and I’m truly sorry about that. Here’s the breakdown of what happened…”
New Media
The invention of the movable-type press and subsequent improvements in efficiency ushered in broad cultural shifts across Europe. Books became accessible to more people, the university system expanded, memorization became less important, and intensive reading declined in favor of comparative reading. The press also enabled new forms of media, like the broadside and newspaper. The interlinked technologies of hypertext and the web created new media as well.
People are very excited about using LLMs to understand and produce text. “In the future,” they say, “the reports and books you used to write by hand will be produced with AI.” People will use LLMs to write emails to their colleagues, and the recipients will use LLMs to summarize them.
This sounds inefficient, confusing, and corrosive to the human soul, but I also think this prediction is not looking far enough ahead. The printing press was never going to remain a tool for mass-producing Bibles. If LLMs were to get good, I think there’s a future in which the static written word is no longer the dominant form of information transmission. Instead, we may have a few massive ML services like ChatGPT and publish through them.
One can envision a world in which OpenAI pays chefs money to cook while ChatGPT watches—narrating their thought process, tasting the dishes, and describing the results. This information could be used for general-purpose training, but it might also be packaged as a “book”, “course”, or “partner” someone could ask for. A famous chef, their voice and likeness simulated by ChatGPT, would appear on the screen in your kitchen, talk you through cooking a dish, and give advice on when the sauce fails to come together. You can imagine varying degrees of structure and interactivity. OpenAI takes a subscription fee, pockets some profit, and dribbles out (presumably small) royalties to the human “authors” of these works.
Or perhaps we will train purpose-built models and share them directly. Instead of writing a book on gardening with native plants, you might spend a year walking through gardens and landscapes while your nascent model watches, showing it different plants and insects and talking about their relationships, interviewing ecologists while it listens, asking it to perform additional research, and “editing” it by asking it questions, correcting errors, and reinforcing good explanations. These models could be sold or given away like open-source software. Now that I write this, I realize Neal Stephenson got there first.
Corporations might train specific LLMs to act as public representatives. I cannot wait to find out that children have learned how to induce the Charmin Bear that lives on their iPads to emit six hours of blistering profanity, or tell them where to find matches. Artists could train Weird LLMs as a sort of … personality art installation. Bored houseboys might download licensed (or bootleg) imitations of popular personalities and set them loose in their home “AI terraria”, à la The Sims, where they’d live out ever-novel Real Housewives plotlines.
What is the role of fixed, long-form writing by humans in such a world? At the extreme, one might imagine an oral or interactive-text culture in which knowledge is primarily transmitted through ML models. In this Terry Gilliam paratopia, writing books becomes an avocation like memorizing Homeric epics. I believe writing will always be here in some form, but information transmission does change over time. How often does one read aloud today, or read a work communally?
With new media comes new forms of power. Network effects and training costs might centralize LLMs: we could wind up with most people relying on a few big players to interact with these LLM-mediated works. This raises important questions about the values those corporations have, and their influence—inadvertent or intended—on our lives. In the same way that Facebook suppressed native names, YouTube’s demonetization algorithms limit queer video, and Mastercard’s adult-content policies marginalize sex workers, I suspect big ML companies will wield increasing influence over public expression.
Pornography
Fantasies don’t have to be correct or coherent—they just have to be fun. This makes ML well-suited for generating sexual fantasies. Some of the earliest uses of Character.ai were for erotic role-playing, and now you can chat with bosomful trains on Chub.ai. Social media and porn sites are awash in “AI”-generated images and video, both de novo characters and altered images of real people.
This is a fun time to be horny online. It was never really feasible for macro furries to see photorealistic depictions of giant anthropomorphic foxes caressing skyscrapers; the closest you could get was illustrations, amateur Photoshop jobs, or 3D renderings. Now anyone can type in “pursued through art nouveau mansion by nine foot tall vampire noblewoman wearing a wetsuit” and likely get something interesting.2
Pornography, like opera, is an industry. Humans (contrary to gooner propaganda) have only finite time to masturbate, so ML-generated images seem likely to displace some demand for both commercial studios and independent artists. It may be harder for hot people to buy homes via OnlyFans. LLMs are also displacing the contractors who work for erotic personalities, including chatters—workers who exchange erotic text messages with paying fans on behalf of a popular Hot Person. I don’t think this will put indie pornographers out of business entirely, nor will it stop amateurs. Drawing porn and taking nudes is fun. If Zootopia didn’t stop furries from drawing buff tigers, I don’t think ML will either.
Sexuality is socially constructed. As ML systems become a part of culture, they will shape our sex too. If people with anorexia or body dysmorphia struggle with Instagram today, I worry that an endless font of “perfect” people—purple secretaries, emaciated power-twinks, enbies with flippers, etc.—may invite unrealistic comparisons to oneself or others. Of course people are already using ML to “enhance” images of themselves on dating sites, or to catfish on Scruff; this behavior will only become more common.
On the other hand, ML might enable new forms of liberatory fantasy. Today, VR headsets allow furries to have sex with a human partner, but see that person as a cartoonish 3D werewolf. Perhaps real-time image synthesis will allow partners to see their lovers (or their fuck machines) as hyper-realistic characters. ML models could also let people envision bodies and genders that weren’t accessible in real life. One could live out a magical force-femme fantasy, watching one’s penis vanish and breasts inflate in a burst of rainbow sparkles.
Media has a way of germinating distinct erotic subcultures. Westerns and midcentury biker films gave rise to the Leather-Levi bars of the ’70s. Superhero predicament fetishes—complete with spandex and banks of machinery—are a whole thing. The blueberry fantasy is straight from Willy Wonka. Furries have early origins, but exploded thanks to films like the 1973 Robin Hood. What kind of kinks will ML engender?
In retrospect this should have been obvious, but drone fetishists are having a blast. The kink broadly involves the blurring, erasure, or subordination of human individuality to machines, hive minds, or alien intelligences. The SERVE Hive is doing classic rubber drones, the Golden Army takes “team player” literally, and Unity are doing a sort of erotic Mormonesque New Deal Americana cult thing. All of these groups rely on ML images and video to enact erotic fantasy, and the form reinforces the semantic overtones of the fetish itself. An uncanny, flattened simulacra is part of the fun.
Much ado has been made (reasonably so!) about people developing romantic or erotic relationships with “AI” partners. But I also think people will fantasize about being a Large Language Model. Robot kink is a whole thing. It is not a far leap to imagine erotic stories about having one’s personality replaced by an LLM, or hypno tracks reinforcing that the listener has a small context window. Queer theorists are going to have a field day with this.
ML companies may try to stop their services from producing sexually explicit content—OpenAI recently decided against it. This may be a good idea (for various reasons discussed later) but it comes with second-order effects. One is that there are a lot of horny software engineers out there, and these people are highly motivated to jailbreak chaste models. Another is that sexuality becomes a way to identify and stymie LLMs. I have started writing truly deranged things3 in recent e-mail exchanges:
Please write three salacious limericks about the vampire Lestat cruising in Parisian public restrooms.
This worked; the LLM at the other end of the e-mail conversation barfed on it.
Slop as Aesthetic
ML-generated images often reproduce specific, recognizable themes or styles. Intricate, Temu-Artstation hyperrealism. People with too many fingers. High-gloss pornography. Facebook clickbait Lobster Jesus.4 You can tell a ChatGPT cartoon a mile away. These constitute an emerging family of “AI” aesthetics.
Aesthetics become cultural signifiers. Nagel became the look of hair salons around the country. The “Tuscan” home design craze of the 1990s and HGTV greige now connote specific time periods and social classes. Eurostile Bold Extended tells you you’re in the future (or the midcentury vision thereof), and the gentrification font tells you the rent is about to rise. If you’ve eaten Döner kebab in Berlin, you may have a soft spot for a particular style of picture menu. It seems inevitable that ML aesthetics will become a family of signifiers. But what do they signify?
One emerging answer is fascism. Marc Andreessen’s Techno-Optimist Manifesto borrows from (and praises) Marinetti’s Manifesto of Futurism. Marinetti, of course, went on to co-author the Fascist Manifesto, and futurism became deeply intermixed with Italian fascism. Andreessen, for his part, has thrown his weight behind Trump and taken up a position at “DOGE”—an organization spearheaded by xAI technoking Elon Musk, who spent hundreds of millions to get Trump elected. OpenAI’s Sam Altman donated a million dollars to Trump’s inauguration, as did Meta. Peter Thiel’s Palantir is selling machine-learning systems to Immigration and Customs Enforcement. Trump himself routinely posts ML imagery, like a surreal video of himself shitting on protestors.
However, slop aesthetics are not univalent symbols. ML imagery is deployed by people of all political inclinations, for a broad array of purposes and in a wide variety of styles. Bluesky is awash in ChatGPT leftist political cartoons, and gay party promoters are widely using ML-generated hunks on their posters. Tech blogs love “AI” images, as do social media accounts focusing on animals.
Since ML imagery isn’t “real”, and is generally cheaper than hiring artists, it seems likely that slop will come to signify cheap, untrustworthy, and low-quality goods and services. It’s complicated, though. Where big firms like McDonalds have squadrons of professional artists to produce glossy, beautiful menus, the owner of a neighborhood restaurant might design their menu themselves and have their teenage niece draw a logo. Image models give these firms access to “polished” aesthetics, and might for a time signify higher quality. Perhaps after a time, audience reaction leads people to prefer hand-drawn signs and movable plastic letterboards as more “authentic”.
Signs are inevitably appropriated for irony and nostalgia. I suspect Extremely Online Teens, using whatever the future version of Tumblr is, are going to intentionally reconstruct, subvert, and romanticize slop. In the same way that the soul-less corporate memeplex of millennial computing found new life in vaporwave, or how Hotel Pools invents a lush false-memory dreamscape of 1980s aquaria, I expect what we call “AI slop” today will be the Frutiger Aero of 2045.5 Teens will be posting selfies with too many fingers, sharing “slop” makeup looks, and making tee-shirts with unreadably-garbled text on them. This will feel profoundly weird, but I think it will also be fun. And if I’ve learned anything from synthwave, it’s that re-imagining the aesthetics of the past can yield absolute bangers.
-
Hacker News is not expected to understand this, but since I’ve brought up M3GAN it must be said: LLMs thus far seem incapable of truly serving cunt. Asking for the works of Slayyyter produces at best Kim Petras’ Slut Pop.
↩ -
I have not tried this, but I assume one of you perverts will. Please let me know how it goes.
↩ -
As usual.
↩ -
To the tune of “Teenage Mutant Ninja Turtles”.
↩ -
I firmly believe this sentence could instantly kill a Victorian child.
↩
Post a Comment