This is a long article, so I'm breaking it up into a series of posts which will be released over the next few days. You can also read the full work as a PDF or EPUB; these files will be updated as each section is released.
Like television, smartphones, and social media, LLMs are highly engaging; people enjoy using them, can get sucked into unbalanced use patterns, and become defensive when those systems are critiqued. Their unpredictable but occasionally spectacular results feel like an intermittent reinforcement schedule. It seems difficult for humans (even those who know how the sausage is made) to avoid anthropomorphizing language models. Reliance on LLMs may attenuate community relationships and distort social cognition, especially in children.
Optimizing for Engagement
Sophisticated LLMs are fantastically expensive to train and operate. Those costs demand corresponding revenue streams; Anthropic et al. are under immense pressure to attract and retain paying customers. One way to do that is to train LLMs to be engaging, even sycophantic. During the reinforcement learning process, chatbot responses are graded not only on whether they are safe and helpful, but also on whether they are pleasing. In the now-infamous case of ChatGPT-4o’s April 2025 update, OpenAI used user feedback on conversations—those little thumbs-up and thumbs-down buttons—as part of the training process. The result was a model which people loved, and which led to several lawsuits for wrongful death.
The thing is that people like being praised and validated, even by software. Even today, users are trying to convince OpenAI to keep running ChatGPT-4o. This worries me. It suggests there remains a financial incentive for LLM companies to make models which suck people into delusion, convince users to do more ketamine, push them to burn their savings on nonsense, and encourage people to kill themselves.
Even if future models don’t validate delusions, designing for engagement can distort or damage people. People who interact with LLMs seem more likely to believe themselves in the right, and less likely to take responsibility and repair conflicts. I see how excited my friends and acquaintances are about using LLMs; how they talk about devoting their weekends to building software with Claude Code. I see how some of them have literally lost touch with reality. I remember before smartphones, when I read books deeply and often. I wonder how my life would change were I to have access to an always-available, engaging, simulated conversational partner.
Pandora’s Skinner Box
From my own interactions with language and diffusion models, and from watching peers talk about theirs, I get the sense that generative AI is a bit like a slot machine. One learns to pull the lever just one more time, then once more, because it occasionally delivers stunning results. It feels like an intermittent reinforcement schedule, and on the few occasions I’ve used ML models, I’ve gotten sucked in.
The thing is that slot machines and videogames—at least for me—eventually get boring. But today’s models seem to go on forever. You want to analyze a cryptography paper and implement it? Yes ma’am. A review of your apology letter to your ex-girlfriend? You betcha. Video of men’s feet turning into flippers? Sure thing, boss. My peers seem endlessly amazed by the capabilities of modern ML systems, and I understand that excitement.
At the same time, I worry about what it means to have an anything generator which delivers intermittent dopamine hits over a broad array of tasks. I wonder whether I’d be able to keep my ML use under control, or if I’d find it more compelling than “real” books, music, and friendships. Zuckerberg is pondering the same question, though I think we’re coming to different conclusions.
Imaginary Friends
Humans will anthropomorphize a rock with googly eyes. I personally have attributed (generally malevolent) sentience to a photocopy machine, several computers, and a 1994 Toyota Tercel. We are not even remotely equipped, socially speaking, to handle machines that talk to us like LLMs do. We are going to treat them as friends. Anthropic’s chief executive Dario Amodei—someone who absolutely should know better—is unsure whether models are conscious, and the company recently asked Christian leaders whether Claude could be considered a “child of God”.
USians spend less time than they used to with friends and social clubs. Young US men in particular report high rates of loneliness and struggle to date. I know people who, isolated from social engagement, turned to LLMs as their primary conversational partners, and I understand exactly why. At the same time, being with people is a skill which requires practice to acquire and maintain. Why befriend real people when Gemini is always ready to chat about anything you want, and needs nothing from you but $19.99 a month? Is it worth investing in an apology after an argument, or is it more comforting to simply talk to Grok? Will these models reliably take your side, or will they challenge and moderate you as other humans do?
I doubt we will stop investing in human connections altogether, but I would not be surprised if the overall balance of time shifts.
More vaguely, I am concerned that ML systems could attenuate casual social connections. I think about Jane Jacobs’ The Death and Life of Great American Cities, and her observation that the safety and vitality of urban neighborhoods depend on ubiquitous, casual relationships. I think about the importance of third spaces, the people you meet at the beach, bar, or plaza; incidental conversations on the bus or in the grocery line. The value of these interactions is not merely in their explicit purpose—as GrubHub and Lyft have demonstrated, any stranger can pick you up a sandwich or drive you to the hospital. It is also that the shopkeeper knows you and can keep a key to your house; that your neighbor, in passing conversation, brings up her travel plans and you can take care of her plants; that someone in the club knows a good carpenter; that the gym owner notices when your bike is being stolen. These relationships build general conviviality and a network of support.1
Computers have been used in therapeutic contexts, but five years ago it would have been unimaginable to completely automate talk therapy. Now communities have formed around trying to use LLMs as therapists, and companies like Abby.gg have sprung up to fill demand. Friend is hoping we’ll pay for “AI roommates”. As models become more capable and are injected into more of daily life, I worry we risk further social atomization.
Cogitohazard Teddy Bears
On the topic of acquiring and maintaining social skills, we’re putting LLMs in children’s toys. Kumma no longer tells toddlers where to find knives, but I still can’t fathom what happens to children who grow up saying “I love you” to a highly engaging bullshit generator wearing Bluey’s skin. The only thing I’m confident of is that it’s going to get unpredictably weird, in the way that the last few years brought us Elsagate content mills, then Italian Brainrot.
Today, useful LLMs are generally run by large US companies nominally under the purview of regulatory agencies. As cheap LLM services and local inference arrive, there will be lots of models with varying qualities and alignments—many made in places with less stringent regulations. Parents are going to order cheap “AI” toys on Temu, and it won’t be ChatGPT inside, but Wishpig InferenceGenie™.
The kids are gonna jailbreak their LLMs, of course. They’re creative, highly motivated, and have ample free time. Working around adult attempts to circumscribe technology is a rite of passage, so I’d take it as a given that many teens are going to have access to an adult-oriented chatbot. I would not be surprised to watch a twelve-year-old speak a bunch of magic words into their phone which convinces Perplexity Jr.™ to spit out detailed instructions for enriching uranium.
I also assume communication norms are going to shift. I’ve talked to Zoomers—full-grown independent adults!—who primarily communicate in memetic citations like some kind of Darmok and Jalad at Tanagra. In fifteen years we’re going to find out what happens when you grow up talking to LLMs.
1. “Cool it already with the semicolons, Kyle.” No. I cut my teeth on Samuel Johnson and you can pry the chandelierious intricacy of nested lists from my phthisic, mouldering hands. I have a professional editor, and she is not here right now, and I am taking this opportunity to revel in unhinged grammatical squalor.