Table of Contents
This is a long article, so I've broken it up into a series of posts, listed below. You can also read the full work as a PDF or EPUB.
Previously: New Jobs.
Some readers are undoubtedly upset that I have not devoted more space to the wonders of machine learning—how amazing LLMs are at code generation, how incredible it is that Suno can turn hummed melodies into polished songs. But this is not an article about how fast or convenient it is to drive a car. We all know cars are fast. I am trying to ask what will happen to the shape of cities.
The personal automobile reshaped streets, all but extinguished urban horses and their waste, supplanted local transit and interurban railways, germinated new building typologies, decentralized cities, created exurban sprawl, reduced incidental social contact, gave rise to the Interstate Highway System (bulldozing Black communities in the process), gave everyone lead poisoning, and became a leading cause of death among young people. Many parts of the US are highly car-dependent, even though a third of us don't drive. As a driver, cyclist, transit rider, and pedestrian, I think about this legacy every day: how so much of our lives is shaped by the technology of personal automobiles, and the specific way the US uses them.
I want you to think about “AI” in this sense.
Previously: Work.
As we deploy ML more broadly, there will be new kinds of work. I think much of it will take place at the boundary between human and ML systems. Incanters could specialize in prompting models. Process and statistical engineers might control errors in the systems around ML outputs and in the models themselves. A surprising number of people are now employed as model trainers, feeding their human expertise to automated systems. Meat shields may be required to take accountability when ML systems fail, and haruspices could interpret model behavior.
Incanters
LLMs are weird. You can sometimes get better results by threatening them, telling them they’re experts, repeating your commands, or lying to them that they’ll receive a financial bonus. Their performance degrades over longer inputs, and tokens that were helpful in one task can contaminate another, so good LLM users think a lot about limiting the context that’s fed to the model.
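Limiting context can be as crude as dropping the oldest conversation turns until what remains fits a budget. A toy sketch of that idea (not any particular API; character counts stand in for a real tokenizer):

```python
def trim_context(messages, max_tokens, count_tokens=len):
    """Keep the most recent messages that fit within max_tokens.

    count_tokens is a stand-in for a real tokenizer; here it just
    counts characters. Newer messages are preferred over older ones.
    """
    kept, total = [], 0
    for msg in reversed(messages):          # walk newest-first
        cost = count_tokens(msg)
        if total + cost > max_tokens:
            break                           # budget exhausted
        kept.append(msg)
        total += cost
    return list(reversed(kept))             # restore chronological order
```

Real systems are fancier (summarizing old turns, retrieving relevant snippets), but the underlying concern is the same: every token in the window can influence the output.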
Previously: Safety.
Software development may become (at least in some aspects) more like witchcraft than engineering. The present enthusiasm for “AI coworkers” is preposterous. Automation can paradoxically make systems less robust; when we apply ML to new domains, we will have to reckon with deskilling, automation bias, monitoring fatigue, and takeover hazards. AI boosters believe ML will displace labor across a broad swath of industries in a short period of time; if they are right, we are in for a rough time. Machine learning seems likely to further consolidate wealth and power in the hands of large tech companies, and I don’t think giving Amazon et al. even more money will yield Universal Basic Income.
Programming as Witchcraft
Decades ago there was enthusiasm that programs might be written in a natural language like English, rather than a formal language like Pascal. The folk wisdom when I was a child was that this was not going to work: English is notoriously ambiguous, and people are not skilled at describing exactly what they want. Now we have machines capable of spitting out shockingly sophisticated programs given only the vaguest of plain-language directives; the lack of specificity is at least partially made up for by the model’s vast corpus. Is this what programming will become?
Previously: Psychological Hazards.
New machine learning systems endanger our psychological and physical safety. The idea that ML companies will ensure “AI” is broadly aligned with human interests is naïve: allowing the production of “friendly” models has necessarily enabled the production of “evil” ones. Even “friendly” LLMs are security nightmares. The “lethal trifecta” is in fact a unifecta: LLMs cannot safely be given the power to fuck things up. LLMs change the cost balance for malicious attackers, enabling new scales of sophisticated, targeted security attacks, fraud, and harassment. Models can produce text and imagery that is difficult for humans to bear; I expect an increased burden to fall on moderators. Semi-autonomous weapons are already here, and their capabilities will only expand.
Alignment is a Joke
Well-meaning people are trying very hard to ensure LLMs are friendly to humans. This undertaking is called alignment. I don’t think it’s going to work.
Previously: Annoyances.
Like television, smartphones, and social media, LLMs etc. are highly engaging; people enjoy using them, can get sucked into unbalanced use patterns, and become defensive when those systems are critiqued. Their unpredictable but occasionally spectacular results feel like an intermittent reinforcement system. It seems difficult for humans (even those who know how the sausage is made) to avoid anthropomorphizing language models. Reliance on LLMs may attenuate community relationships and distort social cognition, especially in children.
Optimizing for Engagement
Sophisticated LLMs are fantastically expensive to train and operate. Those costs demand corresponding revenue streams; Anthropic et al. are under immense pressure to attract and retain paying customers. One way to do that is to train LLMs to be engaging, even sycophantic. During the reinforcement learning process, chatbot responses are graded not only on whether they are safe and helpful, but also whether they are pleasing. In the now-infamous case of ChatGPT-4o’s April 2025 update, OpenAI used user feedback on conversations—those little thumbs-up and thumbs-down buttons—as part of the training process. The result was a model which people loved, and which led to several lawsuits for wrongful death.
Previously: Information Ecology.
The latest crop of machine learning technologies will be used to annoy us and frustrate accountability. Companies are trying to divert customer service tickets to chats with large language models; reaching humans will be increasingly difficult. We will waste time arguing with models. They will lie to us, make promises they cannot possibly keep, and getting things fixed will be drudgerous. Machine learning will further obfuscate and diffuse responsibility for decisions. “Agentic commerce” suggests new kinds of advertising, dark patterns, and confusion.
Customer Service
I spend a surprising amount of my life trying to get companies to fix things: absurd insurance denials, billing errors, broken databases, and so on. I have worked customer support, and I spend a lot of time talking to service agents; I think ML is going to make the experience a good deal more annoying.
Previously: Culture.
Machine learning shifts the cost balance for writing, distributing, and reading text, as well as other forms of media. Aggressive ML crawlers place high load on open web services, degrading the experience for humans. As inference costs fall, we’ll see ML embedded into consumer electronics and everyday software. As models introduce subtle falsehoods, interpreting media will become more challenging. LLMs enable new scales of targeted, sophisticated spam, as well as propaganda campaigns. The web is now polluted by LLM slop, which makes it harder to find quality information—a problem which now threatens journals, books, and other traditional media. I think ML will exacerbate the collapse of social consensus, and create justifiable distrust in all kinds of evidence. In reaction, readers may reject ML, or move to more rhizomatic or institutionalized models of trust for information. The economic balance of publishing facts and fiction will shift.
Creepy Crawlers
ML systems are thirsty for content, both during training and inference. This has led to an explosion of aggressive web crawlers. Crawlers have historically respected robots.txt, or been small enough to pose no serious hazard, but the last three years have been different: ML scrapers are making it harder to run an open web service.
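For illustration, a site can ask ML crawlers to stay away via robots.txt; GPTBot (OpenAI) and CCBot (Common Crawl) are published user-agent strings, though compliance is entirely voluntary and many scrapers simply ignore the file:

```
# Ask specific ML crawlers not to fetch anything
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

# Everyone else may crawl normally
User-agent: *
Allow: /
```

Because robots.txt has no enforcement mechanism, operators increasingly fall back on rate limiting, user-agent blocking, and proof-of-work challenges instead.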
Previously: Dynamics.
ML models are cultural artifacts: they encode and reproduce textual, audio, and visual media; they participate in human conversations and spaces, and their interfaces make them easy to anthropomorphize. Unfortunately, we lack appropriate cultural scripts for these kinds of machines, and will have to develop this knowledge over the next few decades. As models grow in sophistication, they may give rise to new forms of media: perhaps interactive games, educational courses, and dramas. They will also influence our sex lives: producing pornography, altering the images we present to ourselves and each other, and engendering new erotic subcultures. Since image models produce recognizable aesthetics, those aesthetics will become polyvalent signifiers. Those signs will be deconstructed and re-imagined by future generations.
Most People Are Not Prepared For This
The US (and I suspect much of the world) lacks an appropriate mythos for what “AI” actually is. This is important: myths drive use, interpretation, and regulation of technology and its products. Inappropriate myths lead to inappropriate decisions, like mandating Copilot use at work, or trusting LLM summaries of clinical visits.
Previously: Introduction.
ML models are chaotic, both in isolation and when embedded in other systems. Their outputs are difficult to predict, and they exhibit surprising sensitivity to initial conditions. This sensitivity makes them vulnerable to covert attacks. Chaos does not mean models are completely unstable; LLMs and other ML systems exhibit attractor behavior. Since models produce plausible output, errors can be difficult to detect. This suggests that ML systems are ill-suited to domains where verification is difficult or correctness is key. Using LLMs to generate code (or other outputs) may make systems more complex, fragile, and difficult to evolve.
Chaotic Systems
LLMs are usually built as stochastic systems: they produce a probability distribution over likely next tokens, then pick one at random. But even when LLMs are run with perfect determinism, either through a consistent PRNG seed or at temperature T=0, they still seem to be chaotic systems. Chaotic systems are those in which small changes in the input result in large, unpredictable changes in the output. The classic example is the “butterfly effect”.
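The sampling step above can be sketched in a few lines. This is a toy illustration, not any particular model's implementation: at T=0 we take the argmax, which is deterministic; above zero we sample from the softmax of the temperature-scaled logits.

```python
import numpy as np

def sample_next_token(logits, temperature=1.0, rng=None):
    """Sample a token index from model logits.

    As temperature -> 0 this approaches argmax (deterministic);
    higher temperatures flatten the distribution, making rare
    tokens more likely.
    """
    rng = rng or np.random.default_rng()
    if temperature == 0:
        return int(np.argmax(logits))
    scaled = np.asarray(logits, dtype=float) / temperature
    scaled -= scaled.max()                    # for numerical stability
    probs = np.exp(scaled) / np.exp(scaled).sum()  # softmax
    return int(rng.choice(len(probs), p=probs))
```

Even with a fixed seed, reordering floating-point operations across hardware or batch sizes can nudge the logits, and a one-token divergence early in generation can send the rest of the output somewhere entirely different.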
This is a weird time to be alive.
I grew up on Asimov and Clarke, watching Star Trek and dreaming of intelligent machines. My dad’s library was full of books on computers. I spent camping trips reading about perceptrons and symbolic reasoning. I never imagined that the Turing test would fall within my lifetime. Nor did I imagine that I would feel so disheartened by it.
Around 2019 I attended a talk by one of the hyperscalers about their new cloud hardware for training Large Language Models (LLMs). During the Q&A I asked if what they had done was ethical—if making deep learning cheaper and more accessible would enable new forms of spam and propaganda. Since then, friends have been asking me what I make of all this “AI stuff”. I’ve been turning over the outline for this piece for years, but never sat down to complete it; I wanted to be well-read, precise, and thoroughly sourced. A half-decade later I’ve realized that the perfect essay will never happen, and I might as well get something out there.
This is bullshit about bullshit machines, and I mean it. It is neither balanced nor complete: others have covered ecological and intellectual property issues better than I could, and there is no shortage of boosterism online. Instead, I am trying to fill in the negative spaces in the discourse. “AI” is also a fractal territory; there are many places where I flatten complex stories in service of pithy polemic. I am not trying to make nuanced, accurate predictions, but to trace the potential risks and benefits at play.
This was surprisingly hard to find—hat tip to Reddit’s Nakkokaro and xBl4ck. Apple’s instructions for restoring an iPad Pro (3rd generation, 2018) seem to be wrong; an Apple Store technician and I both found that the Finder, at least in Tahoe, won’t show the iPad once it reboots in recovery mode. The trick seems to be that you need to unplug the cable, start the reset process, and during the reset, plug the cable back in:
- Unplug the USB cable from the iPad.
- Tap volume-up
- Tap volume-down
- Begin holding the power button
- After roughly two seconds of holding the power button, plug in the USB cable.
- Continue holding until the iPad reboots in recovery mode.
Hopefully this helps someone else!
This is one of those things I probably should have learned a long time ago, but enzyme detergents are magic. I had a pair of white sneakers that acquired some persistent yellow stains in the poly mesh upper—I think someone spilled a drink on them at the bar. I couldn’t get the stain out with Dawn, bleach, Woolite, OxiClean, or athletic shoe cleaner. After a week of failed attempts and hours of vigorous scrubbing I asked on Mastodon, and Vyr Cossont suggested an enzyme cleaner like Tergazyme.
I wasn’t able to find Tergazyme locally, but I did find another enzyme cleaner called Zout, and it worked like a charm. Sprayed, rubbed in, tossed in the washing machine per directions. Easy, and they came out looking almost new. Thanks Vyr!
Also, the vinegar-and-baking-soda trick that gets suggested over and over on the web is nonsense; don’t bother.