The widely read and discussed article "AI as Normal Technology" is a response to claims of "superintelligence," as its headline suggests. I'm largely in agreement with it. AGI and superintelligence can mean whatever you want; the terms are ill-defined and next to useless. AI is better at most things than most people, but what does that mean in practice, if an AI doesn't have volition? If an AI can't recognize the existence of a problem that needs a solution, and want to create that solution? It looks like the use of AI is exploding everywhere, particularly if you're in the technology industry. But outside of technology, AI adoption isn't likely to be faster than the adoption of any other new technology. Manufacturing is already heavily automated, and upgrading that automation requires significant investments of time and money. Factories aren't rebuilt overnight. Neither are farms, railways, or construction companies. Adoption is further slowed by the difficulty of getting from a demo to an application running in production. AI certainly has risks, but those risks have more to do with real harms arising from issues like bias and data quality than with the apocalyptic risks that many in the AI community worry about; those apocalyptic risks owe more to science fiction than to reality. (If you find an AI manufacturing paper clips, pull the plug, please.)
Still, there's one kind of risk that I can't stop thinking about, and that the authors of "AI as Normal Technology" only touch on, though they're good on the real, non-imagined risks. Those are the risks of scale: AI gives us the means to do things at volumes and speeds greater than we've ever had before. The ability to operate at scale is a huge advantage, but it's also a risk in its own right. In the past, we rejected qualified female and minority job applicants one at a time; maybe we rejected all of them, but a human still had to be burdened with each individual decision. Now we can reject them en masse, even with supposedly race- and gender-blind applications. In the past, police departments guessed who was likely to commit a crime one person at a time, a highly biased practice commonly known as "profiling."1 Most likely the supposed criminals are all in the same group, and most of those decisions are wrong. Now we can be wrong about entire populations in an instant, and our wrongness is justified because "an AI said so," a defense that's even more specious than "I was just obeying orders."
We need to think about this kind of risk carefully, though, because it isn't just about AI. It depends on other changes that have little to do with AI and everything to do with economics. Back in the early 2000s, Target outed a pregnant teenage girl to her parents by analyzing her purchases, determining that she was likely to be pregnant, and sending advertising circulars targeted at pregnant women to her home. This example is an excellent lens for thinking through the risks. First, Target's systems determined that the girl was pregnant through automated data analysis. No humans were involved. Data analysis isn't quite AI, but it's a very clear precursor (and could easily have been called AI at the time). Second, exposing a single teenage pregnancy is only a small part of a much bigger problem. In the past, a human pharmacist might have noticed a teenager's purchases and had a kind word with her parents. That's certainly an ethical issue, though I don't intend to write on the ethics of pharmacology. We all know that people make poor decisions, and that those decisions affect others. We also have ways to deal with those decisions and their effects, however inadequately. It's a much bigger issue that Target's systems have the potential for outing pregnant women at scale, and in an era when abortion is illegal or near-illegal in many states, that matters. In 2025, it's unfortunately easy to imagine a state attorney general subpoenaing data from any source, including retail purchases, that might help them identify pregnant women.
We can't chalk this up to AI, though it's a factor. We also need to account for the disappearance of human pharmacists, working in independent pharmacies where they could get to know their customers. We had the technology to do Target's data analysis back in the 1980s: We had mainframes that could process data at scale, we understood statistics, we had algorithms. We didn't have big disk drives, but we had magtape; so many miles of magtape! What we didn't have was the data, because the sales took place at thousands of independent businesses scattered throughout the world. Few of those independent pharmacies survive, at least in the US; in my town, the last one disappeared in 1996. When national chains replaced independent drugstores, the data became consolidated. Our data was held and analyzed by chains that combined data from thousands of retail locations. In 2025, even the chains are consolidating; CVS may end up being the last drugstore standing.
Whatever you may think about the transition from independent druggists to chains, in this context it's important to understand that what enabled Target to identify pregnancies wasn't a technological change; it was economics, glibly called "economies of scale." That economic shift may have been rooted in technology (specifically, the ability to manage supply chains across thousands of retail outlets), but it's not just about technology. It's about the ethics of scale. This kind of consolidation took place in almost every industry, from auto manufacturing to transportation to farming, and, of course, in almost all forms of retail sales. The collapse of small record labels, small publishers, small booksellers, small farms, small anything has everything to do with managing supply chains and distribution. (Distribution is really just supply chains in reverse.) The economics of scale enabled data at scale, not the other way around.
We can't think about the ethical use of AI without also thinking about the economics of scale. Indeed, the first generation of "modern" AI, something now condescendingly referred to as "classifying cat and dog photos," happened because the widespread use of digital cameras enabled photo-sharing sites like Flickr, which could be scraped for training data. Digital cameras didn't penetrate the market because of AI but because they were small, cheap, and convenient and could be built into cell phones. They created the data that made AI possible.
Data at scale is the necessary precondition for AI. But AI facilitates the vicious circle that turns data against the people it came from. How do we break out of this vicious circle? Whether AI is normal or apocalyptic technology really isn't the issue. Whether AI can do things better than humans isn't the issue either. AI makes mistakes; humans make mistakes. AI often makes different kinds of mistakes, but that doesn't seem important. What's important is that, whether mistaken or not, AI amplifies scale.3 It enables the drowning out of voices that certain groups don't want heard. It enables the swamping of creative spaces with dull sludge (now christened "slop"). It enables mass surveillance, not of a few people limited by human labor but of entire populations.
Once we realize that the problems we face are rooted in economics and scale, not superhuman AI, the question becomes: How do we change the systems in which we work and live in ways that preserve human initiative and human voices? How do we build systems with built-in economic incentives for privacy and fairness? We don't want to resurrect the nosy local druggist, but we prefer harms that are limited in scope to harms at scale. We don't want to depend on local boutique farms for our vegetables (that's only a solution for those who can afford to pay a premium), but we don't want vast corporate farms implementing economies of scale by cutting corners on cleanliness.4 "Big enough to fight regulators in court" is a kind of scale we can do without, along with "penalties are just a cost of doing business." We can't deny that AI has a role in scaling risks and abuses, but we also need to realize that the risks we should fear aren't the existential risks, the apocalyptic nightmares of science fiction.
The right thing to be afraid of is that individual humans are dwarfed by the scale of modern institutions. These are the same human risks and harms we've faced all along, usually without addressing them appropriately. Now they're magnified.
So, let's end with a provocation. We can certainly imagine AI that makes us 10x better programmers and software developers, though it remains to be seen whether that's really true. Can we imagine AI that helps us build better institutions, institutions that work on a human scale? Can we imagine AI that enhances human creativity rather than proliferating slop? To do so, we'll need to take advantage of things we can do that AI can't: specifically, the ability to want and the ability to enjoy. AI can certainly play Go, chess, and many other games better than a human, but it can't want to play chess, nor can it enjoy the game. Maybe an AI can create art or music (as opposed to just recombining clichés), but I don't know what it would mean to say that an AI enjoys listening to music or looking at art. Can it help us be creative? Can AI help us build institutions that foster creativity, frameworks within which we can enjoy being human?
Michael Lopp (aka @Rands) recently wrote:
I think we're screwed, not because of the power and potential of the tools. It starts with the greed of humans and how their machinations (and success) prey on the ignorant. We're screwed because these nefarious humans were already wildly successful before AI matured and now we've given them even better tools to manufacture hate that leads to helplessness.
Note the similarities to my argument: The problem we face isn't AI; it's human, and it preexisted AI. But "screwed" isn't the last word. Rands also talks about being blessed:
I think we're blessed. We live at a time when the tools we build can empower those who want to create. The barriers to creating have never been lower; all you need is a mindset. Curiosity. How does it work? Where did you come from? What does this mean? What rules does it follow? How does it fail? Who benefits most from this existing? Who benefits least? Why does it feel like magic? What is magic, anyway? It's an endless set of situationally dependent questions requiring dedicated focus and infectious curiosity.
We're both screwed and blessed. The important question, then, is how to use AI in ways that are constructive and creative, and how to disable its ability to manufacture hate, an ability recently demonstrated by xAI's Grok spouting about "white genocide." It starts with disabusing ourselves of the notion that AI is an apocalyptic technology. It is, ultimately, just another "normal" technology. The best way to disarm a monster is to realize that it isn't a monster, and that responsibility for the monster inevitably lies with a human, a human coming from a particular complex of beliefs and superstitions.
A necessary step in avoiding "screwed" is to act human. Tom Lehrer's song "The Folk Song Army" says, "We had all the good songs" in the war against Franco, one of the twentieth century's great losing causes. In 1969, during the struggle against the Vietnam War, we also had "all the good songs," but that struggle eventually succeeded in stopping the war. The protest music of the 1960s came about because of a particular historical moment in which the music industry wasn't in control; as Frank Zappa said, "These were cigar-chomping old guys who looked at the product that came and said, 'I don't know. Who knows what it is. Record it. Stick it out. If it sells, alright.'" The problem with contemporary music in 2025 is that the music industry is very much in control; to become successful, you have to be vetted, marketable, and fall within a limited range of tastes and opinions. But there are alternatives: Bandcamp may not be as good an alternative as it once was, but it is an alternative. Make music and share it. Use AI to help you make music. Let AI help you be creative; don't let it replace your creativity. One of the great cultural tragedies of the twentieth century was the professionalization of music. In the nineteenth century, you'd have been embarrassed not to be able to sing, and you'd have been likely to play an instrument. In the twenty-first, many people won't admit that they can sing, and instrumentalists are few. That's a problem we can address. By building spaces, online or otherwise, around our music, we can do an end run around the music industry, which has always been more about "industry" than "music." Music has always been a communal activity; it's time to rebuild those communities at human scale.
Is that just warmed-over 1970s thinking, Birkenstocks and granola and all that? Yes, but there's also some reality there. It doesn't reduce or mitigate the risks associated with AI, but it acknowledges some things that are important. AIs can't want to do anything, nor can they enjoy doing anything. They don't care whether they're playing Go or deciphering DNA. Humans can want to do things, and we can take joy in what we do. Remembering that will be increasingly important as the spaces we inhabit are increasingly shared with AI. Do what we do best, with the help of AI. AI is not going to go away, but we can make it play our tune.
Being human means building communities around what we do. We need to build new communities that are designed for human participation, communities in which we share the joy in the things we love to do. Is it possible to view YouTube as a tool that has enabled many people to share video and, in some cases, even to earn a living from it? And is it possible to view AI as a tool that has helped people to build their videos? I don't know, but I'm open to the idea. YouTube is subject to what Cory Doctorow calls enshittification, as is enshittification's poster child TikTok: They use AI to monetize attention and (in the case of TikTok) may have shared data with foreign governments. But it would be unwise to discount the creativity that has come about through YouTube. It would also be unwise to discount the number of people who are earning at least part of their living through YouTube. Can we make a similar argument about Substack, which allows writers to build communities around their work, inverting the paradigm that drove the twentieth-century news business by putting the reporter at the center rather than the institution? We don't yet know whether Substack's subscription model will enable it to resist the forces that have devalued other media; we'll find out in the coming years. We can certainly argue that services like Mastodon, a decentralized collection of federated services, are a new form of social media that can nurture communities at human scale. (Possibly also Bluesky, though right now Bluesky is only decentralized in theory.) Signal provides secure group messaging, if used properly, and it's easy to forget how important messaging has been to the development of social media. Anil Dash's call for an "Internet of Consent," in which humans get to choose how their data is used, is another step in the right direction.
In the long run, what's important won't be the applications. It will be "having the good songs." It will be creating the protocols that allow us to share those songs safely. We need to build and nurture our own gardens; we need to build new institutions at human scale more than we need to disrupt the existing walled gardens. AI can help with that building, if we let it. As Rands said, the barriers to creativity and curiosity have never been lower.
Footnotes
1. A study in Connecticut showed that, during traffic stops, members of nonprofiled groups were actually more likely to be carrying contraband (i.e., illegal drugs) than members of profiled groups.
2. Digital photo © Guilford Free Library.
3. Nicholas Carlini's "Machines of Ruthless Efficiency" makes a similar argument.
4. And we have no real guarantee that local farms are any more hygienic.