Ever since ChatGPT launched in the fall of 2022, it’s been impossible to scroll through a social media feed, to listen to a podcast, or to read anything in the news without getting blitzkrieged by declarations of generative AI’s indomitable ascendance. Entire industries and governments are scrambling to implement the hottest new technology. One recognizes in the urgency a sense of panic over the possibility of missing the bullet train of progress. “If you’re an artist, a teacher, a physician, a businessperson, a technical person,” former Google CEO Eric Schmidt warns, “if you’re not using this technology, you’re not going to be relevant compared to your peer groups and your competitors and the people who want to be successful. Adopt it, and adopt it fast.”
The tocsin call of technological paternalism insists that the dim-witted masses, namely, the rest of us, must have our technologies spoon-fed to us. AI skepticism implies our ignorance of the allegedly iron laws of teleological progress. We should all just believe whatever tech companies are telling us and prostrate ourselves before Hegel’s World Spirit. As Alphabet CEO Sundar Pichai reminds us, “Every generation worries that the new technology will change the lives of the next generation for the worse—and yet, it’s almost always the opposite.” Futurist Ray Kurzweil, most famous for prophesying that our consciousness will be uploaded to the cloud, like it or not, acknowledges that although AI can be abused “by superpowers that want to control people,” technological breakthroughs turn out much more positively than fear-mongers suggest. Software engineer-turned-venture capitalist Marc Andreessen, in his article “Why AI Will Save the World,” assures us in the same vein that we’ve seen all this before, with electric lighting, automobiles, radio, and the Internet. Ergo, the skeptics are “irrational,” merely captured by a “social contagion” of hysteria.
Behind this willed ignorance of complicated human consequences lies a utopian doctrine that technological solutions should be proffered for, well, everything, that the messiness of mere human affairs ought to be shoehorned into the domain of engineering. Mark Zuckerberg has been unironically floating the idea of deploying personal AI companions and therapists to stanch rising levels of loneliness. Oracle founder and multibillionaire Larry Ellison, whose views on AI surveillance wouldn’t be out of place in a Chinese Communist Party memo, claims dashcams, doorbell cams, public cameras, AI cameras, and AI-controlled drones should be deployed to stop school shootings, discourage police misconduct, and ensure citizens are “on their best behavior.”
The doctrine that social and political problems can and should be addressed by reducing them to engineering challenges is not exactly new. In To Save Everything, Click Here (2013), written during the early days of social media and app mania, Belarusian writer Evgeny Morozov dubbed this impulse “solutionism”—the ideological compulsion to recast complex social situations as “neatly defined problems with definite, computational solutions” or as “transparent or self-evident processes that can be easily optimized.” The doctrine is even older. In 1966 nuclear physicist Alvin Weinberg, in his article “Can Technology Replace Social Engineering?”, introduced the concept of “Quick Technological Fixes,” namely, solutions that solve “immensely complicated social issues” via technological engineering. One “does not wait around trying to change people’s minds” when “crisp and beautiful” solutions can circumvent the need to persuade people to modify their behavior through democratic discussion.
Techno-solutions provide the technologist with the means to fix a given social problem while bypassing the need to change the behavior of the pesky people affected by that problem. According to Weinberg, “the technologist is appalled by the difficulties faced by the social engineer,” who must persuade people to behave differently. Persuasion is a “long, hard business” because “people don’t behave rationally,” at least not in terms of technological rationality. The appeal is one of scale. As Weinberg writes, technological solutions involve “many fewer individual decisions.” This point was voiced by Peter Thiel, the billionaire venture capitalist and co-founder of PayPal, at a 2014 event hosted by The Baffler. Thiel, debating anarchist and anthropologist David Graeber on the topic of stagnation, commented on why he prefers engaging in projects through tech startups and tight-knit groups of investors rather than through social and political means. “The preference I have for startups rather than large movements,” he said, “is that you have to convince a much smaller group of people…that the future can look very different.”
There is a corollary: Solutions reached by technology deliver the rest of us from the burden of thinking about the implications of our behavior. Just as the technologist doesn’t have to worry about the “frustrating business” of “forcing people to behave more rationally,” we don’t have to change our behavior. As Weinberg writes, we don’t have to “forgo immediate personal gain or pleasure…in favor of longer term social gain.” When discussing the social problem of water scarcity, he underscores why he believes the social engineer’s solution—asking “people to behave more reasonably” and improve their attitudes toward the use of water—to be inferior: “Green lawns and clean cars and swimming pools are part of the good life, American style…and what right do we have to deny this luxury if there is some alternative to cutting down the water we use?”
But what if we persist in thinking about the implications of our freedom from thinking? We might conclude that we are being conditioned to accept a certain form of escapism, a delusional retreat from reality. We might conclude that we are being asked to become nihilists, at least if we agree with the philosopher Nolen Gertz, who calls nihilism “a way to evade reality.” Gertz explains in Technology and Nihilism (2024) that as we have become accustomed to relying on technologies to resolve all of our problems, we are beginning to cultivate a faith that they will always be there for us, that some demiurge will always take over our personal and social duties and remove bothersome frictions standing in the way of our desires. About the problems we see around us, this faith “motivates us to do nothing.” This is how, according to Gertz, we manage to ignore climate change. Most people “seem to be sitting comfortably indoors while leaving what’s happening outside their homes for governments and tech companies to sort out.”
The nihilism of our acquired will to do nothing, moreover, numbs us to the root causes of problems. Gertz points to productivity-enhancing lighting in offices. Fluorescent lighting, with its unnatural spectral composition, has been shown to disrupt circadian rhythms and suppress melatonin production, leading to fatigue and decreased cognitive performance in office workers. By swapping in natural lighting, businesses have increased employee alertness, productivity, and wellbeing. The nihilism operating here is that, in assuming that employee productivity, mood, and cognitive function are caused by lighting technology and not by something else, we forfeit the will to inquire into what else might be afflicting us. What if the root problem is that our jobs suck?
This same short-circuiting of critical thinking transpires in the rush to integrate AI into nearly every domain of life. Take the case of Zuckerberg’s AI friend and therapist project. Whatever positive arguments one might concoct for using Large Language Models for these purposes (perhaps they are equitable options for those of us who can’t afford human therapists), they inevitably crowd out any impulse to probe more fundamental issues, such as what social dislocations may be causing the so-called “loneliness epidemic” and why mental health services from humans aren’t already more accessible to us.
Whether it’s office lighting, generative AI psychotherapists, or something else, technology, by eliminating our need to face the root causes of our ills, engenders a nihilistic mode of living, a will to do nothing about our problems. It provides convenient mechanisms for us to avoid the difficult work of existing as moral agents who grapple with scourges for which real solutions may not be so simple, if they even exist.
This doesn’t mean technological solutions are inherently bad. What it does mean is that if we recognize the risks of nihilism engendered by overreliance on technological fixes, we will be motivated to make more conscious decisions about what risks we are willing to accept. We will be in greater control of our lives. Failure to claim our right to think, conversely, encourages digital subservience. As media critic Douglas Rushkoff writes in Program or Be Programmed (2010), “In the emerging, highly programmed landscape ahead, you will either create the software or you will be the software. It’s really that simple: Program, or be programmed.” That ship has sailed in social media, where users are actually the product offered up to advertisers, who pay the companies for their attention. Uber and Lyft drivers, meanwhile, help to train the algorithms that will be used by the very self-driving cars that will replace them.
The logic is no different with AI. Consider the “humans in the loop,” the millions currently toiling to ensure that AI models learn accurately and run smoothly, labeling and categorizing images, text, and other data. These human workers perform critical grunt work for massive tech companies, supplying the foundational “ground truth” on which the models are trained. Poorly paid, they underwrite the convenience we experience when using Large Language Models. Behind each dazzling text generation from Claude or ChatGPT is a human hunched over a laptop who made it possible. Whether humans should be used by machines, or whether it’s bad that revolutionary generative AI products require the digital version of a Dickensian factory worker, is a moral question requiring moral reasoning. Yet our will to engage our moral imaginations is precisely what technological solutions atrophy. Technical rationality is the efficient application of maximum means for optimal results, stripped of human meaning. The product, know-how, is an end in itself.
What’s coming is not merely digital slavery, but the planned obsolescence of the species through indifference as to whether we continue existing as humans.
A glimpse of this existential nihilism can be found in Ross Douthat’s recent New York Times interview with Peter Thiel, which covered technological advancement, stagnation, and even the Antichrist. At one point in the conversation, which concerned life extension, transhumanism, and the “creation of a successor species or some kind of merger of mind and machine,” Douthat asked Thiel whether he thought such efforts were merely hype. Thiel responded this way:
Douthat: Do you think that’s all irrelevant fantasy? Or do you think it’s just hype? Do you think people are raising money by pretending that we’re going to build a machine god? Is it hype? Is it delusion? Is it something you worry about?
Thiel: Um, yeah.
Douthat: I think you would prefer the human race to endure, right?
Thiel: Uh ——
Douthat: You’re hesitating.
Thiel: Yeah well, I —
Douthat: Yes?
Thiel: I don’t know, I — I would — I would, um —
Douthat: This is a long hesitation!
Thiel: There’s so many — there’s — there’s so many questions implicit in this.
Douthat: [emphatic] Should the human race survive?
Thiel: [pause] Uh…yes.
Douthat: OK.
Thiel: But — but uh — I would —
That such a softball question—Should the human race survive?—evoked such equivocation is no isolated eccentricity. AI is the latest phase in a long-running effort by the powers that be to turn (some) humans into things, because things don’t care if they continue existing. The solution? It’s easy. Think for yourself.