Elina Choi and Joshua Babcock

AI anxieties are widespread today. But how new are these concerns?
Generative AI technologies have come on the scene relatively recently, via large language models (LLMs) and associated chatbots like ChatGPT or image generators like DALL-E and Sora. However, the fears and narratives surrounding these technologies have deeper roots. While fear narratives might appear to point unequivocally toward the properties or effects of a feared object, this is only part of the picture. As scholars of modernity and psychosis have long known, fear narratives point back toward the narrator as much as—if not more than—the object or focus of anxiety.
To critically engage with and reflect on this anxiety, our blog post begins by revisiting the central thesis of Hofstadter’s classic 1964 essay, “The Paranoid Style in American Politics.” While Hofstadter located the paranoid style firmly within the McCarthy-era American right, we show that, where AI is concerned, the paranoid style cuts across outlets’ political leanings. Whether the outlet is Fox News or The New York Times, authors consistently narrate the impacts and functioning of AI in a profoundly paranoid way.
At a time when news sources are consistently categorized as “inherently different” poles of a partisan binary, we suggest instead that paranoia is a pervasive thread running through reflexively public, deeply mediatized discussions of AI. We ultimately advocate for an interactional approach to both AI and AI’s corresponding paranoid narratives, weaving in Eve Kosofsky Sedgwick’s meditations on paranoid versus reparative reading.
The Paranoid Style as American Cultural Genre
Hofstadter’s classic 1964 essay begins by discussing three newspaper articles from 1951, 1895, and 1855. Drawn from a “Texas newspaper article of 1855,” the final excerpt sounds the alarm about a supposed plot by the Catholic Church to “[bring about] our [i.e., America’s] destruction and…the extinction of our political, civil, and religious institutions” (78). Read alongside quotes from Senator McCarthy and the Populist Party Manifesto, the supposed Pope-led conspiracy seems less like a laughable, distant relic of outdated U.S. Protestant fears. Instead, the texts together reveal the continuities of paranoia as a “strategically-deployable shifter” (Urciuoli 2003) in American politics, a transposable “frame of mind” always in search of “a different villain,” whether that be Catholics, Masons, or generative AI.
We revisit the “Paranoid Style” to track its continued resonances in the present, but it is important to distinguish the terms “style,” “stance,” or “reading” from identities imagined as fixed, individual essences or Sorting Hat-style labels, as linguistic anthropologists and critical sociolinguists like Mary Bucholtz and Kira Hall have argued. We here find it instructive to turn to Eve Kosofsky Sedgwick. In her provocative introduction, “Paranoid Reading and Reparative Reading; or, You’re So Paranoid, You Probably Think This Introduction Is About You,” Sedgwick points out that “the paranoid” is not a pointed label for a specific political or personal demographic, “and certainly not [a] stable personality [type],” but a “changing and heterogeneous relational [stance]” that is fluid and open to change (128). Sedgwick further argues that due to the normalization of paranoid thinking “at every point in the political spectrum” (143), to hold “anything but a paranoid critical stance has come to seem naïve, pious, or complaisant” (126). Far from an exclusively right-wing or historically particular framework, in other words, paranoia has become broadly and deeply entrenched across seemingly “normal” (Hofstadter 1964, 77) political positions.
Media Reportage as Analytic Object
In this blog, we don’t attempt to argue about or explain the technical workings of AI. Instead, we explore the paranoid social stances surrounding AI.
To do this, we input a selection of articles from both Fox News and The New York Times into ChatGPT and asked: what is the author of this article afraid of? This move has, of course, become a well-worn, often unmotivated trope in commentary on generative AI. However, we employ this technique neither to treat the chatbot as an unproblematic analytic tool nor to demonstrate that it is designed using questionable underlying assumptions about fear, subjectivity, and interiority—a point a critical observer is prepared to accept even without the analysis. Rather, we started here to open up the possibility of something genuinely new, unexpected, or surprising—for Sedgwick, a defining characteristic of reparative reading—in the encounter among us (the researchers), the written texts, and the AI language model.
And something new did, in fact, emerge for us. After querying the articles individually, we pasted each of the AI-generated paragraphs back into ChatGPT and asked for some repeated keywords. ChatGPT responded with: artificial intelligence (AI), facial recognition, political orientation, societal implications, discrimination, privacy, democracy, control, surveillance, regulation, existential threat, unpredictability, and human intelligence. When asked for the most highly repeated adjectives, ChatGPT identified: deep, truly, really, genuinely, potentially, uncontrolled, existential, responsible, profound, widespread, unforeseen, dangerous, and significant.
When asked “What are the author’s emotions behind the articles I pasted today?” ChatGPT responded:
The emotions behind the articles you provided today range from genuine concern and fear to apprehension and worry. The authors express a deep sense of unease about the potential consequences of various technological advancements, particularly in the realm of artificial intelligence. They convey feelings of dread regarding the erosion of privacy, threats to democracy, societal upheaval, and the loss of human relevance in the face of advancing technology. Overall, there’s a palpable sense of alarm and urgency in addressing the risks associated with these developments.
The term we emphasized above—erosion—was not one that had immediately stood out to us. Seeing it here, we knew we had to probe further.

Image by Pavel Danyuk for Pixabay
Narrating Moral Panic and Existential Threat
As we turned our critical attention both to ChatGPT’s responses and to the articles themselves, themes of erosion, especially “erosion of democracy” and “erosion of authentic human connections,” were indeed salient. As George Lakoff and Mark Johnson argued long ago in Metaphors We Live By, metaphors take on a life of their own by smuggling in additional layers beyond what’s featured, denotationally, on the surface. As a description of a natural process, “erosion” carries a sense of inevitability and irreversibility that terms like “disregarding” or “attacking” don’t.
Despite this supposed naturalness and inevitability, however, “erosion” is a cause for alarm in mediatized discourses because the technology said to cause it is supposedly new. It almost goes without saying that AI is not exceptional in this regard, but is just one point in a repeated pattern of moral panic over the supposed “erosion” of society and democracy brought about by the latest new technology (Thurlow 2006), from video games to the internet to the invention of writing. Even not-so-new technologies like smartphones recur as objects of paranoid reflection on “erosion”: as the Family section of The New York Times warned in 2023, “Ignoring a partner in favor of your phone, or ‘phubbing,’ can lead to feelings of distrust and ostracism. Here’s how to stop.”
Anxieties focused on the experience of being “existentially threatened” and getting “outsmarted” by a nebulous, machinic enemy further rehearse long-standing fears surrounding human relevance and agency. In line with Hofstadter’s argument that “what is at stake [in paranoid thinking] is always a conflict between absolute good and absolute evil” (82), the paranoid notions of “existential threat” and the potential to be “outsmarted” by AI take on a moral dimension. The human vs. technology trope is neither new nor unique to journalistic, mediatized discourses, of course. A 2021 children’s movie called The Mitchells vs. the Machines—where “A quirky, dysfunctional family’s road trip is upended when they find themselves in the middle of the robot apocalypse and suddenly become humanity’s unlikeliest last hope” (from IMDb)—brings to life the paranoid’s greatest fear: an “uncontrollable” AI monster disrupting our otherwise “controllable” and “controlled” human society.
Sometimes, the moral binary within the paranoid style of AI discourses places fears of human interference at the heart of the matter. In addition to discourses about AI’s highly calculated behaviors, there are also claims that bad individuals and foreign nation-states are the underlying sources of “threat,” and not, for instance, AI companies.
As we hope is clear, AI is not different in kind from other new technologies, even if it is commonly described as wholly unprecedented in human history. New technologies, in other words, have always shocked the current environment and challenged the perceived boundaries of reality. Photoshop altered the previous notion of photos as pristinely accurate snapshots of time (notwithstanding the manual/analog photo-manipulation techniques that have existed almost as long as photography), and the invention of the camera permanently changed the concepts of memory, truth-to-nature, and objectivity. We here return to Sedgwick, who argues that “Paranoia is anticipatory” (130), “blotting out any sense of the possibility of alternative ways of understanding or things to understand” (131). This has the potential to block the goals of positive change (136). Importantly, however, the blockage is never total. Reparative reading is always still possible.
Conclusion
While Sedgwick had a lot to say about paranoid reading, she was admittedly sparse when it came to its alternative: reparative reading. Yet to expand on what we foreshadowed earlier, she does spell out the crucial features of reparative reading in this passage:
“[T]o read from a reparative position is to surrender the knowing, anxious paranoid determination that no horror, however apparently unthinkable, shall ever come to the reader as new; to a reparatively positioned reader, it can seem realistic and necessary to experience surprise. Because there can be terrible surprises…there can also be good ones. Hope, often a fracturing, even a traumatic thing to experience, is among the energies by which the reparatively positioned reader tries to organize the fragments and part-objects she encounters or creates. Because the reader has room to realize that the future may be different from the present, it is also possible for her to entertain such profoundly painful, profoundly relieving, ethically crucial possibilities as that the past, in turn, could have happened differently from the way it actually did.”
It is in this spirit that we suggest novelty, surprise, letting go of totalizing desires, and indeed, allowing ourselves to hope. In doing so, we aim to open this space of hope up for others, too, if only in a small way.
Of course, we don’t mean to suggest that there’s no harm being done with AI. Long before ChatGPT became an object of elite white anxiety—with fears surrounding AI as a harbinger of “human extinction” bearing a strong resemblance to fears about “white replacement” voiced by some groups on the right—Black women and women of color like Timnit Gebru, Rumman Chowdhury, Safiya Noble, Seeta Peña Gangadharan, and Joy Buolamwini were rigorously studying and calling attention to the violence being perpetrated against intersectionally marginalized groups through digital technologies. Workers in postcolonial Anglophone locations like Kenya are paid $2 per day to consume and tag violent, disturbing, and traumatizing content so users at the core of the Angloscene don’t have to. Visual artists and musicians discovered in 2023 that their works were used en masse to train generative AIs without their consent, prompting class-action lawsuits that remain unresolved. Google’s new AI Overviews feature kept telling people to eat rocks and put glue on their pizza to keep the cheese from sliding off.
While the paranoid style embodies a generalized proposition that “things are bad and getting worse”—a proposition that Sedgwick points out is “immune to refutation”—it fails to offer any specific predictive value or direction for oppositional strategy (142). Of course, “paranoid people or movements can perceive true things” (142). Yet this is where reparative reading comes into play. By locating the effects of AI firmly within interactions that always involve human agents, we can better keep in view the role played by fear narratives that get projected onto new technologies by individuals and groups. We can also work toward genuine accountability for AI companies—and the actually-existing people who manage and profit from them—rather than continuing the paranoid acceptance that the nameless, faceless “Amoral Superman” (Hofstadter 1964, 85) of AI is real in exactly the ways that anxious discourses claim.
About the Authors
Elina Choi is a junior at Brown University studying English and Entrepreneurship. Interested in the intersections between business, written communication, and AI, she began researching the linguistic implications of AI through Brown University’s UTRA award and has since conducted culinary economics and IP law research for Liberation Cuisine. Elina is from Seoul, South Korea.
Joshua Babcock is an assistant professor of Anthropology at Brown University. His current book project explores how technology, language, and race co-naturalize one another across scales and shape the conditions of possibility for belonging to the image of Singapore. In his other work, he studies the Singapore Sling, U.S. school board politics (with Ilana Gershon), and a ghost town called Singapore, Michigan. Josh is also the Communications Director for the General Anthropology Division of the American Anthropological Association.
