
https://www.dukeupress.edu/in-the-land-of-the-unreal
Stefan Helmreich: Your first book, Placing Outer Space, not only asked how place was imagined on and for such off-Earth entities as exoplanets, but also placed such imaginations within the (mostly) American cultural contexts within which they emerged. In In the Land of the Unreal, you again place a technoscientific imagination with respect to a social address. This time the technoscientific object is virtual reality and the place is Los Angeles. Can you get us up to speed on what difference LA makes to the VR that is created there?
Lisa Messeri: Thanks, Stefan. That comparison between the two books is a great place to start. As you point out, in both books I’m interested in the relationship between place and technoscientific endeavors. In the first, I catalogued how scientific elites made place in the cosmos. In this one, I flipped figure and ground and was interested in how place – in the sense of geographic location – shaped technological work. Ethnographically, it was quickly apparent that conversations and development around VR specifically, and tech more generally, in LA felt different from my prior experiences in Silicon Valley (and from spending a decade immersed in MIT’s tech culture). In the book, I therefore attempt to tease out LA’s technological terroir: the features of local geography, history, and expertise that cultivate a different sensibility around tech. Hollywood’s impact on the political economy of LA is of course a driving factor, but so is the region’s longer aerospace and military history, which, as I came to understand, has long been entangled with the entertainment industry. Alongside these different institutional configurations for tech development is the simple fact of geographic removal, which gave LA’s VR scene space to be something slightly different from San Francisco’s Silicon Valley–dominated activities. To be clear, the differences I attribute to place’s influence on tech development and conversations are subtle. They slowly became apparent over a year of ethnographic research. On the surface, one could find many connections between VR as it existed in LA and other globally situated hot spots. But teasing out these subtle differences became essential for understanding how particular claims about VR – for example, that it could supposedly be an empathy machine – came to hold power, both in LA and beyond.
Stefan Helmreich: Anthropologists have taken an interest in virtual realities for a spell now. I think of early speculations in 1990s cyborg anthropology. And then I turn to Tom Boellstorff’s 2008 Coming of Age in Second Life: An Anthropologist Explores the Virtually Human and Thomas Malaby’s 2009 Making Virtual Worlds: Linden Lab and Second Life — which took somewhat opposed approaches, with Boellstorff doing his fieldwork “in” Second Life and Malaby looking at the physical workplace of Linden Lab. I wonder if you might say something about how you think about the relationship — or, even, difference? — between ethnography in virtual reality and ethnography about virtual reality.
Lisa Messeri: When I first began this project (I can offer my NSF proposal as proof!), I imagined I’d be studying the institutions that develop VR, following Malaby, and the sociality of the virtual, following Boellstorff. While such a “mixed reality” project is possible and admirable, it quickly became clear that the two approaches require distinct methods and do not necessarily have intuitive points of connection. In the end, I conducted all of my fieldwork IRL, and so even though this is an ethnography about virtual reality (and, to some extent, aims at theorizing the virtual), it is not a virtual/digital ethnography. This was partly because the VR experiences whose creation I was documenting were largely cinematic VR rather than social VR, meaning they were experienced individually and were not part of a persistent, inhabited virtual world. While I did a lot of VR during my fieldwork, the sociality I was studying as an anthropologist all occurred outside of the headset.
This distinction is really important, as Boellstorff points out in a recent article, “Toward Anthropologies of the Metaverse.” So maybe the metaverse has already peaked and fallen, but the point he is making is that the virtual (be it Second Life or Meta’s Metaverse) is not necessarily something that is only experienceable in virtual reality. When the two are conflated, the field that is taken to be the virtual or the metaverse is prematurely constricted. I agree with this, but the article limits anthropologies of the metaverse (perhaps we will update this to include anthropologies of spatial computing, in light of Apple’s Vision Pro) to studies of virtual sociality, whether in headset or not. Here I would interject and suggest that anthropology’s potential is to create an anthropology of the virtual/metaverse/spatial computing that capaciously includes both ethnography in the virtual and ethnographies about the virtual. Indeed, the conflation Boellstorff points out between the virtual and virtual reality is ethnographically interesting! How might we understand Apple’s and Meta’s insistence that the future they are promising comes in the form of a headset? And how do we understand the communities that form around the promises of such futures (whether those promises are made in good faith or simply to make a quick buck)? We need to study both the makers of technology and the users of technology. Even if this is not a single project for one investigator, the two are necessarily complementary, as Boellstorff’s and Malaby’s initial work on Second Life demonstrated.
Stefan Helmreich: Your book is keen to look at the work of women innovators in VR, especially in the immediate (and at the time very encouraging) aftermath of #MeToo. You encountered some women who made claims that their work might generate more compassionate technological development — claims that you usefully complicate by directing the reader to feminist work on the multiple and not always straightforward politics that arise any time notions of care are invoked. Can you tell us what it was like to be in conversation with some of these innovators — in ways that both heard them out and offered your own feminist STS expertise to the discussion?
Lisa Messeri: My biggest concern at the start of this project was that I knew I would be in conversation with people whose VR projects I might not fully be behind. After all, the impetus for this project was trying to understand how a community comes to believe that their technology can make the world a better place. Given ALL the studies we have about how well-meaning technologists (and technologies) often … do the opposite, I was very aware of my positionality. Therefore, going into the field, my strategy was that for those who would let me be a participant observer, I would take the participation seriously. I was not going to sit back with my notebook and document practices that (unintentionally!) inscribed problematic politics into VR experiences, but I was going to participate – I was going to offer my feminist STS lens as a resource for these teams. A small example was when I was working on a VR experience about a mission to Mars, I was asked to read a preliminary script. It was riddled with references to “colonizing” Mars. I suggested we find other language, noting how that loaded metaphor presupposes certain social (including human-nonhuman) relations. We rewrote the script and that conversation led to a slightly different ethos behind the fictional Mars world that continued to be built out. Anyway, that was an easy enactment of participation.
There were harder situations where projects were admirable – and too far along in their development to change – but I could see potential pitfalls. I still wanted to document these cases, as it was important that I hear the creators out (as you say) and really try to understand the well-meaning intention behind such projects. As my year of fieldwork progressed, and as these relationships became more trusting, I would be able to discuss some of my concerns and never was there a case where these concerns were rejected. And most of the time, these concerns weren’t even a surprise but ones that the innovators themselves had been privately puzzling over.
When it came to writing, I employed several strategies to layer in the critique. Sometimes, my interlocutors would open the door to critique with their own observations. Other times, I made the object of critique not individuals or even projects, but the structures and situations that make potentially harmful VR experiences seem potentially helpful.
Stefan Helmreich: “Unreal” — Can you talk about what this word/concept means from the point of view of your interlocutors? How and where do their uses of the term resonate — and not — with your use of the idea?
Lisa Messeri: The unreal got stuck in my brain really early in fieldwork. I had gone on a studio tour at Paramount and our guide played a clip from the 1961 Jerry Lewis movie The Errand Boy, which began with an aerial shot of Los Angeles that slowly zoomed in on the Paramount studio. A voiceover narrated, “This is Hollywood. Land of the real and the unreal.” This was in my mind as I began to better understand LA as a city and VR as a technology. The unreal would pop up in weird places. Usually it was a colloquialism, exclaiming that a really cool VR experience was “unreal.” But after fieldwork, I also came across a 2016 marketing report that was tracing the trend of “unrealities.” This trend included escape rooms and astrology and Snapchat filters and meditation retreats and, of course, virtual reality. These things are all appealing not exclusively because of the fantasy they offer, but because that fantasy is experienced in dialogue with a reality that is being pushed against. In the book, I define the unreal as that which “holds in tension an extraordinary rendering of reality with what might be thought of as an everyday reality.” There have always been multiple realities, but the unreal marks moments when such multiplicity demands attention. So, saying a VR experience is “unreal” expresses the delight of knowing your body was in the physical world while having an experience that is deeply at odds with those surroundings.
As I was conducting fieldwork in 2018 – right in the middle of the Trump presidency and its explosion of alternative facts – I marked US politics as also unreal, insofar as many liberals struggled to reconcile Trump’s “extraordinary rendering of reality” with how they understood reality. This political reading comes from an intellectual genealogy of theorizing US politics from LA, from Baudrillard and Eco’s hyperreal to Soja’s real-and-imagined thirdspace. To these 20th century theories, I add the 21st century twist of the unreal.
Stefan Helmreich: You write in the book about VR boosters as sometimes eager to pitch their projects as in the service of empathy. And you point us to the fact that the register of empathy can be a way of avoiding questions to do with institutions, the distribution of resources, politics — with things beyond the scale of the sheerly well-meaning individual. The silicon panic of our time is to do with generative AI — and some thinkers, like Sherry Turkle, have kept their eyes on the rise of tools promising artificial intimacy, thinking here about therapy chatbots that promise artificial empathy. What, if anything, do you think recent AI development has done for/to the promises of VR? Is empathy still important? Or are other terms of conversation now surfacing?
Lisa Messeri: When I was doing fieldwork, VR was in frequent conversation with blockchain and AI as a triumvirate that would usher in the future. I mention this to mark that they are part of the same ecology – and draw on many of the same institutions, resources, people, and so on. So in general, much of what I write about in my book is a primer for today’s genAI moment. And the persistence of empathy as a category is a frightening reminder! An October 7, 2023 headline from the Wall Street Journal asked “Can AI do Empathy Even Better than Humans?”
But all the thinking that has been done about VR and empathy gives us a head start on how to think about AI and empathy. Turkle was one of the first thinkers I looked to when getting at the empathy angle. In a chapter in which she responds directly to claims that VR is an empathy machine, she worries that “the feeling of conversation becomes conversation enough.” Denny Proffitt, a psychologist at UVA who provided me with my first exposure to VR, observed to me sometime around 2015 that VR empathy experiences were potentially dangerous because they could induce a feeling of false catharsis. In caring deeply, do you forget to actually act in a way that remedies the problem? Or, as Nakamura has put it, what is the morality of “feeling good about feeling bad”?
In figuring out how to think about empathy – be it AI- or VR-induced – I have been guided by Atanasoski and Vora’s Surrogate Humanity. They show how technologies that seek to replace or conceal human labor very often replicate dehumanizing logics of race, gender, and colonialism. So, yes, we need to be incredibly wary of VR that promises instant empathy or AI that does empathy better than a human. But one of the case studies in my book suggests that replacing or concealing human labor isn’t the only strategy for deploying VR (or possibly AI). If these technologies are instead used to augment human labor, perhaps there are less destructive applications of these tools. In other words, I don’t think there is something inherently bad about attempts to leverage technologies in an effort to help people and situations become better. In fact, I think such pursuits, done genuinely, are admirable! However, social problems will never have exclusively technological solutions, and therefore thinking of solutions in which technologies augment – rather than replace – human labor and sociality seems to be a plausible way forward.
And that will be the hopeful note with which I end this Q&A!
