Robert W. Gehl and Sean T. Lawson on Social Engineering

Interview by Emma Briant

https://mitpress.mit.edu/books/social-engineering

Emma Briant: Cambridge Analytica often pitched itself as a contemporary digital marketing firm – if you were seated next to Alexander Nix at a dinner party, how would you convince him your book explains what’s really going on?

Robert W. Gehl: First of all, should we even trust that the dinner party is actually not a cover for some nefarious Nix scheme? Wasn’t Nix famous for such tricks? I’d be worried about what Nix is trying to convince me of! And given his duplicity – such as his claim to a German audience in 2017 that Cambridge Analytica got their data through legitimate means – is it even possible to convince him of anything?

Sean T. Lawson: I think I’d start by agreeing with the general premise: Cambridge Analytica was a contemporary digital marketing firm, not unlike many others. And therein lies the issue. The unique combination of big data, machine learning, and microtargeted messages that CA and so many others are adopting is an innovation, but one with potentially negative impacts.

Robert W. Gehl: Agreed. I think I would point to your work, Emma, among others – whistleblowers such as Kaiser and Wylie, journalists such as Cadwalladr. Based on that work, we can say we already know what’s going on. What we need to do is further theorize it and look for points of pressure to fix the problems of manipulative communication. I think that’s the approach that all of us – you, us, people like Joan Donovan – are taking now.

Sean T. Lawson: I also don’t know that we’d claim that Social Engineering accounts for the entirety of what’s “really going on.” What we’re doing is offering another way of thinking about what’s going on, one that we hope helps to connect the disparate contemporary pieces – like CA, but also Russian interference operations – with one another and with a broader, historical context.

Emma Briant: You pose the question of how “crowdmasters, phreaks, hackers and trolls created” a new form of manipulative communication. But how new is it really? You trace a long history, so what is new and distinct in what you see today?

Sean T. Lawson: Trying to sort out what is new and unique in the contemporary moment was one of our main goals for the book. As you note, many of the practices we see today have been part of public relations, propaganda, marketing, or hacking for a long time. Part of our frustration was that so much of the current discourse treats what’s happening with Cambridge Analytica or the Russians as unprecedented. It’s not. But it’s also not just “the same old thing,” either. So, we think what’s new here is, first, the unique combination of new methods of data gathering, analysis, and targeting that aim to give social engineers societal-level impacts by engaging people at the individual or small-group level. Second, we think the ability to shift fluidly and quickly between the interpersonal and mass forms of social engineering in a given campaign is also an innovation.

Robert W. Gehl: Right – that fusion of interpersonal con artistry techniques with mass societal engineering desires is what’s new. That’s what we mean by “masspersonal social engineering.” The concept draws on recent “masspersonal communication” theory, which suggests that the old mass/interpersonal divide means far less in the digital age. When tweeting @ someone can also be a public performance in front of thousands, and when an interpersonal interaction can be recorded and posted online, it’s harder and harder to draw the line between interpersonal and mass communication. Likewise, it’s hard to draw the line between interpersonal con artistry and mass propaganda.

Emma Briant: Scholars use a lot of different terms to describe the practices your book aims to help us understand. You avoid terms like contemporary propaganda or influence and instead discuss manipulative communication. Can you explain your choices?

Robert W. Gehl: We’re reacting somewhat to the conceptual confusion happening right now. For example, Benkler, Faris, and Roberts’s excellent book Network Propaganda does a fine job tracing how disinformation flows through right-wing media, but one thing they do is specifically bracket off interpersonal con artistry as not-propaganda. This makes sense given the history of propaganda, but less sense when we start to think about how manipulation might be highly targeted at individuals at one moment and then scaled up to a population level the next.

Sean T. Lawson: We chose “social engineering” over those other terms for a couple of reasons. First, we wanted one term to cover the combination of both the interpersonal and mass forms of manipulation that we were seeing come together. A few other scholars and security researchers had floated the term “social engineering,” which we thought was clever. But the more we talked about it, the more we became convinced that it wasn’t just a clever one-off: there was really something to that term, which not only described both aspects of what we were seeing but did so in a new and interesting way. Second, we also felt that propaganda and influence did not fully capture the active, malicious manipulation we were trying to describe. Propaganda, to us, implies spreading biased information promoting one’s own side. Influence, though it can be malicious, is ubiquitous and mostly innocuous. We think that social engineering better captures the active attempt at using communication to manipulate a target in a malicious way.

Robert W. Gehl: As for other terms, etymologically, “influence” comes from astrology, meaning a fluid coming from the stars that shapes our lives. I personally prefer “manipulation” – coming from the late Latin manipulare, leading by the hand. I think of it as a more human-centric and less metaphysical capacity to shape environments. This links up well with “social engineering” – engineering comes from gin, a trap, net, or snare (including verbal traps and snares) and reflects the desire to practically apply knowledge to shape a situation.

Emma Briant: Your approach bringing together histories of hacking and deception is truly original. Was there a Eureka moment when you felt this idea coming together? Can you explain how the idea emerged from your respective work and unique perspectives coming together?

Sean T. Lawson: At the time, we were both working at the University of Utah and talked about our respective projects regularly. Rob had just published a book about the dark web and was well into a new project originally focused just on the history of hacker social engineering. I was just finishing up a book on cybersecurity discourse in the United States. At the end of that book, I argued that political warfare, propaganda, and disinformation were more the reality of cyber conflict than the as-yet hypothetical “cyber Pearl Harbor” infrastructure attacks that get so much attention. But the attempt to target both precisely and en masse seemed new, and my initial efforts to explain it using analogies to the so-called “precision revolution” and airpower in the U.S. military felt unsatisfactory.

Robert W. Gehl: For me, I have long been intrigued by the concept of engineering – as I mentioned above, it contains the term gin (snare, trap). I was initially thinking of looking at the genealogy of software engineering – I had been collecting material on that since my first book. But in the course of writing Weaving the Dark Web, I came across people in hacker forums talking about “social engineering” and was hooked. I had thought of social engineering as a pejorative term for government programs, not as a fancy way of saying con artistry. So I dug into hacker social engineering. My initial plan was to write about hacker social engineering only, but in conversations with Sean, he convinced me that the more important move would be to connect the older social engineering of the early twentieth century with the hacker conception. In turn, I convinced him to join the project and bring his cyberwar discourse expertise to bear on it.

Sean T. Lawson: So, really through ongoing discussion of our respective projects at the time, we came to realize that there was overlap between hacking and propaganda techniques. As we looked around, we didn’t see anyone else taking that approach, so we decided to see what would happen if we did. We think that the result is a valuable new way of thinking about the relationships between these practices.

Emma Briant: How optimistic are you about our future in this age of masspersonal social engineering? Can we escape its grasp long enough to hack the system and build something better?

Robert W. Gehl: I love this question, because my current research addresses it head-on. I’ve long been an advocate of quitting corporate social media, since its whole purpose is to have us produce ourselves as consumers through profiling and then deliver our profiled selves to advertisers. Facebook/Meta makes for a fine vehicle for masspersonal social engineering! But instead of advocating quitting social media – since it does give people a great deal of pleasure – I’ve been studying ethical alternatives, like Mastodon. If we’re looking for people who are hacking the system, look to the people coding Mastodon and the rest of the fediverse. You’ll note rather quickly that targeted advertising is not part of that system! And that helps make it less attractive to the sort of manipulations we’re talking about.

Sean T. Lawson: We can hack the system to make something better. We talk about options for that at the end of the book. However, I’m not optimistic that we will. That is what is so frustrating about this situation. Sensible privacy and data collection regulations would go a long way toward thwarting the use of big data in masspersonal social engineering. Addressing the problem of dark money and front groups in politics is also essential. Doing more to shore up our cybersecurity, making it harder to penetrate systems and steal information that can subsequently be weaponized, is also possible and essential. Unfortunately, at the moment, we’re not seeing nearly enough progress in any of these areas. Too many actors – from corporations to social media platforms to marketing firms to the politicians who rely on them – have an interest in continuing to allow masspersonal social engineering to take place.

Robert W. Gehl: We joke that Sean is Eeyore. But I am afraid he’s right. I’m looking at home-grown, ethical FOSS solutions, but they are not going to do the job on their own. Global regulation of what you, Emma, aptly call the “influence industry” has to happen.

Noelle Molé Liston on her book, The Truth Society

Interview by Jonah Rubin

https://www.cornellpress.cornell.edu/book/9781501750793/the-truth-society/

Jonah Rubin: So much has been written about what media literacy scholars have termed our “information disorder” (Wardle and Derakhshan 2017). There is a general recognition that mis- and disinformation is rampant, threatening the shared epistemological foundations of many political communities. In The Truth Society, though, you don’t only focus on those who believe in conspiracy theories and misinformation. You see them as intimately connected with those who are “hyperinvested” in real science and reliable facts. Even in your opening vignette you focus not on the purveyors of untruths, but rather on the “soldiers of rationality” breaking mirrors in the public square as an anti-superstition performance. Why is it important (methodologically, theoretically, or both) for us to focus on what you call this “hyperrationalization,” an almost desperate pursuit of science, empiricism, and skepticism as well? Put otherwise, what do we miss when we exclusively train our analytic gaze on those who believe in false information?

Noelle Molé Liston:  

Danger #1: We fail to fully recognize the implications of the onslaught of information and the algorithmic underpinnings of digital consumption. The explosion of information has meant both a democratization of access, at least in a basic sense, and the intensified customization and subsequent siloing of information. Yet with so much information, what eroded was a basic sense of source appraisal, especially as industries mastered pseudoscientific ways of defending oil and gas and the various “Big” industries like Big Pharma, Big Sugar, and Big Food. Scientific information thus also became a site of “fake news,” where, under the guise of science, social actors might find so-called scientific studies to support the absence of climate change or the incredible health benefits of sugar. Social actors who were already habituated to see science as true, rational, and reasonable were being seduced with paid-for, sponsored “fake” science, which, I would argue, worked because of this enchantment with good science, a kind of facile belief that scientific rationality could seek and produce the truth. Without interrogating or becoming skeptical about why knowledge under the label of “science” can be either manipulated or outright fabricated, we fall prey to reifying – or even deifying – science itself.

Danger #2: We become “anesthetized” to our own embodied or phenomenological ways of knowing the world and, possibly, to our own expertise. In my exploration of the 2009 earthquake in L’Aquila, one interlocutor was an architect with deep knowledge of both the territory and engineering. Yet she ignored her own sense of danger when she was reassured by scientists and fellow engineers, likening it to “numbing” the other ways of knowing she had typically relied upon. Why does this happen? Immense pressure was on the public to trust science and quell the existing pre-earthquake panic, not unlike the way the American public has been scolded to “trust the science” during the pandemic. In both instances, maintaining social order relies on a continual investment in science as simple truth, so we have to interrogate how our own trust in science might be part of a mode of control and surveillance. It’s also become quite tricky on the left, as we want to allow space for healthy scientific skepticism but not indulge pseudoscience or conspiracies. If we fail to examine our own tendencies to exalt science as “big-T” True, then we risk reproducing the very regime of hyperinvestment in science.

Jonah Rubin: One major theme of the text is the ways that new media technologies – television news, the internet, algorithms – reshape the possibilities for politics. At times, it seems like these technologies open up new possibilities for political participation and populism. Other times, it seems like the technologies become new sites for the political imagination, such as the televised presentation of a golden tapir award or the infozombies. Could you elaborate a bit on how you see the relationship between new media and politics playing out?

Noelle Molé Liston: The book ends with the metaphor of the mirrored window world, in which media consumers believe they are accessing information openly and outwardly but are instead fed only regurgitated, algorithmic versions of information already catered to their likes, dislikes, and political leanings. In some ways, it seems important that the originality and bite of Italy’s “Striscia La Notizia” (The News is Spreading) satirical news show and Beppe Grillo’s blog were not data-driven media; that is, there was no predetermined customization or data-mining precipitating their creation, beyond the usual television ratings or blog views. But both were important political interventions: the former critiqued then Prime Minister Silvio Berlusconi, the latter built the grassroots political movement that became the Five Star Movement. Both were also art forms insofar as they were driven by the humor and creativity of political actors, navigating, on the one hand, a climate of television censorship by the very man they were poking fun at (Berlusconi), and, on the other, engaging in-person political action vis-à-vis online content. By contrast, more algorithmic media, such as the Five Star Movement’s web portal known as Rousseau, might mine member data and ideas in ways unknown to users. Perhaps not surprisingly, Luigi Di Maio, one of Five Star’s leaders who served as Co-Deputy Prime Minister, was accused of being a kind of android: chosen and styled according to user preferences from Rousseau. Put differently, the algorithmic regimes of new media fundamentally operate in ways that are buried and opaque to most users, which, in turn, makes it harder to discern their effects. More recently, Italy has also seen a quite intense anti-vax movement during the pandemic, likely the product of social media algorithms that funnel users into their own individualized information silos and boost posts for “engagement,” not veracity. The outside world can look so deceptively beautiful even though the forest is just our own faces. Twenty-first century idiom: you can’t see the faces through the forest?

Jonah Rubin: To me personally, one of the most productively challenging aspects of your book is your analysis of political humor. Many of us see political humor as inherently subversive, a way of mocking and therefore undermining the powerful. But in your book, especially in chapters 1 and 3, you suggest that humor can also breed a kind of political cynicism and even complicity that may reinforce hegemonic power structures. Could you tell us a bit more about your approach to the politics of humor? How should anthropologists and others study the politics of jokes, both the kinds we find funny and the kinds that may horrify us?

Noelle Molé Liston: It is true that I tend to adopt a cynical approach to political humor: in my recent analysis of viral videos of Italian mayors funnily admonishing citizens to mask up, I argued that the videos got laughs because they represented the “thwarted displacement of patriarchy and failed attempts, at every level of governance, to erect robust authoritarian control.” Part of this analytic, of course, is aligned with anthropological investigations of humor and satire as unintentionally bolstering entrenched power structures. It’s hard not to be so cynical when Berlusconi himself, who, I’ve argued, won over Italians with his brash humor as a seemingly genuine presentation of an otherwise highly artificial political masquerader, almost had a comeback in his 2022 run for President of the Republic. (He ended up dropping out of the race.) I remain concerned that “strong men” like Berlusconi and Trump use humor to manipulate, garner support, and defuse criticism, destabilizing the ability of political humorists to have, as it were, the last laugh. How can political humor outsmart the absurdist clown? How will algorithmically filtered media consumption (re)shape the humor of consumers and voters?

One strategy might be the satire of unlikely embodiment. One of Berlusconi’s best impersonators is Sabina Guzzanti, a woman 25 years Berlusconi’s junior who dons wigs and prosthetics to perfect her impression. In one clip, she refers to herself as a woman while in costume as Berlusconi, decrying that “her” failed bid for President was sexist. She is also one of his fiercest critics, especially in her account of his post-earthquake crisis management in L’Aquila. In the US, Sarah Cooper went viral for videos in which she lip-synched things Trump said without physically altering her appearance, so it was his voice, but her mouth, face, and body. In both cases, the performances do different kinds of political and symbolic work because of the surprisingly mismatched genders and bodies, which produces a tension, long characteristic of great humor, between laughter and discomfort. There is something powerful and effective about the dissonance of having women, and in Cooper’s case a woman of color, speak the words of these misogynists, something that straight jokes and impressions (often by straight men) cannot achieve.

Jonah Rubin: In Chapters 5 and 6, you look at the tensions between scientific prediction and public governance, particularly in the Anthropocene. Here, you focus on the trial of a group of scientists who provide reassurances to the residents of L’Aquila, urging them to stay home prior to what ultimately turns out to be a deadly earthquake. At the time, the international press overwhelmingly presented this story as an irrational attack on science. But you see it as a more profound commentary on the changing role of science in the public sphere. How do shifts in our experience of the climate and in our media landscapes affect the relationship between science, politics, and law?

Noelle Molé Liston: Indeed, I argue that the idea that Italians gullibly believed scientists could predict earthquakes helped export the narrative of the trial as anti-science. However, the trial was more about the form of the press conference, which positioned authoritative scientific reassurances so as to quell the rising panic in L’Aquila. As we confront more acute environmental crises, there will be greater pressure on scientists to predict and manage risk, ascertain damage, manage crises, and weigh in on policy, as we’ve seen during the pandemic. But we don’t yet have a legal framework that might hold non-scientific or even scientific failures accountable. In L’Aquila, the actual non-scientific claim was the scientists’ reassurance that no big earthquake would come and that people should not evacuate. The Italian judiciary tried to make a connection between an authoritative source’s public utterances and the fatal consequence of this misinformation: the listeners’ deaths. By contrast, Trump alone issued massive amounts of misinformation about the dangers of Covid, but it would be entirely inconceivable, I think, for our judiciary to hold him accountable for the resulting deaths. Why? Because of Trump-era politics, or the American justice system more broadly? How – and where – will the law shift to see misinformation as fatally consequential, legally punishable, or, at least, a public health crisis?

Jonah Rubin: The book centers on the ways fact and fiction are blurred during the Berlusconi and Grillo eras of Italian politics. But I suspect that I, the students in my class who read this text, and many other readers in the United States will see strong resonances between what you describe in Italy and our own country’s experiences with President Trump and the “post-truth” era that Americans frequently associate with his rise to power. Of course, the balance between ethnographic specificity and comparison is as old as anthropology itself. With that caveat in mind: what do you see in your argument as particular to Italy, and how do you think it applies to our recent and current experiences in the United States and around the world as well?

Noelle Molé Liston: Berlusconi as a precursor to Trump was a kind of historical instruction booklet. I often describe Berlusconi to Americans as “Trump + Bloomberg,” because Berlusconi owned his own media company, which was vital to his 1990s rise to power. However, the structure of Italian television broadcasting, where political parties divvied up the news networks, was crucial in the mid-century rise of public cynicism toward television and print media. Ultimately, it sharpened a widely shared Italian assumption that any form of news has a political slant (decades before we were talking about spin). Put simply, the infrastructure of media matters. Most agree that Trump was tipped over the edge not by television masterminding like Berlusconi’s but by social media manipulation. To be sure, the technopopulism of Italy’s Five Star Movement will likely expand in the US. Still, Italy seems to have suffered far less democratic backsliding than the United States, and maybe we could read this as a hopeful premonition. But the US seems to have “left the chat” in terms of Italian influence. Italy’s fundamental rootedness in its unique mix of Catholic ethics, democratic social welfare, and pro-labor movements has somehow put voting rights and democratic institutions in a better place than in the US, with our Protestant ethic, anti-labor politics, and neoliberal demonization of welfare. Let’s also notice that there was no January 6th in Italy, no violent insurrection of people believing an election was stolen. Despite Italy’s international reputation as an “unstable democracy,” I’d feel more confident about the proper counting of votes and the peaceful transfer of power in Italy than I would in our next US presidential election.

Jonah Rubin: This is less of a major point in your book, but on page 88, you note a common thread of political cynicism that ties together the foundational principles of media criticism – and, I would hasten to add, most media literacy education initiatives too – and the Five Star Movement’s attacks on science and news media, which find clear echoes in other right-wing populist movements around the globe. How do you think media criticism and education might be contributing to the deterioration of a shared epistemology? And might you have any suggestions on how media criticism and education need to shift in response to populist political movements that sound disturbingly close to their foundational insights?

Noelle Molé Liston: Just as the left needs to thread the needle in terms of science, as I mentioned earlier, so, too, do well-intentioned plans for media literacy begin to sound like Trump’s refrain of calling The New York Times “fake news.” The problem is that people have lost the ability to discern between positioned information and misinformation. Plus, our regime of algorithmic newsfeeds just can’t handle the nuance. Just as postmodern theories of multiple, contested truths escalated to, or were perceived as, forms of absolute moral nihilism, so, too, has some healthy epistemological skepticism quickly become a truth vacuum. It reminds me of a moment when a student told me she had found statistics about police violence against Blacks on a white supremacist website. Why? I asked, horrified. She said she trusted their information precisely because their anti-Black positionality was overt. The same mentality makes Trump, one of the biggest liars and con artists on the world’s stage, trusted by his followers: trust in manifest and easily legible opinions. What results, then, is greater trust in overtly positioned information sources like Fox News, and less trust in the vast majority of information mediums whose positionality is more complex, covert or semi-covert, and paradoxical. Media criticism and education need to teach literacy practices in which knowledge is seen as positioned and framed, and to move away from a binary of biased versus unbiased media. We also need to raise awareness about how information is algorithmically processed and customized to the user. Finally, we must recognize how corporate media regimes deliberately design decoys (e.g., clickbait, customized ads) and mechanisms of addiction (e.g., the no-refresh scroll function, TikTok) to keep in place data surveillance and its vast monetization for the benefit of astonishingly few people.