Emanuel Moss takes the Page 99 test

On the 99th page of my dissertation, “The Objective Function: Science and Society in the Age of Machine Intelligence,” the first full paragraph describes how machine learning researchers decide which projects to work on. Given that the dissertation as a whole is concerned with understanding how machine intelligence—data-driven algorithmic techniques like machine learning, data science, and artificial intelligence—produces knowledge and constructs the authority it holds as it moves across a wide range of domains, this page is a particularly apt representation of the whole. It also gestures at the idiosyncrasies of fieldwork that allowed me to address these concerns, as the research lab I found myself working in was particularly reflexive about the work it did. Without the reflexivity of my interlocutors, many of the social processes I was interested in would likely have remained hidden from my view, or at least been much more difficult to access ethnographically. The paragraph in question reads:

“In the sense that most work in machine intelligence involves applying a well-suited technique to a well-posed problem, the exercise these applied machine learning researchers undertook for their research report was merely a highly reflexive version of how machine intelligence is applied generally. Inspired by the horizontal organizational structures common in the technology industry (Scott 1975; Pfeffer and Leblebici 1977), researchers at OTH shared responsibilities and steered many business decisions through collaboration and consensus. The topic-selection process at the lab was emblematic of this. It was run by the researcher whose “turn” it was to write the next report and involved a series of whiteboard exercises in which all lab members would nominate candidates, research their potential as a topic, and then present their favored candidates to the entire team, who would then collectively winnow down the candidates to a few finalists. The report author would ultimately select the topic. The reflexivity of the process these applied machine learning researchers engaged in offers a valuable window into questions that go un-asked and assumptions that go un-examined in more mundane, less research-oriented applications of machine intelligence. The research report I participated in addressed a technique called “multi-task learning” that was first described in the late 1990s (Caruana 1997), but the technique rose to the top of the topic-selection process because it had recently found applications in industry (see McCann et al. 2018), and was more easily integrated with newer machine learning programming packages.”

What this paragraph reveals, for me at least, is that the production of knowledge through machine learning is a collaborative, practice-based process. To fully understand the role it plays in reshaping social worlds requires seeing the many ways it is shaped by organizational prerogatives and industry-specific ways of structuring work practices. But machine intelligence is also shaped by the questions researchers choose to ask, and—by extension—the questions they do not. These situated knowledges (Haraway 1988; Rouse 2002; Katell et al. 2020), I eventually conclude, contribute to the overall conservatism of machine intelligence. Contrary to the trappings it carries in popular imaginaries of technological advancement and futuristic automation, machine intelligence conserves the power of already-existing institutions and reinscribes social relations of the past into the present and future. Machine intelligence may, on the surface, threaten the authority of powerful actors like judges, doctors, or loan officers by way of automation. But, in reality, it preserves the power of the institutions—the courts, hospitals, and banks—that confer authority upon those actors. This is because, in selecting problems and in addressing them, the work of machine intelligence is to ask questions that extend power, not to challenge it.

Caruana, Rich. 1997. “Multitask Learning.” Machine Learning 28 (1): 41–75.

Haraway, Donna J. 1988. “Situated Knowledges: The Science Question in Feminism and the Privilege of Partial Perspective.” Feminist Studies 14 (3): 575–99. https://doi.org/10.2307/3178066.

Katell, Michael, Meg Young, Dharma Dailey, Bernease Herman, Vivian Guetler, Aaron Tam, Corinne Binz, Daniella Raz, and P. M. Krafft. 2020. “Toward Situated Interventions for Algorithmic Equity: Lessons from the Field.” In Conference on Fairness, Accountability, and Transparency (FAT* ’20), January 27–30, 2020, Barcelona, Spain. New York: ACM. https://doi.org/10.1145/3351095.3372874.

McCann, Bryan, Nitish Shirish Keskar, Caiming Xiong, and Richard Socher. 2018. “The Natural Language Decathlon: Multitask Learning as Question Answering.” ArXiv:1806.08730 [Cs, Stat], June. http://arxiv.org/abs/1806.08730.

Pfeffer, Jeffrey, and Huseyin Leblebici. 1977. “Information Technology and Organizational Structure.” The Pacific Sociological Review 20 (2): 241–61.

Rouse, Joseph. 2002. How Scientific Practices Matter: Reclaiming Philosophical Naturalism. Chicago: University of Chicago Press.

Scott, W. Richard. 1975. “Organizational Structure.” Annual Review of Sociology 1 (1): 1–20.