
Tarleton Gillespie: This book is so many things, and I’m really excited that it’s out and can now be taken up in so many conversations – about the circuits of digital culture, about recommender systems and the politics of attention, about music and the evaluation of taste, about the particulars of information labor. But I want to start with a particular note you make in the introduction: “in algorithmic systems, these ‘cultural’ details are technical details” (7). What does that mean, for you, and how does that principle guide your study of music recommender systems?
Nick Seaver: Thanks for the great question. This gets at the heart of what I’m trying to do in the book, and what I’ve argued for in most of my writing about algorithmic systems. There’s a prominent discourse out there that imagines “algorithms” as being essentially inhuman, pursuing some kind of uniquely technical, economic, or mathematical goals. Many people would suggest that this is what makes algorithms harmful to “culture”—all those antihuman logics chop culture into little bits, distort it, and feed it back to us in grotesque form. What I’ve argued is that this does not really account for how algorithms exist in practice: especially in the case of recommender systems like we see in the book, there are humans all over the place deciding how things work, choosing between different options, setting goals, and actively worrying about their cultural role. In contemporary software development, algorithms change all the time as a result of these interventions, and these interventions are shaped by all sorts of “non-technical” ideas held by people. So, as I write in the book, “the way an engineering team thinks about their work is as significant to the functioning of the system as the structure of a sorting algorithm. If we tried to look past all the people to find the algorithm, we’d miss a crucial part of how and why these systems come to work as they do” (7).
So, that idea is what motivates the rest of the book: If the cultural theorizing of engineers matters to how their algorithms function, then what’s in that cultural theorizing? The following chapters work through some of it, talking about how my interlocutors thought about listeners, genre, musical sound, their own influence on the world, and so on.
Tarleton Gillespie: Well, then I want to know what’s in the cultural theorizing of these engineers, around one concept in particular, the one you chose for the title of the book: “taste”. What do the engineers who serve up my music streams think taste is when you ask them, and what does it seem to mean to them when they build it into their recommendation systems?
Nick Seaver: This is something I only really figured out as I was pulling together the last stages of the book. I had gone into the field with a pretty simple idea about what I was looking for: people building these systems to model taste must have some theory of taste, and that would influence what they built. Or maybe, the tools available to them would influence their theories of taste. So, for example, if you think that people like the music they like because of how it sounds, then it would make sense to build a recommender that relied on audio data; if you thought that people liked the music their friends like, then you might do something with a social network. But I never really found anything that straightforward. I asked everyone I interviewed, point blank, “Why do you think people like the music they like?” And no one really had an answer; usually, they just laughed and said something like “Who knows?”
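To make the contrast concrete, here is a minimal, hypothetical Python sketch of the two designs mentioned above: a recommender built on the theory that taste follows sound, and one built on the theory that taste follows what the people around you play. The track names, feature values, and listening histories are invented for illustration; neither toy is presented in the book as what any particular service does.

```python
# Hypothetical sketch only: two ways a theory of taste might become a recommender.
# All track names, feature values, and listening histories below are invented.

from collections import Counter
from math import sqrt

# Theory 1: people like music because of how it sounds,
# so recommend tracks whose (made-up) audio features resemble a seed track.
AUDIO_FEATURES = {
    "track_a": [0.9, 0.1, 0.4],
    "track_b": [0.8, 0.2, 0.5],
    "track_c": [0.1, 0.9, 0.2],
}

def cosine(u, v):
    dot = sum(x * y for x, y in zip(u, v))
    norm = sqrt(sum(x * x for x in u)) * sqrt(sum(x * x for x in v))
    return dot / norm if norm else 0.0

def recommend_by_sound(seed_track, catalog=AUDIO_FEATURES):
    """Rank the rest of the catalog by acoustic similarity to the seed."""
    seed = catalog[seed_track]
    scores = {t: cosine(seed, f) for t, f in catalog.items() if t != seed_track}
    return sorted(scores, key=scores.get, reverse=True)

# Theory 2: people like what the people around them like,
# so recommend tracks that co-occur with a listener's tracks in others' histories.
LISTENING = {
    "user_1": {"track_a", "track_b"},
    "user_2": {"track_a", "track_c"},
    "user_3": {"track_b"},
}

def recommend_by_peers(user, histories=LISTENING):
    """Rank unheard tracks by how often they appear alongside this user's tracks."""
    mine = histories[user]
    counts = Counter()
    for other, theirs in histories.items():
        if other != user and mine & theirs:
            counts.update(theirs - mine)
    return [t for t, _ in counts.most_common()]

print(recommend_by_sound("track_a"))  # ['track_b', 'track_c']
print(recommend_by_peers("user_3"))   # ['track_a']
```

The point of the sketch is only that different answers to “why do people like what they like?” would pull a builder toward different data and different code, which is the kind of straightforward translation Seaver went looking for and did not find.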
There’s a bit about this in the book, where I realize that what’s happening isn’t really the building of a particular theory of taste into infrastructure, but an effort to produce what I describe as an open plan: a kind of infrastructure meant to facilitate growth and exploration, which tries to be rather agnostic about the reasons why a person might like what they like. But, as we’ve seen in other domains, efforts to be open always have their limits; they facilitate some actions and not others. And this openness ended up manifesting as a hugely omnivorous data collection apparatus: Who knows what signal might end up being useful? Might as well collect everything. The more open the plan, the wider the net.
Tarleton Gillespie: This open plan approach definitely resonates with other instances of data science and machine learning — visualizing the entire corpus of content or the entire collection of data points about user preferences as a vast, flat landscape to be wandered. But that doesn’t sound much like how taste works, which by definition implies discernment and evaluation, i.e. not everything, and it doesn’t sound much like recommendation, which generally is a this-not-that offer. I imagine they see themselves as building an open plan such that each listener, millions of listeners, can find a path through it that’s to their own taste. But still, that sure does downplay their agency, very much gets them out of the way. What is it that they are attempting to do with their recommender systems? Who are they imagining they’re doing it for?
Nick Seaver: There’s this tension at the heart of algorithmic recommendation: it’s predicated on the idea that you might like anything (certainly more than you realize right now), but that you won’t like everything. So recommender systems are always caught up in this paradoxical position of trying to help people “break out of their boxes” (as it’s often put) while also profiling them to figure out where they might want to go once they’re out.
Now that’s a very sympathetic picture of what recommender systems and their developers are doing. Most critics would focus on the second part: recommender systems profile you, they box you in, they limit your view of that otherwise open cultural space. It’s not too hard to find arguments that these systems are about cultivating certain kinds of desire in users—to make a streaming music catalog more valuable by getting people to listen more (to more stuff, for a longer time). And that’s not wrong, but it’s only half of the story. A lot of the book is dedicated to describing how people working on these systems navigate that tension, between facilitating exploration and modeling preference.
I think the “this-not-that” reading of recommendation makes a lot of sense in the present moment, where recommenders are really everywhere and can meaningfully shape the entirety of what you encounter on a given platform. But it’s easy to forget that not so long ago, algorithmic recommendation was this little side feature—a thing in the corner that was only ever going to be a supplement to whatever else a user was doing. In the early days of these systems, in the mid-1990s, they were quite explicitly aimed at people who were already enthusiastic about music, willing to mess with new technologies, and happy to fill out lots of ratings. But since then, music recommendation (like most other kinds of recommendation) has shifted to target less enthusiastic listeners, to encourage them to listen more. This has happened at the same time that recommender systems have started to envelop platforms, such that people can start to talk about “Facebook” or “Spotify” or “Netflix” as almost synonymous with their recommendation algorithms.
Over the last decade or so, there’s been a major shift in the power of recommendations to really exclude stuff; we like to think of these systems as being widely influential, but (if that’s true) it’s only become so fairly recently. At the end of the book there’s an epilogue where I interview one of the major figures from the field, and this change comes up: I don’t think many people working on recommender technology were really prepared to succeed to the degree that they have.
Tarleton Gillespie: This idea of the open plan, this sense that recommendation was a curious add-on, both fit with a sense of modesty you note in the way these designers describe their role – often using pastoral metaphors, they characterize themselves as “park rangers,” “farmers,” “gardeners,” and occasionally “bushwhackers… making this overgrown musical jungle navigable.” What do you think of this modesty? Do you think they are right to diminish their role, that they are less powerful than public worries might suggest? Or do they need to tell themselves this, because they are embodying a powerful role in the current cultural landscape?
Nick Seaver: I think this is a really important question that we can’t entirely answer from an ethnographic study of this particular group. People often ask me about the consequences of these systems for music, listening, and culture more broadly; my first somewhat negative popular review of the book focused on precisely this issue and how it’s not addressed in there! But there’s a good reason for this: the kinds of cultural consequences people are worried about are big and broad and multifarious. We need many studies, in many different domains, to start to suss them out. I think a lot of people have an intuition that these systems are obviously influential, in obvious ways—but we shouldn’t assume that the goals of engineers translate neatly into broad cultural consequences, good or bad. We’re starting to see more empirical research on things like how musicians and record labels think about making music in a world full of recommender systems, how listeners think about all the designs on their attention, and the knock-on effects of widespread personalization. And the results are really mixed! In some cases, filter bubble-like effects don’t seem to happen; in other places, musicians try to adapt their style to “the algorithm,” but can only do so through mental models that may have very little to do with what’s going on under the hood.
So I don’t want to let my interlocutors in the book off the hook, nor do I want to assume that they are supremely powerful actors. Like everyone else, they occupy positions that enable and constrain them, their plans do not always work out, and while they certainly attempt to influence others, their influence is not guaranteed. That can sound like a cop-out answer, but I’ve long worried about a style of technology critique that artificially inflates its objects so as to increase its own importance. I think it’s a mistake to figure these people as powerful men behind the curtain. That kind of move does not help us understand our situation, and it takes us away from the boring-but-important empirical work we could be doing instead.
My own modest effort here is fairly constrained to documenting, as adequately as I can, how this particular group of people thought about their work at a particular moment in time. The chapter on all these gardening and farming metaphors, for instance, takes a rather contrarian angle in the discourse on data metaphors: instead of explaining how metaphors like these naturalize the work of engineers and convince a non-expert public that machine learning is all objective and organic, I try to figure out why people use these metaphors even among themselves, to try and make sense of what they’re doing. My takeaway is that pastoral metaphors usefully index a kind of bounded control, where the objects of one’s labor can be lively and surprising, despite the organizing role of the worker. This is less critical in a narrow sense, but I think it helps us to understand the appeal of these ways of thinking to their thinkers, which is part of the overall puzzle we’re living through now.
I hope this approach I take throughout the book makes it useful to folks who want to make more grounded critiques of this industry, as well as to people working in this space who may want to step back and think about some of the deep assumptions that underlie their work. Whether or not we imagine these systems as super-influential, there is a lot of work to do.