Reimagining Networks

An interview with Wendy Hui Kyong Chun

Affection suffuses the language of networks. Homophily, the axiom of “love as love of the same,” is the framework underlying what you see in your timelines and search results, what recommendations and ads appear, whether you get access to good rates for health care, whether you get released on bail. Attention, connections, and silently traced actions lead to the likelihoods of ties, discovered relationships, clusters, and “virtual neighborhoods.”

In networks, individuality gives way to “gated communities” governed by ties and correlations: who you’re like and who your likes are like. Our identities become simply node characteristics, because capture systems register actions and changes in state — motion that can be compared and correlated to others, that can bring us together with our virtual neighbors. This universe of potential relations gets called Big Data.

It’s not as pleasant as it sounds. “Networks work by leaking,” writes Wendy Hui Kyong Chun, by “making users vulnerable.” The constant exchange of traceable information, overwhelmingly without our knowledge or informed consent, is what makes interactivity possible. It makes liking, friending, recommending, following, and chatting possible. And those actions catalyze their own cascading captures, exchanges, and receptions.

Chun calls for a “reimagination of networks” that approaches vulnerability as the first condition, that doesn’t reduce memory to storage, that eschews ideals of privacy or security that look like enclosure, and that calls instead for public rights. In her work, she takes on the organizing principles of computational life: the dominance of homophily, the leakiness of friendship, the constant exchange that makes communication possible, the primacies of security and privacy, and the proliferation of networks.

I want to start with the subject of homophily. What is homophily, and what makes it feel urgent to you?

I came to the term while researching recommendation systems, and the effects that recommendations and network clustering have on social media. I was reading the major papers and textbooks on network science and investigating the technical details. What became clear was that at the heart of recommendation systems and social-media networks is the notion of homophily, the idea that similarity breeds connection. Crucially, not only was homophily considered to be a natural default, it was also justified and developed through examples of racial segregation, especially residential segregation.
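To see how literally “similarity breeds connection” is built in, here is a minimal sketch of a user-based recommender, with an invented like-matrix and no claim to match any platform’s actual system: it scores items for a user according to what the users most similar to that user have already liked.

```python
import numpy as np

# Hypothetical user-item "like" matrix: rows are users, columns are items.
# A 1 means the user liked the item; the data is invented for illustration.
likes = np.array([
    [1, 1, 0, 0, 1],   # user 0
    [1, 1, 0, 0, 0],   # user 1
    [0, 0, 1, 1, 0],   # user 2
    [0, 1, 1, 1, 0],   # user 3
])

def cosine_similarity(a, b):
    """Similarity between two users' like vectors (0 when either is empty)."""
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

def recommend(user, k=2):
    """Score unseen items by how much 'users like you' liked them."""
    sims = np.array([cosine_similarity(likes[user], likes[v]) if v != user else 0.0
                     for v in range(likes.shape[0])])
    scores = sims @ likes                 # weight every item by user similarity
    scores[likes[user] == 1] = -np.inf    # don't re-recommend what you already liked
    return np.argsort(scores)[::-1][:k]

print(recommend(1))   # item 4 ranks first: the pick of the user most similar to user 1
```

The whole mechanism is the assumption of homophily: find your nearest neighbors, then hand you their tastes.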

My historical and theoretical research into the concept of homophily revealed that it was coined by [Paul F.] Lazarsfeld and [Robert K.] Merton in the late 1940s, when they were studying residential segregation in U.S. housing projects. They also coined the term heterophily. They stressed that, as a notion, “birds of a feather flock together” was not always true, and we needed to understand under what conditions it did and did not happen. It is fascinating that a concept put forward as a way to trouble the presumption that similarity automatically breeds connection has become a way to justify that same notion.

What is disturbing about homophily is not only that it has become normalized — we presume it’s the only form of connection, thus erasing heterophily and indifference, which are key modes of connection — but also that it naturalizes racism and obscures institutional and economic factors. By personalizing discrimination, it transforms hatred into a form of love — a topic that Sara Ahmed has discussed in her work on white nationalism in The Cultural Politics of Emotion. And how do you show your love? By fleeing when others show up.

I’m based in Chicago, so I saw the exhibit that you collaborated on as part of the Chicago Architecture Biennial, Homophily: The Urban History of an Algorithm. I was struck by how homophily is a concept that starts with segregation in the real world. There’s a physicality to the concept that then gets translated into all of these networks and becomes physical again in how it manifests and renews discrimination.

Yes, I collaborated with Laura Kurgan’s group at the Center for Spatial Research at Columbia, which produced a fantastic exhibit on homophily and its legacy. As the archives make clear (the data for Lazarsfeld and Merton’s housing study was never published), homophily was always about trying to understand proximity, and the impact that proximity has on social engineering. The housing study focused on morale in public housing, because they thought morale was key to understanding whether public housing could foster democratic participation and citizenship. Tenant morale was key to any sort of social engineering (and they viewed social engineering as a good thing). This is why race mattered for them: Their question was, “How does race affect the participation of white residents in mixed-race housing?” And they viewed new friendships within the projects as proxies for the effects of social engineering.

How do you know the project is affecting people’s behavior? Because suddenly they have a new friendship, a new close friend that they didn’t have before. It’s interesting to think about online life and networks as not separate from physical life but rather premised on, extrapolated from, and modeled on physical interaction.

Can you talk about why you describe networks and algorithms as performative?

Absolutely. A lot of the hype around big data and a lot of machine-learning programs stems from their alleged predictive power. Basically, they argue that “based on the past, we can predict the future.” But not only do they predict the future, they often put the future in place. Their predictions are correct because they program the future.

Many researchers have made this point, such as Cathy O’Neil in Weapons of Math Destruction, Ruha Benjamin in Race After Technology, and, of course, Oscar Gandy Jr. much earlier in Coming to Terms with Chance.

Think of something like a risk-management system for credit. They’ll do a risk assessment of your credit based on your education, social networks, etc., and then they’ll give you credit or not — or give you credit at a certain interest rate. In effect, by denying you credit, they’re affecting what your future will be. But they say, “Well, this is based on past behavior,” or on people who are “like you.” What’s important, and this is something that Kieran Healy has also called the performativity of networks (although in a slightly different way), is that these networks put in place the future they predict. These predictions are treated as truth and then acted upon.
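As a toy illustration of that loop, and not a description of any real scoring system, consider what happens when a score decides who gets credit and only approved applicants can ever generate repayment data: a model retrained on the “observed” outcomes can never be contradicted by the people it excluded.

```python
import random

random.seed(0)

# Toy illustration (not any real scoring system): a "risk score" decides who
# gets credit, and only people who get credit can ever build a repayment record.
people = [{"score": random.random(), "repaid": None} for _ in range(1000)]
CUTOFF = 0.5

for person in people:
    if person["score"] >= CUTOFF:
        # Approved applicants get the chance to repay, and most of them do.
        person["repaid"] = random.random() < 0.9
    # Denied applicants never generate repayment data at all.

# A next model trained only on observed outcomes never sees the people it
# excluded, so the prediction "low score means too risky" cannot be contradicted.
observed = [p for p in people if p["repaid"] is not None]
print(f"training examples: {len(observed)} of {len(people)}")
print(f"observed examples below the cutoff: {sum(p['score'] < CUTOFF for p in observed)}")
```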

In contrast, consider global climate-change models — they too make predictions. They offer the most probable outcome based on past interactions. The point, however, isn’t to accept their predictions as truth but rather to work to make sure their predictions don’t come true. The idea is to show us the most likely future so we will create a different future.

But instead of exposing past errors in order to correct them, machine-learning programs often automate these mistakes. Think here of Amazon’s machine-learning hiring program, which was shown to be biased against women. If the word “women” appeared anywhere in your CV, you lost points. Amazon stopped using the program, but rather than simply shelving it, you could treat it as documentation of past discrimination, because it was trained on all the past hiring decisions Amazon had made.
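Read that way, even a toy version of such a system becomes an archive. The sketch below uses invented CVs and scikit-learn, not Amazon’s actual tool or data, to show how a classifier trained on biased hiring decisions carries that bias in its learned weights, where it can be read back out as evidence.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Invented historical decisions: the token "women" co-occurs with rejection,
# crudely mimicking the pattern reported about Amazon's experimental tool.
cvs = [
    "captain of men's chess club, java developer",
    "java developer, hackathon winner",
    "captain of women's chess club, java developer",
    "member of women's coding society, java developer",
]
hired = [1, 1, 0, 0]

vec = CountVectorizer()
X = vec.fit_transform(cvs)
model = LogisticRegression().fit(X, hired)

# The learned weights turn the model into documentation of past discrimination:
# the coefficient attached to "women" is negative because the training data was.
weights = dict(zip(vec.get_feature_names_out(), model.coef_[0]))
print(sorted(weights.items(), key=lambda kv: kv[1])[:3])   # most negative tokens
```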

Machine learning does have a role to play. Even when it’s biased and completely faulty, it can reveal historical trends and biases. For instance, COMPAS — software used by courts in several U.S. states to assess the risk of recidivism, and which has played a role in sentencing and parole decisions — discriminates against visible minorities. It’s based on and trained on data from the U.S. penal system. We should use that as historical evidence that the system is biased, and then try to change the system.

The problem becomes apparent when machine learning is used to automate the future, such that rather than past mistakes being learned from, these mistakes become embedded within the future.

How do we change the conversation from one of mitigating bias in algorithms to one of trying to prevent what they predict, or of reading the models in the way you’re describing?

One thing is to change what we mean by model transparency. Momin M. Malik has done some great work on this. His argument is: When we talk about models and transparency, we should be referring not to whether or not we can understand the algorithm but to whether or not the model represents reality.

I think the work on mitigating bias within AI is certainly important, but it doesn’t address the fact that these models are actually doing what they’re supposed to do when they produce so-called biased results. If these results are correct by the definitions of machine learning, that’s a societal problem we need to address. When global climate-change models predict catastrophic climate events, we don’t say, let’s fix the model — we say, let’s fix the real-world problem.

You’ve written about “friending” and how it has been influential in social networks. How does the concept of friending reflect how we’re exposed online?

Friendship is always a potential risk. If you want a secret released, tell it to a friend. Friends can also turn into close enemies. Which doesn’t mean you don’t have friends, but it means that friendship is not a banal category. Further, as Jacques Derrida argues, traditionally, friendship is not reciprocal; liking somebody doesn’t mean they’re also going to like you. Friendship entails fundamentally putting yourself at risk and being vulnerable to somebody.

Intriguingly, “friending” on social networks is presumed to be reciprocal. You send a friend request and become friends when it’s accepted, so a friendship becomes a two-way connection. By friending someone, you’re establishing a relationship of trust. The idea is that if you say, “Only friends have access to your information,” you’re somehow more secure. Which is really strange because through this stage of trust or friending, people often give more information than they otherwise would in a public forum.

But at the same time, the back end is constantly reading your stuff. Facebook, Snapchat, WhatsApp, etc. constantly capture your actions, and you’re revealing more and more to the back end through this system of trust. You’re using a very public medium in a way that you think is private through this notion of friending. I think it’s really important to think about networks and media as public and as open and, from there, come up with a notion of public rights, engagement, and democratic action.

Is this constant communication what you define as “leakiness”?

Every form of communication is based on a fundamental leakiness. If you think about how your cell phone operates, it’s always sending out messages that anyone could theoretically access. Now, it is set up so only your cell phone can decrypt them, allegedly (although things can be hacked in various ways). If you think of the wireless card in your computer, every wireless card within the vicinity of a network downloads all of the network’s traffic, and then your wireless card deletes the packets that aren’t directly addressed to your computer. But that means you are all constantly downloading each other’s traffic. In order for there to be communication, there has to be this kind of openness and leakiness. So rather than assuming that the leak is an exception, we need to view it as a condition of possibility.
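A minimal sketch of the filtering she describes, with invented MAC addresses rather than real networking code: on a shared wireless medium every card in range receives every frame, and what keeps the traffic “yours” is only the convention that a card discards frames not addressed to it. A card in promiscuous mode simply keeps everything it hears.

```python
from dataclasses import dataclass

@dataclass
class Frame:
    dst_mac: str    # who the frame is addressed to
    payload: str

class WirelessCard:
    """Toy model of the filtering described above: every card in range receives
    every frame; ordinarily it just discards the ones not addressed to it."""

    def __init__(self, mac, promiscuous=False):
        self.mac = mac
        self.promiscuous = promiscuous   # a sniffer keeps everything it hears
        self.delivered = []

    def receive(self, frame):
        # The frame reaches the card no matter what: that is the leak.
        if self.promiscuous or frame.dst_mac == self.mac:
            self.delivered.append(frame)   # passed up to the operating system
        # otherwise it is silently dropped, but it still arrived over the air

cards = [WirelessCard("aa:aa"), WirelessCard("bb:bb"),
         WirelessCard("cc:cc", promiscuous=True)]

frame = Frame(dst_mac="aa:aa", payload="hello")
for card in cards:
    card.receive(frame)                    # broadcast medium: every card gets a copy

print([card.mac for card in cards if card.delivered])   # ['aa:aa', 'cc:cc']
```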

An article of yours that I really like is “Big Data as Drama.” What do you mean when you call us “characters in the drama of Big Data”?

By saying we’re characters in a drama called Big Data, I’m trying to understand user agency. A lot of the hype around Big Data imagines that all people tell the truth when they type in a search or that you reveal your true self via social media. To think of users as characters in a drama called Big Data is to insist that people, when using social media, carefully craft their identities and think before they post things to Instagram.

Think about how posed and scripted Instagram is. There’s a difference between an actor and a character: As an actor, you take on various characters, and there’s a gap between the two. What is interesting to me is that what the algorithms capture and emphasize can be different from what users value. Think of all the actions that aren’t captured or that aren’t monetized.

What’s key is to think through that gap and to enlarge it. And then to ask yourself: How do your actions affect not only the kinds of recommendations you get but also what others see? If you consider the ways in which we’re massively correlated to other people (if you like X, then you like Y, based on the actions of people who are deemed to be like you), that also means that what you like and how you act can affect the scripts and possibilities that somebody else receives. There’s a fundamental collectivity for characters that is important to engage and think through.
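A small, self-contained sketch of that collectivity, again with an invented like-matrix: in an item-to-item recommender, other people’s likes change which items count as linked, so their actions rewrite the script you are handed next.

```python
import numpy as np

def top_pick(user, likes):
    """Item-item co-occurrence: two items are linked when the same people like both."""
    co = likes.T @ likes                        # co-occurrence counts between items
    np.fill_diagonal(co, 0)
    scores = likes[user] @ co                   # total co-occurrence with your own likes
    scores = np.where(likes[user] == 1, -1, scores)   # don't re-recommend your own likes
    return int(np.argmax(scores))

# Invented like-matrix: rows are users, columns are items 0..4.
likes = np.array([
    [1, 0, 1, 0, 0],   # "you": items 0 and 2
    [1, 1, 0, 0, 0],   # a neighbor who shares item 0 with you
    [0, 0, 0, 1, 1],   # two strangers in a different cluster
    [0, 0, 0, 1, 1],
])

print(top_pick(0, likes))         # 1: you are steered toward your neighbor's other like

likes[2, 2] = likes[3, 2] = 1     # the strangers both like item 2, one of yours
print(top_pick(0, likes))         # 3: their cluster now scripts what you see next
```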

I want to ask about the Digital Democracies Group you founded, what your vision is for it, and what you are attempting to achieve.

The Digital Democracies Group is an interdisciplinary group that brings together the humanities, the social sciences, data science, and network science in order to take on problems that no one discipline or sector can solve.

One is the issue of mis- or disinformation. The way it’s usually discussed is, “Let’s just figure out what’s fake or what’s not fake, and if we know what’s correct somehow this problem will go away,” which completely erases the history of the news media itself — fake news preceded “real news.” Heidi Tworek’s book News from Germany exposes this history of so-called “fake news” really well.

A lot of people have shown that correcting a story isn’t enough and that corrections can backfire. Brendan Nyhan has shown that correcting anti-vaxxers can actually entrench their positions. Further, sometimes corrections spread far wider than the actual story, and pique interest in the fake news story that you’re trying to debunk.

Rather than looking at whether something is correct or incorrect — which is clearly important — we’re examining an item or interaction’s authenticity. The things that people find to be authentic go beyond the realm of what’s correct or incorrect. Think of watching a film or reading a novel and feeling that it really rings true to you, even though it’s fictional. Think of the ways in which the more certain politicians lie, the more authentic they appear. So, showing you that they’re lying doesn’t affect their perceived authenticity. The Digital Democracies Group is looking at authenticity and its impact on the circulation of information. Our question isn’t, “Is this correct or incorrect?” but rather, “Under what conditions (social, technical, cultural) do people find any piece of information to be true?”

That’s only possible by working across disciplines. We’re working with people in theater and performance studies because they have a very rich understanding of authenticity. This project entails not only intervening at the level of social media but also trying to understand the ways in which information is part of everyday life. It exceeds the screen, and if we want to take on something like misinformation or disinformation, we need to think at many different levels at once.

Part of the problem is the belief that technology can solve political or social problems for us. As soon as you hand those problems over to technology, you give up on politics and you end up creating things that are far worse. That’s not to say technology doesn’t have an impact, but we need to address these problems within a broader frame.