Marginal Utility
By Rob Horning
A blog about consumerism, technology and ideology.

The overload


This New York article by Casey Johnston about the death of the chronological feed colors within the lines of these sorts of pieces: It takes for granted that people suffer from information overload as if it is some sort of act of god, and that algorithmic curation is therefore an inevitable and necessary attempt to fix the problem. Users are treated as incapable of curating their own feeds, because they are either too lazy, too passive, or too indiscriminate — presumably users follow or friend people whose posts they have no interest in seeing out of politeness or an intent to curry favor with them, and then they end up inundated. As Johnston writes, “It’s difficult for users to adequately curate their own feeds. Most people just follow their friends.”

That claim concedes too much, I think, to social-media-company ideology about how platforms are supposed to be used. What makes such personal curation difficult is not the effort required in doing it (make a Twitter list, start a Facebook group), but the effort it takes to overcome everyone assuming and insisting that it is so difficult.

Platforms like to promulgate the idea that users are not inclined to decide for themselves what they want, and that they are instead eager to be persuaded and served things they haven’t chosen, like ads. Not only can’t we curate our information feeds, but we can’t curate our personal desires, so we welcome ads and algorithms to solve the overwhelming problem for us.

Johnston acknowledges that maybe companies shouldn’t be trusted to do this sorting and don’t have our interests in mind, but then basically shrugs: “It’s an understandable fear. But, well, that ship has sailed.” We should just give up and roll with it, apparently. Sweet surrender.

Using social media that implements an algorithmically curated feed reinforces for users that they shouldn’t be expected to deliberate over any desires or guide their own information-search processes. Such platforms teach users helplessness. Staging information overload deliberately helps with the lessons. The point is to make the surrender pleasurable, as Will Davies suggests here. As with the “sublime” in aesthetic theory, we are overloaded with information so that we can enjoy being overpowered.

That is why platforms have always tried to saturate users with information and encourage them to constantly add more people and services to their feeds. The overload is intentional. Overload is the point, just like “too many channels (and nothin’ on)” is the whole point of having cable. Social media platforms foreground the metrics that drive overload, opting people in when possible and encouraging them to friend and follow everyone and everything they can.

Such promiscuity leads to the kinds of “context collapse” that companies are invoking to explain why users are posting less. But clearly the platforms prefer “context collapse” to communication. Their business model relies on having a lot of users spending a lot of time on the site, not necessarily on users posting a lot about themselves. Context collapse may make users post less, but it also generates a prurience about what others post; it salts all posts with a sense of risk that makes them more compelling. It also orients users toward consumption rather than production; or rather, it encourages them to limit their own “prosumption” to safe practices — sharing links to signal their own identity, endorsing other people’s content with likes, and so on.

This suits social media platforms just fine; the more programmatic your engagement is with their platform, the better. Ideally you watch your feed like television. Just as algorithmic sorting is posited as something users demand to deal with information overload (when really it allows platforms to blend ads in with content), “context collapse” is deployed to make it seem like users’ sinking into passivity is their own fault and not the platform’s — and meanwhile social media follow the path of all previous mass-media technologies, toward emphasizing the few broadcasting to the many.

We’re supposed to believe that users posting less constitutes some sort of threat to Facebook: If we stop posting, they won’t have as much data about users to use to target ads better. But that is not necessarily the case: Facebook gets the data it needs about users by spying on their browsing activity and keeping track of their likes and other sorts of non-posting behavior. The chief thing that user posts are good for, from the platform’s point of view, is keeping other people engaged with the site.

But a site that is made up only of friends talking to friends is an uncomfortable place to serve ads — the primary business of Facebook. (It doesn’t exist primarily to facilitate connection or even data collection on individuals; those are subordinate to gluing eyes to screens and guaranteeing they see ads.) Hence Facebook seeks a blend of friend-to-friend recognition (the social glue that makes checking Facebook nearly mandatory) with the ordinary sort of culture-industry product that we are well-accustomed to seeing ads with — the sort of content that people typically link to and share, the “quality” content that Facebook optimizes its feed (with constant tinkering and rejiggering) to prioritize.

In re-sorting users’ feeds, however, feed-curation algorithms aren’t trying to solve information overload; they are hoping to prolong it and make it more enjoyably overwhelming. The sublime overload inculcates users with passivity toward their own curiosity. The procedures that pretend to manage the overload instead direct the users’ surrendered attention toward ads. With their lowered resistance and learned helplessness, they should be more easily persuaded than ever.

Both information overload and context collapse are deliberately induced — they are features masquerading as bugs. Both help us enjoy a more passive attitude toward consuming social media, offering plausible deniability to ourselves when we see the ship of active engagement has sailed.


Contortions of self-consciousness


In his book Sour Grapes, Jon Elster has a chapter about “willing what cannot be willed,” or what he also calls “states that are essentially by-products.” He offers the example of spontaneity: you cannot try to be spontaneous; you can only recognize that you had been acting spontaneously after the fact.

“When we observe that some such state is in fact present,” Elster notes, “it is tempting to explain it as the result of action designed to bring it about — even though it is rather a sign that no such action was undertaken.” This Elster calls the “intellectual fallacy of by-products,” which presumably leads to a belief that we can reverse-engineer the pleasure we take in certain conditions that can’t otherwise be pursued directly. It suggests, too, that we mistake observation of an emotional state as the ability to also identify its cause — noticing my spontaneity made me spontaneous, so I should just think about being spontaneous more!

Reading about ASMR, as in this article about Buzzfeed’s Facebook Live show ASMR News Now, made me think of this fallacy, and how ASMR seems to hinge on defying the idea that you can’t manufacture inexplicable pleasures. ASMR is usually explained as a kind of brain tingle brought on by sounds that conjure intimacy and monotony in equal measure: “soft voices, kind words, a conceit of caregiving,” as Nitin Ahuja explains it in this essay. The sensation seems to steal upon those who experience it, yet it apparently can be triggered reliably by ASMR practitioners who can slur their sibilants in the right rhythm while performing some mundane activity chosen for its unobtrusiveness, its lack of capacity to bear deeper meaning. The ASMR practitioner often performs concentration — through such routines as folding towels, say — so that listeners can let their own need to concentrate dissolve.

The typical ASMR scenario thus seems to stage meditative conundrums of concentrating on not concentrating, dramatizing how the care we often yearn for must be both an expression of special attention and of being taken for granted. It’s about using technological mediation to will an unwillable state, to make our approach to a desirable “by-product” state suitably indirect. The frisson of ASMR is thwarting the principle that you can’t tickle yourself, you can’t plan to give yourself goosebumps. ASMR says you can.

ASMR suggests there is a way out of the contortions of self-consciousness that come from trying to be natural. Elster cites Stendhal’s diary on the recursive desire to act natural (think of him as the original Mr. B Natural) and claims Stendhal “turned to fiction” as a “way of enacting his desire by proxy.”

I wonder if we sometimes hope that our social-media profiles could function in a similar way, allowing us to experience what happens to that profile as a kind of radical passivity that passes for “naturalness.” Our data gets processed and what we really want to know or how we really want to be is presented to us not as an artifact of our consciousness, of our deliberate consideration, but as somehow implicit in our past activities.

This desire to have our “real selves” captured behind our backs and revealed to us becomes an alibi for permitting extensive surveillance of the self, for embracing the “inevitability” of surveillance as a prerequisite to self-knowledge. Finally surveillance will let us chart the path to “being natural” without immediately feeling unnatural about it. Inherent in this is our ability to take for granted that “naturalness” is less a state of being than a commodity, and like other emotional commodities, is available on demand by consuming the appropriate goods. When I want to feel “authentic,” I can look at a list of books Amazon recommends for me and simultaneously delight in how well my data pegs me and in how much of me escapes Amazon’s understanding.

Stendhal, Elster notes, didn’t try to “make an impression on others by faking qualities that he does not have.” Instead, he wanted to become “a person who could not care less about making an impression.” One of the seductive things about surveillance is that you know you are making an impression — as so much data — regardless of whatever effort you make or don’t make. You don’t have to try; algorithms will impute intentionality to your behavior without your having to taint it with your own willfulness. The behavior can seemingly remain pure.

Rather than anticipate being watched and feel pressure to perform perpetually, for an unknowable audience whose unknown demands can only open an irresolvable anxiety, one can take the opposite approach, viewing “total” surveillance as effectively the same as no surveillance — as the freedom from having to perform the self for a specific audience because all audiences are possible.

You can trick yourself into thinking that the effort to be natural has become superfluous, and your “naturalness” will be constructed for you from that data for your later consumption. Naturalness, authenticity, realness, and spontaneity (and any other terms for presence qua presence) are all retrospective artifacts; they are all manifestations of nostalgia.

Elster quotes Stendhal as declaring that “it is very difficult to describe from memory what was natural in your behavior; it is easier to evoke what was artificial or affected since the effort needed to put on an act also engraves it in memory.” This is posited as problematic, as the “faked” aspects of behavior are presumed to blot out what was “genuine” about it. All memories are false memories. We never remember how we really were. Such thinking can produce the life-logging impulse: record everything about my life because I can’t trust what I think I know about my past. But this merely raises the distortions of memory to the next power: one misremembers in greater detail what the life logs cause one to relive.

That problem is solved by having the life logs kept by outside parties — data brokers — who devise a variety of persuasive ways to present that past self as the future. The real person you were that you can’t quite remember turns into the person you are being guided into becoming.

All this becomes absurd and irrelevant if we treat affectations not as masks concealing a true self but as the process of that self being brought into existence. What is natural in your behavior, in Stendhal’s sense, is not worth knowing. It’s a void that makes us susceptible to anyone who promises to fill it, even when they lower their voice to a slurpy ASMR whisper.

Reacting to Reactions



For this talk, I am going to use Facebook’s recent design change to its like button — we used to “like” things on Facebook; now we are permitted one of six “reactions”— as a way of getting at some larger points about identity construction on social media as a form of labor, and the role the idea of “authenticity” plays in extracting that labor.

From Likes to Reactions


A week or so ago, Facebook rolled out a much-anticipated update to its interface that expands the ways users can respond to other content within Facebook. Before, you could comment on something, or if that was too much trouble, you could simply like it with a click. Or, of course, you could scroll past it with no response, a passive choice that Facebook nonetheless tracks.

Now, however, users have an intermediate option between comment and like: When you hover over the like button or hold it down with your finger, you’re invited to further specify your intended reaction from a menu of six: Like, Love, Haha, Wow, Sad, and Angry. A seventh option called “yay” was killed late in testing because its intended meaning was perceived as “too vague.”

Facebook reportedly settled on these five added reactions after surveying the most common one-word comments on News Feed posts, and after consulting with social scientists about how to accurately taxonomize the “range of human emotion.” So the company’s methodology was essentially reductive rather than generative from the outset: its assumption is that humans experience a limited set of emotions and Facebook need only reflect that with a concise emoji set. This is unlike actual emoji, of which more are constantly being developed, and which can be combined or deployed to express new shades of emotion that there aren’t even words for.


The Official Position


Facebook describes Reactions as giving users what they’ve been asking for. Here’s what the company said in announcing the change: “We’ve been listening to people and know that there should be more ways to easily and quickly express how something you see in News Feed makes you feel.”

That sounds almost benevolent. But it’s worth examining that statement more closely.

With Reactions, Facebook seems to be imagining a pretty straightforward process: A user sees a piece of content; experiences a specific, discrete emotion that they can clearly identify; and then they “quickly and easily” choose that particular reaction from the menu of clickable buttons.

But every step of that process is riddled with complication. Nearly every word in that sentence warrants further consideration. The “people” they are listening to are not just users but other advisers and researchers. The “more ways” to react is actually a limited set, premised on the notion that users would rather click a button than use language to express their feelings. And one’s feelings about some piece of content are typically a mixture that one may not be able to sort out: Maybe jealousy is mixed with congratulations; joy mixed with anxiety; a sense of discovery mixed with a sense of shame. The design of Facebook’s Reactions repudiates the possibility of such ambivalence, suggesting mixed feelings are abnormal, atypical. It presumes we have an immediate, precise response.

As several commentators have pointed out, the new Reactions feel more constricting and prescriptive than the Like button ever did. A Like, when it was the uniform currency of attention in Facebook, had a certain ambiguity: It could be spent on anything. But the greater precision of these Reactions says you can spend your attention in only six ways.

Accessing the Reactions menu does not make using Facebook easier or quicker, but more cumbersome. Rather than the binary process of saying yes or no to “liking” content, users now have a two-step process in which they decide to “react” and then pick a reaction. Then they have to get to the menu itself — a few seconds, but an eternity by Facebook’s own standards of time management. After all, this is a company that rolled out Instant Articles because it believes a few seconds is too long for users to wait for content.


Reactions and Decisions


So why is Facebook willing to risk slowing its users down with Reactions? Why is the company undermining its own beliefs about our implacable impatience?

Part of it is because finding out who will work those extra milliseconds to react is useful demographic information. Since the Reactions menu doesn’t appear right away, users happy with the old way of liking can basically ignore them. This lets Facebook learn which users are willing to invest more attention in what they are seeing in their News Feed and think more about it regardless of whether they react. And, more obviously, Facebook gets more precise data about what emotional labels its users apply to what content, and how all that correlates with other user behavior, refining the company’s portrait of users’ interests and susceptibilities. For that database, whether the label corresponds to any actual feelings — whether users are actually angry about what they label as angry — doesn’t really matter. It just connects patterns of responses.

Adding the extra options basically allows Facebook to extract more information about all users and more labeling work from some of them. But beyond being a means for extracting more labor, Reactions, at the conceptual level, provides users with a kind of comfort zone for emotionality while they are on Facebook. The menu of affective possibilities suggests that responding to content in Facebook floats somewhere between a “reaction” and a decision. The specific, limited choices on the Reactions menu don’t really need to reflect users’ pre-existing responses — it works more like a poll that produces a reaction at our point of contact with the interface.

But the idea that with Reactions we are clarifying our “real response” lets us cling to the illusion that our “real responses” are rational and easily communicated in the first place. Reactions tries to sustain the impression that your emotions are deliberately chosen without their becoming a matter of strategic calculation or emotional manipulation. You are not simply broadcasting what you think you are supposed to feel, but genuinely responding.

In that extra moment of consideration that Reactions require, a useful space of fantasy opens up in which we can believe that the News Feed inspires us with a series of real feelings, when in fact we are suspending whatever complicated feelings we might have for the relief of just clicking a button. The fantasy is that we can experience total emotional control in the face of other people’s lives. Having predefined reactions helps keep users from overreacting, from acting out, from lashing out.

Reactions offer the promise that Facebook is a safe place where we won’t ever be overwhelmed by unwanted feeling. This way, we can play the game of guessing what reaction someone was hoping for without feeling entirely cynical. It is reassuring to believe there are right answers, right ways to feel.


The Purpose of Like


It used to be that Like was always the right answer. But with Reactions, Facebook is attempting to balance the Like button’s original mission, which was to radically simplify and amplify users’ impulse to respond, with the expectation of its clients — the advertisers — that it gather more comprehensive data about how users feel.

The Like initially removed friction from online interaction. It normalized the idea that generic, binary responses could stand in for other forms of reciprocity. That quickness trumps accuracy or any further elaboration of the potential complexity of those responses.

The point is not to allow you to express yourself or your “true feelings,” if there even are such things. In fact, the genius of the Like button is that it absolves users of the need to have true or complex feelings. Instead it supplies an automated, uniform positivity that attempts to preclude hostility or disliking, or any sort of intense, considered emotional engagement that might interrupt a user’s flow on the site.

Likes thus encourage a kind of mechanistic sociality that is more akin to slot-machine gambling than to anything that risks reciprocity: You post some content, pull the handle, and see how many likes come out. Or, from the other side, you rhythmically sprinkle Likes over people’s posts like some benevolent Johnny Appleseed of social approval. The faster I can register a reaction to content, the faster I can put it behind me, and the more content I can process. I can still get into a satisfying rhythm of data processing, consuming more content and registering more and more reactions.

Reactions move a half-step away from that protocol, sacrificing some speed for the collection of more granular marketing data. The gamble is that Reactions will still save users enough of the trouble of interpersonal engagement that they won’t begrudge the extra cognitive burden.


Formatting the Self


The Like’s importance makes it plain that Facebook is not really in the business of “connecting the world” so much as formatting the world as data: It survives as a business by building marketing profiles of its users from who they connect with, where they go, what they look at, and what they like. When we consume content through Facebook, that consumption is simultaneously self-production, in data form.

All this information then guides Facebook’s algorithms in determining what ads and content get shown in users’ news feeds, and what content gets suppressed. Our standardized reactions to content are meant to make us machine-readable.

Facebook’s data-fying mission is reflected in the pleasures it gives to users: that of sociality as convenience. By giving our connections and interactions a particular, efficient shape — a predictable, routinized quality — it offers emotional deskilling as a kind of energy conservation, as the joyous opportunity to consume more content on our own terms. Facebook Reactions allow us to quickly exchange a few permissible emotions without requiring us to expend the energy required to actually believe we feel them.




By representing our personality as so many digital punch cards, the Likes and Reactions allow Facebook to generate a simulacrum of ourselves that anticipates how we might react to future situations. This can create a hermetic feedback loop, where past actions fully dictate future potentialities, foreclosing the possibility of surprise or change. That is part of the pleasure Facebook brings.

When the algorithms parse us, shaping what we see and what is served to us, they make us into a product we can ourselves consume — we become auto-cannibals. We get to enjoy how well Facebook has stereotyped us and feel known, recognized. And when it predicts poorly, we can take comfort in how that proves our ineffability, our rich complexity. It gives us an alibi for what we consume — it wasn’t my fault, the algorithm made me read that. Or we can defiantly redefine ourselves in opposition to the algorithm, letting it shape our identity in the inverse.

In any case, as the repository of so much information specific to users — both volunteered data and data collected surreptitiously — Facebook becomes increasingly central to how they conceive of their identity.


Entrepreneurial Subjectivity


This gives Facebook users all the more reason to tend to their profiles. This labor may at first be rationalized as personal expression or self-development, but social media’s saturation with metrics quickly makes the work far more entrepreneurial.

The numbers we rack up — of friends, of likes, of followers — make it clear how we can use social media to accumulate various forms of capital. These metrics provide believable proxies for your social capital (who you know, who listens to you) and cultural capital (what you like; the value of your tastes).

This work has become more and more mandatory, as it helps establish your reputation, likeability, hire-ability, social fitness, your sense of belonging.


Self-Consciousness and Self-Performance


The entrepreneurial mind-set that social media encourages brings with it a more intensified self-consciousness, a deeper awareness of how, thanks to Facebook, life is full of opportunities for strategic self-presentation. Such opportunities can begin to define the limits of what we conceive of as possible. If something doesn’t lend itself to becoming social-media content, we may not think of doing it in the first place.

Because social media supplies an ever-present audience that likes and reacts to everything, we have no sound reason not to always be performing for it. To the degree that we accept that social media is about our self-expression, this makes the self equivalent to and dependent on the impression we make on that audience. It doesn’t pre-exist these performances. On social media “realness” gets conflated with how many reactions we can generate. If we leave no impression on that audience, we have to begin questioning whether we exist at all.

Reconceiving life as a series of chances for strategic self-presentation in social media radically undermines the old idea of authenticity. “Authenticity” used to be spontaneous, disinterested feeling, not efforts to get attention. Authenticity was opposed to “selling out.” Now social media situate the self as always already “sold out,” in that self-promotion and attention-seeking have been normalized. Posting solely to get Reactions is not seen as aberrant or craven but natural. When we post content to social media, we are not necessarily trying to “be authentic” or “express ourselves” so much as we are communicating simply to see if people are listening (a point Bethlehem Shoals makes well here), to see if we are included. Luckily the algorithms are always paying attention to us.


Data Truth


Facebook’s algorithmic simulacrum of the self offers a compensation for any lost authenticity. It offers a new mode of authenticity through deeper forms of personalization: The more information you contribute, and the more of your behavior that is captured by surveillance, the more the platform can uniquely tailor an experience for you. This helps justify the way we are increasingly watched: all the surveillance reveals who we really are to ourselves.

In order to experience or inhabit that unique version of yourself, that “real identity” that your data has delineated, you have to keep using Facebook. To probe this self, we must consume and post more content, which triggers the algorithms to reveal that self’s contours. Hence, being “authentic” in social media essentially boils down to “post more,” “like more,” now “react more.” Ultimately, it doesn’t matter what you react to or what your specific reaction is: When the algorithms process it, it becomes “true to who you really are.”

The perfect reaction, the purest and most authentic reaction, has nothing to do with any content at all.
