Marginal Utility
By Rob Horning
A blog about consumerism, capitalism and ideology.

The Viral Self


I keep telling myself I should stop pursuing virality through critiques of virality. But then I’ll read something about how virality can be pursued and engineered as an end in itself — like this Wall Street Journal piece by Farhad Manjoo, or this from the Atlantic Wire about the site Viral Nova by Alex Litel — and I will be inspired to write a series of tweets about online circulation and its various measures displacing other forms of content with which to make up the self. And then I will track my Interactions page on the Twitter site to see how they did, and how I am doing.

That is to say, talk of viral content almost always makes me think of how it teaches us what it takes to engineer the self to go viral. What viral content reflects is the desire to put ourselves in broader circulation, to find irresistible tidbits of information that can be turned into trojan horses carrying a more significant piece of data: the proof of our social existence.

The thrust of the articles linked above is that aggregating viral content is a business model threatened by its own popularity. Renowned viral-content collector Neetzan Zimmerman worries his methods will be adopted by competitors; Viral Nova proves how launching a viral-content site takes few employees and little capital, thus threatening the potential of viral sites in general as an investment.

Litel writes:

Though ViralNova is the synthesis of a self-made millionaire’s years of experience in SEO-driven content, it also represents the volatility of internet-oriented media—someone without venture capital, publicists, or big-name journalists, effectively built their own immensely successful version of BuzzFeed or Upworthy. As much as those sites might market their proprietary technology and processes, ViralNova suggests it can be reverse engineered fairly quickly by anyone with a careful eye for emulation — which is to say everyone on the Internet. 

In other words, everybody can turn their own online presence into a viral-content startup.

The fact that virality can be “reverse engineered” without fear of shortages of viral-worthy content is interesting enough. “Amazing,” “heart-warming,” or “surprising” content is a matter of form, not extraordinary incident. These words trigger likes the way old novels triggered tears — you didn’t want to seem unfeeling, so you did it. But the fact that “everyone on the Internet” has become so good at “emulation” suggests the appeal of viral content is in the model it provides for self-memeification. Are we all starting to premise our self-worth on being as viral as Neetzan Zimmerman’s content? Is the pursuit of virality becoming hegemonic, as online “engagement” metrics that track viral content are also taken as reliable measures of self-esteem?

The point of viral content, in part, is not to learn about “little girls in Afghanistan who are better at skateboarding than you’ll ever be” or other such stories (which often turn out to be untrue) but to be the person who responds correctly to them and who tells someone else about them. The function of viral content is to permit vicarious participation in the emotions of the story, and vicarious participation in the social. The perceived virality — the popularity — of the content, illusory or not, elicits a richer emotional response in its consumer. Virality may function as disinhibition for a reader, authorizing fantasy and emotional investment, a suspension of disbelief that is sustained by apparent social support. Everyone is talking about this! In that sense it is “real” regardless of whether the details are accurate. The circulation of the story makes it a social fact.

This is nothing new; it is how gossip and tabloids have always worked. What is new is the veneer of statistical measurement behind labeling something viral, which — for the time being, anyway — makes its allure and efficacy as a vicarious prompt stronger. You can see how the story is circulating and participate in and consume it at that level. You can experience virality as an emotion.

Once everyone knows about Upworthy and can source viral material from it themselves, though, its thrill is gone. Virality settles into traditional mass-media reach. And Facebook’s engineers, whose algorithms underlie virality in practice, retool how their site’s newsfeed works, as Ezra Klein explains here, to thwart overpopular or overliked content. And so new viral-content providers must be uncovered, new ruses to evade filters and stoke consumers’ vanity devised. Viral content sites themselves have a viral life span.

But the “viral self” remains an aspiration, even when disparate viral-content sites flame out. Social media supply the infrastructure for a kind of everyday epidemiology of oneself, of one’s social infectiousness. This can become the self’s purpose, its anchor, its way of confirming itself to itself. When the viral self is off the timelines, out of the newsfeeds, making no push notifications happen for anyone, it is thrown into an existential crisis, seeking new bait to inject into the network.

The viral self also knows itself in how it signals certain attachments and commitments through re-sharing. (In his post, Klein highlights the sort of peer pressure stoked by manipulative “viral” headlines: “Are you such a jerk that you won’t take a moment to see the heartbreaking reason this mother had to abandon her dying baby? … what does it say to your friend who shared it if you pass over this post in order to like something about cupcakes?”) Social media in general give a stage on which to perform this sort of reader reception, which intensifies the experience and stakes of reading — or simply liking, which allows one to participate in the excitement of a story without bothering with reading beyond the headline. I know my reaction to something I am reading can be performed on Twitter, so I am sure to have a reaction, to method-act my response and see how it goes over. That is an added bonus I get from consuming content online, that I don’t get from perusing Life & Style in the grocery-store line. That performance can then circulate and substantiate me, as well as provide the immediate pleasure of vicarious involvement with the story and with the crowd I imagine all responding together.

Engineering virality thus threatens to serve as a moral practice in its own right. Being able to play upon various emotional triggers to generate virality and respond appropriately to them subordinates those emotions to a greater good, whose goodness is established by the feeling’s mobility, its trackable transferability — it’s less a matter of something being moving than the fact that it moves. The emotions that viral content provokes almost immediately become pretexts for establishing a point of contact with an audience, and that feeling of spreading connection serves as the master emotion, the root of “authentic” feeling. Having feelings is pointless if you can’t be seen having them. That was true in the late-18th-century heyday of sensibility spurred by the emergence of novels as a social medium; it’s true now with online social media, in amplified form. Having feelings is pointless if your performance of them is not as viral as the occasion that prompted them.

Virality thereby becomes the horizon beneath which occurrences no longer figure socially, no longer count for anchoring identity or asserting a self. If a retold experience doesn’t continue to circulate, the experience and the original retelling of it amount to nothing. They are not even false; they simply don’t matter.

Just as genuineness has proved irrelevant to viral content, it is also irrelevant to the viral self. The viral self is “postauthentic” in that it finds the truth of itself in ex post facto metrics rather than fidelity to some pre-existing ethic or value system. Its “authenticity” is an after-effect of having marshaled an audience that values the content it circulates. Being true to some unchanging interior spirit, being consistent despite the demands of an audience watching — these are not such relevant concerns anymore.

I think viral self works better than “microcelebrity” or “microfame” as a way to describe what people are doing on social media in trying to garner likes, followers, reblogs, and so on. I have never really liked the implications of those terms, which suggest a pursuit of some excess above ordinary social life, as if the pursuit of microfame expressed a dissatisfaction with one’s appropriate and “natural” level of social attention. What social media precisely do is unsettle our bearings for figuring out what a normal or “appropriate” amount of attention is supposed to be.

Virality, unlike celebrity, isn’t about exclusivity or personal talent; it’s about moving information continually. Wanting to go viral is not the same as wanting to become famous. Whereas a famous person has become a someone, a viral self is always in process of becoming, always proving itself. But it needs only to be circulating; it doesn’t need to climb.

Social media don’t facilitate the pursuit of fame any more than any other form of media does. Fame is still reserved for the few. Still, the architecture of social media normalizes making “engagement” the unit of social recognition, just as it is for advertising efficacy. (After all, in a consumer society, we aspire to be as popular as the products we are expected to crave.)  The ubiquity of virality makes it seem as though one can fit in only by spreading oneself indiscriminately. But what one might be trying to fit into is amorphous, all-encompassing — less a place or a community than a statistical curve, a limit approached asymptotically.

Social media sustain a measurement system that makes “more attention” seem always appropriate and anything less insufficient. If you are not growing your online presence, if your content is not circulating ever more widely, then you are failing. You are disappearing. You are not only not “microfamous”; you are not socially relevant. You are on the fringe, in danger of total exclusion. You are adding nothing to the social bottom line. You are not inspiring anybody.

But as long as others re-share what you share, your being is secure. You are rippling throughout the network, and you can hear the reassuring echoes.

Ego depleted

Consumerism is sustained by the ideology that freedom of choice is the only relevant freedom; it implies that society has mastered scarcity and that accumulating things is the primary universal human good, that which allows us to understand and relate to the motives of others. We are bound together by our collective materialism.

Choosing among things, in a consumer society, is what allows us to feel autonomous (no one tells us how we must spend our money) and express, or even discover, our unique individuality — which is proposed as the purpose of life. If we can experience ourselves as original, our lives will not have been spent in vain. We will have brought something new to human history; we will have been meaningful. (This is opposed to older notions of being “true” to one’s station or to God’s plan.)

The quest for originality collides with the capitalist economic imperative of growth. The belief that more is better carries over to the personal ethical sphere, so that making more choices seems to mean an augmented, bigger, more successful self. The more choices we can make and broadcast to others, the more of a recognized identity we have. Originality can be regarded as a question of claiming more things to link to ourselves and combining them in unlikely configurations.

If we believe this, then it seems like good policy to maximize the opportunities to make consumer choices for as many people as possible. This will give more people a sense of autonomy, social recognition, and personal meaning. Considering the amount of time and space devoted to retail in the U.S., it seems as though we are implementing this ideology collectively. The public-policy goals become higher incomes, more stores, and reliable media through which to display personal consumption. This supposedly yields a population that is fulfilling its dreams of self-actualization.

But when you add the possibility of ego depletion — the loss of well-being due to overtaxing the executive decision-making function of the mind; it’s explained in this 2011 New York Times piece by John Tierney on “decision fatigue” — to this version of identity, it no longer coheres. Trying to grow the self through exercising market choice simultaneously generates a scarcity of “ego” resources, which are depleted by this sort of reflexive approach to performing the self as a rational decision-maker above all. “When you shop till you drop, your willpower drops, too,” Tierney writes. The choices become progressively less rational, less representational, less “original,” and more prone to being automatic or being manipulated by outside interests, thus ceasing to be emblematic of the “true self.” Instead of elaborating a more coherent self through a series of decisions, one establishes an increasingly incoherent and disunified self that is increasingly unpredictable and illegible to others. We lose the energy to think about who we are and act accordingly, and we begin acting efficiently instead, with increasingly less interest in coherence, justice, consistency, morality, and so on. We want to make the “convenient” choices rather than the ethical ones, the ones that we believe reflect the truth about us.

This represents a serious threat to economic models hinging on rational actors and “revealed preference,” as behavioral economics has attempted to demonstrate. It underwrites the business model of nickel-and-diming consumers into submission, as airlines have recently taken to doing. But it also muddles the self that has been fostered by consumerism. It suggests that consumerism is a control strategy based on exhaustion, not fulfillment. If ego depletion leads to impulsivity, one can see how overloading individuals with opportunities to choose becomes a deliberate strategy to encourage exhaustion and render people easier to control. As decision fatigue sets in, morality and personal idiosyncrasies are overridden by the underlying desire for conservative efficiency, the nature of which can be anticipated.

The more options to optimize our experience that we are confronted with, the less resistance we can mount and the more likely it is we can be brought to the decision that companies want us to reach. Tierney points out that “decision fatigue leaves you vulnerable to marketers who know how to time their sales, as Jonathan Levav, the Stanford professor, demonstrated in experiments involving tailored suits and new cars.” Maximizing choices, then, doesn’t foster autonomy and creativity in self-realization; it does the opposite, reducing people to more or less uniform impulses. Complexity, elaborate customization possibilities, are a strategy for controlling people, not for giving them the opportunity to mirror their uniqueness in a particular commodity. Customization is a mode of control rather than liberation.

Advertising from this perspective is less useful information and more a series of temptations that constitute an assault to the integrity of the self, which depends on conserving the choices it makes to remain “authentic.” The threat of ego depletion also makes invitations to “share” the self and engage in social media into threats to that self as well, as long as it is understood to be a sum of communicative choices and not the ongoing process of confronting them. The endless opportunities social media afford for users to interact end up depleting the self (as it was once understood), resulting in the curious situation where the more one uses social media, the more desubjectified one becomes. The more you share, the less rational selfhood you possess.

Arguably, the “authentic self” consists precisely in these efficient choices we make when we are too tired to strategize about what we are choosing. As in a productivity guru’s dream, efficiency for its own sake would be the very basis of the real. But efficiency seems more like a nonidentity, what remains when subjectivity can’t be achieved. So the pursuit of convenience, too, is desubjectifying in this way; it removes the friction in our choices and deprives them of their identity-making basis. The more we crave convenience, the more we long to be no one in particular.

***

Ego depletion is a close conceptual cousin of the attention economy, which similarly presumes that attentional focus is a cognitive resource subject to scarcity and demanding economic management. Being overwhelmed with stimuli can become a mechanism of control as well. This is the idea behind Franco Berardi’s theory of “cognitarian subjectivation,” which, by forcing us to turn our emotional experience into productive labor, threatens to overwhelm the organic limits on our ability to perceive. “Today it is the social brain that is assaulted by an overwhelming supply of attention-demanding goods,” he argues. “The social factory has become the factory of unhappiness: the assembly line of networked production is directly exploiting the emotional energy of the cognitive class.” The economy is being organized to maximally harness the energy people spend making consumerist choices to create identity within consumerism’s code and thereby strengthen that code. The mode of exploitation is oversaturation.

In the future, we’ll have an economy based on the labor of sociality in social media networks that are subsumed by capital: that is, we’ll fight for attention on Facebook, etc., and that effort will be harvestable as data by the firms that own the networks, who will sell us tools derived from that data to abet our struggle for more attention.

As we articulate our identities within attention-depleting media, recognition increasingly becomes a zero-sum game; one’s recognized identity comes at the expense of another’s in that it steals attention away. Identity becomes competitive in these forums, further destabilizing it. The problem worsens as this recognition becomes not a mere matter of ontological security but economic viability, as digital labor (personal brand building, etc.) becomes a required prerequisite for other work, or the only kind of (precarious) work available. Karen Gregory, in a post about “hyperemployment,” points out that “the overattachment to digital devices … can be seen as learned behavior emerging from a poorly controlled Milgram experiment in which we are both the ones shocked by the persistent buzzing [of] our devices (‘opportunity’ calling) and the ones doing the shocking, giving in to invisible structures of authority that mark the evolving, ever increasingly digitally mediated labor landscape.” This leads to an “accompanying ‘administration’ of one’s life that takes the form of an endless to-do list.”

Fighting for recognition is also a kind of self-care-work, another aspect of the endless to-do list — unacknowledged affective labor that becomes more burdensome rather than less with the proliferation of forums ostensibly intended to help with it. In the name of efficiency, social media tend to individuate the collective work necessary for reproducing the social — for maintaining the connections and relations of care that make life livable. But this supposed efficiency makes the workload even more unmanageable and distributes it more unfairly even as it multiplies the work that seems to be necessary. In place of solidarity, social media prompt users to compete over attention, to divvy it up rather than share the responsibility for replenishing its store or easing the demands that deplete it. Social media can serve as an individualized accounting system for socially reproductive labor that encourages economizing efforts to shirk it. Social media turn sociality, a potentially replenishing respite, into a series of depleting decisions about how to manage the interaction.

And we compete not only with one another but also with consumer products contrived to soak up attention, which co-exist in the same space as crypto agents. (Candy Crush Saga is stealing your identity in more ways than one.) As Zygmunt Bauman argues in Consuming Life, we make ourselves into commodities to compete for attention in a consumer society, in which recognition is parceled out chiefly to commodities and evaluative criteria are derived from consumer markets. This makes broadcasting a “personal brand” self in social media appear a better bet than looking for recognition outside it, in terms not dictated by the platforms that facilitate and automate interpersonal acknowledgment (likes, etc.). But as engagement in social media leads to ego depletion, the pursuit of recognition becomes a dizzying race toward desubjectification — more decisions, more attention “spent,” more exhaustion, more autopilot. Berardi puts it this way: “Acceleration leads to an impoverishment of experience. More information, less meaning. More information, less pleasure.”

Information coming in at an overwhelming rate kills pleasure. It turns pleasure into processing, which leads directly to the “machine zone” that Natasha Dow Schüll associates with video gambling. This is why efforts to accelerate consumption should always be regarded with suspicion — they do not help us achieve more; they reduce us to efficiency-seeking drones. The remedy for the corrosive effects of convenience cannot be simply searching for something even more convenient.

The human’s capacity for rational choice, such as it is, is an alibi for capitalism precisely because it is the facet of humanity it strives to overcome. “Semiocapitalism,” to use Berardi’s term, has discovered that the best way to defeat rational choice is to solicit it as much as possible. Social media are on the same track.


Games of Truth

Foucault’s last two lecture series at the Collège de France, in 1982-83 and 1983-84 — published in English as The Government of Self and Others and The Courage of the Truth — offer a series of interpretations of ancient Greek texts to examine the relation of the “self” to public truth-telling. What did it mean to “know thyself,” as the Delphic oracle advised? What procedures guaranteed the truth of such knowledge? And why would telling the truth about the self be a precondition for having a self in the first place? Here’s how Foucault describes what he hoped to do in these lectures (poignantly, slipping into the subjunctive; he knew he wouldn’t get the project finished):

What I would like to recover is how truth-telling, in this ethical modality which appeared with Socrates right at the start of Western philosophy, interacted with the principle of existence as an oeuvre to be fashioned in all its possible perfection, how the care of self, which, in the Greek tradition long before Socrates, was governed by the principle of a brilliant and memorable existence, [...] was not replaced but taken up, inflected, modified, and re-elaborated by the principle of truth-telling that has to be confronted courageously, how the objective of a beautiful existence and the task of giving an account of oneself in the game of truth were combined …

The emergence of the true life in the principle and form of truth-telling (telling the truth to others and to oneself, about oneself and about others), of the true life and the game of truth-telling, is the theme, the problem that I would have liked to study [Feb. 29, 1984, lecture].

The parts of that passage that jumped out at me are the ones that reminded me of social-media practice. The archive social media compile of us could be seen as an “oeuvre to be fashioned in all its possible perfection”; it allows us to live with that ideal much more concretely in mind. Social media give us an opportunity to “confront courageously” the principles of truth-telling — how much to share, with whom, and with how much concern for our and others’ privacy — that are activated by the various platforms.

For Foucault, that aim of living a “beautiful existence” has not been understood as something that can be achieved through a passive documentation of what we’ve done — escaping reflexivity does not make life more beautiful or pure, as those who make a fetish of spontaneity insist. Instead, he argues that the “beautiful existence” came to hinge on playing “games of truth” that reveal the self to itself as courageous.

The “true life” is no longer given automatically to ordinary people as a reward for their ordinariness. We too must prove our lives are true, are real, are legitimate, to the audiences we marshal on social media. That is, we must demonstrate the productive value of our uniquely wrought subjectivity to garner social recognition; we have to build the community (that once was geographical) as an online audience and hold it together by performing for it perpetually. The truth test becomes a way to ascertain one’s own reality, to register a “true” or “real” self that exists apart from the flux of contingencies that seem to shape us in real time. A self is not a sum of content; a self is a practice.

But what is that “productive value”? What sort of performances, or “games of truth,” reveal it? How is it translated into “status”? What defines that “truth”? How does our notion of the truth about the self modulate to fit the sorts of truths social media are optimized to confirm and disseminate? When does “sharing” — adding to collective knowledge — become “trolling,” a zero-sum challenge over who can control what is regarded as the truth? Is there any way to keep those concepts cleanly separated?

At the very least, these questions help reframe what is sometimes dismissed as merely narcissistic attention-seeking as instead the attempt to live the truth courageously as a perpetual provocation of others, in the mode of the ancient Cynics. The Delphic oracle told Diogenes not to “know thyself” but to “deface the currency.” What are our social media platforms asking us to do?

***



Throughout the lectures, Foucault is chiefly concerned with the concept of parrhesia, a mode of plain-speaking truth marked by provocation. It signals an individual’s willingness to tell the truth as that individual perceives it, with a minimum of rhetorical flourish, in the face of whatever customary, tactical, or ideological “truths” might be circulating at a given moment and whatever force might be deployed to suppress dissenting views. Because it is already supposed to be direct, “natural” and without figure, parrhesia’s truth is not measured in terms of its clarity or fidelity to what it represents. Instead, parrhesia indexes the truth content in an utterance to the risk incurred in speaking it. In the following passage, from the February 1, 1984, lecture, Foucault contrasts the parrhesiast with the “technician” or teacher, for whom truth is indexed to “filiation,” the way speaking can build a relationship through shared knowledge:

We have seen that the parrhesiast, to the contrary, takes a risk. He risks the relationship he has with the person to whom he speaks. And in speaking the truth, far from establishing this positive bond of shared knowledge, heritage, filiation, gratitude, or friendship, he may instead provoke the other’s anger, antagonize an enemy, he may arouse the hostility of the city, or, if he is speaking the truth to a bad and tyrannical sovereign, he may provoke vengeance and punishment. And he may go so far as to risk his life, since he may pay with his life for the truth he has told. Whereas, in the case of the technician’s truth-telling, teaching ensures the survival of knowledge, the person who practices parrhesia risks death. The technician’s and teacher’s truth-telling brings together and binds; the parrhesiast’s truth-telling risks hostility, war, hatred, and death. And if the parrhesiast’s truth may unite and reconcile, when it is accepted and the other person agrees to the pact and plays the game of parrhesia, this is only after it has opened up an essential, fundamental, and structurally necessary moment of the possibility of hatred and a rupture.

Truth, if parrhesia is the reigning modality for it, is a matter of breaking relationships, not building them. That power, that risk, marks its truth as authentic. The speech that builds relationships is merely practical or performative. The merely performative, Foucault suggests, does not constitute the self so much as reaffirm pre-existing status — the ability to make a performance and have it be understood by the audience as such. Parrhesia, however, “is a way of opening up this risk linked to truth-telling by, as it were, constituting oneself as the partner of oneself when one speaks, by binding oneself to the statement of the truth and to the act of stating the truth.”

That seems like a hypernuanced way of saying that parrhesiastic discourse makes a claim to having a self, a self fashioned through committing to certain practices rather than one simply given by the social role one adopts. Since parrhesiastic statements must be loaded with enough affect to make auditors potentially kill over them, one must have some extremely intense material to talk about in order to become a self through these practices. Cultivating a self means cultivating confrontational or controversial things to say, and access to audiences that will be startled or affronted by them. One must seek out “explosive truth” about the world or about others, or contrive situations to be able to manufacture such potent truths. In more contemporary terms, one would have to create “drama,” which gives people a chance to speak “risky” truths and thereby substantiate themselves and develop status, rather than merely drawing on pre-existing status to make public “performances.”

Performance reinforces the sanctity of the truth-telling scene as it has already been set up socially, and all the roles established and agreed upon by all the actors. Authority is pre-distributed and reinscribed by what is said by the performers. “Performativity,” then, is about preserving the status quo rather than challenging it. Though it can sometimes seem that revealing the “natural order” as a set of performances will undermine that order, the fact that performances get “naturalized” isn’t the source of all their power. It may be that when everyone knows certain roles are being enacted, it makes a performance more binding, more coercive, harder to conceive of breaking.

Parrhesia attempts to undo those roles in part through the force of exposure; one makes statements meant to reveal how people “really” are and recast their adoption of roles as a species of hypocrisy. Parrhesia bases its claim to truth in trying to point behind the curtain and thus throw the roles into confusion, reveal how they are masks or how people are playing multiple roles at once. (Erving Goffman describes several tactics for this disruption of “front region control” in The Presentation of Self in Everyday Life.)

Exposing the “inauthenticity” of the powerful doesn’t automatically nullify their power, but the force of parrhesia depends on asserting that it should. Such discourse propounds that it is better, more truthful, to see past the performativity that sustains the existing order, even if what lies beyond it is the void. In the January 12, 1983, lecture, Foucault says,

In a performative utterance, the given elements of the situation are such that when the utterance is made, the effect which follows is known and ordered in advance, it is codified, and this is precisely what constitutes the performative character of the utterance. In parrhesia, on the other hand, whatever the usual, familiar, and quasi-institutionalized character of the situation in which it is effectuated, what makes it parrhesia is that the introduction, the irruption of the true discourse determines an open situation, or rather opens the situation and makes possible effects which are, precisely, not known.

“Parrhesia does not produce a codified effect,” Foucault says. “It opens up an unspecified risk.” In contemporary terms, this might be seen as the troll’s wager.

***


In subsequent lectures, Foucault links parrhesia to Cynicism, an ancient mode of ur-trolling. As Foucault’s March 14, 1984, lecture discusses, Cynics liked to rub the public’s nose in humankind’s animal nature, and tried to live a true life through rejecting all social conventions in the most public way they could manage — to “live the truth” through unrelenting public self-documentation.

Cynic courage of the truth consists in getting people to condemn, reject, despise, and insult the very manifestation of what they accept, or claim to accept at the level of principles. It involves facing up to their anger when presenting them with the image of what they accept and value in thought, and at the same time reject and despise in their life … In the case of Cynic scandal — and this is what seems to me to be important and worth holding on to, isolating — one risks one’s life, not just by telling the truth, and in order to tell it, but by the very way in which one lives. In all the meanings of the word, one “exposes” one’s life. That is to say, one displays it and risks it. One risks it by displaying it; and it is because one displays it that one risks it. One exposes one’s life, not through one’s discourses, but through one’s life itself.

In other words, talk is cheap. One must live the truth, and you know that is the case if others regard you as a scandal. Foucault explains how Cynic practice pushes honor and truth-telling to an extreme at which its blunt, radical honesty becomes indistinguishable from a “shameless life”: “The kunikos life is a dog’s life in that it is without modesty, shame, and human respect. It is a life which does in public, in front of everyone, what only dogs and animals dare to do, and which men usually hide.” Offering that visceral personal stake itself — opening oneself to humiliation — is key; the exposure must be intrinsically inseparable from the deed. Social media work just like that: they make all exposures deeds and vice versa.

Social media concretize the gaze of others necessary to live the ethical dog’s life — the true life under the watchful gaze of the scandalized masses. Foucault stresses that everyday life in all its banality must be observed and judged for the Cynic’s approach to truth to succeed.

For the Cynics, the rule of non-concealment is no longer an ideal principle of conduct, as it was for Epictetus or Seneca. It is the shaping, the staging of life in its material and everyday reality under the real gaze of others, of everyone else, or at any rate of the greatest possible number of others. The life of the Cynic is unconcealed in the sense that it is really, materially, physically public.

So all those photos of breakfast, all those check-ins, all those indiscreet selfies, have an ethical function. “There is no privacy, secret, or non-publicity in the Cynic life,” Foucault says, and that sounds pretty familiar — like a lot of the complaints about ubiquitous social media. But this is not “performance,” in Foucault’s terms, but parrhesia. Systematically mistaking it for mere performance oversimplifies what is happening.

Typically social media use is seen as potential identity or reputation construction, narrating cultural capital into existence, but “exposing” life is not always the same as making an identity, in the sense of building a reputation. It can also be a way to subordinate or even sacrifice reputation for truth: Sharing can be simply volunteering the self for ridicule, purging, nullification, ritual flaying — self-branding of a different kind. It’s why people sign up for demeaning reality TV shows, as Wayne Koestenbaum suggests in Humiliation. It’s part of why we sign up for Facebook. Moments of humiliation, Koestenbaum notes, “may be execrable and unendurable” but are also “genuine” in a “world that seems increasingly filled with fakeness.” Social media neatly increase that feeling of the world’s phoniness while providing a means for the sort of self-exposure that combats it. As more behavior seems inauthentic and “performative,” we have greater need to expose ourselves and have our own authenticity vindicated through the embarrassment this causes us.

This can be seen as a fulfillment of the Cynic’s injunction to “alter the currency.” Foucault emphasizes the ambiguity and open-ended potential of this ongoing, proto-Nietzschean demand to “revalue” value, including the purpose and value of attention. Social media use is a way of continually modulating attention’s value according to whether it feels more sustaining to spend it or to be ravished by it, affirmed by it. One can gain illusory control over whether one is the subject granting attention or the object receiving it, when in fact we are always both — never more so than within social media. “Within the accepted humiliation,” Foucault claims, “one is able to turn the situation around, as it were, and take back control of it.”

Social media, by offering the dog’s life, afford a quick, straightforward route to integrity, at least as the Cynics understood integrity. Foucault says that “the Cynic dramatization of the unconcealed life therefore turns out to be the strict, simple, and, in a sense, crudest possible application of the principle that one should live without having to blush at what one does, living consequently in full view of others and guaranteed by their presence.” That is reminiscent of Mark Zuckerberg’s comments about integrity, or Google CEO Eric Schmidt’s comment that “if you have nothing to hide, you have nothing to be afraid of.” Using proprietary tech platforms to conduct your ascetic radical disclosure and unveil your latent integrity also happens to be highly lucrative for tech companies.


***

In its determination to uncloak life and expose truth, parrhesia may also be likened to transgressive art practice, another tried and true mode of trolling. In the February 29, 1984, lecture, Foucault connects performance to “the consensus of culture” and parrhesia to art:

The consensus of culture has to be opposed by the courage of art in its barbaric truth. Modern art is Cynicism in culture; the cynicism of culture turned against itself. And if this is not just in art, in the modern world, in our world, it is especially in art that the most intense forms of a truth-telling with the courage to take the risk of offending are concentrated.

It is almost a truism that artists “take risks” and tell untoward truths that ordinary culture refuses to express or tries to conceal. Artists become society’s conscience. Whether or not that’s true, social media, by democratizing parrhesia, democratize the opportunity to conceive of oneself as an artist in that way, someone whose life itself is a critical practice and an expression of beautiful truth.

But do social-media platforms, by making audiences more readily available and making some forms of self-documentation more automatic, strip out the intentionality that makes such self-witnessing critical, that makes it constitutive of a courageous self? That intentionality once marked conceptual artists (or performance artists or any proto-hipster living artist as a lifestyle) as transgressives who saw truth as risk. How much risk is left in it when it is automatic, ubiquitous? The platforms may merely capitalize on that frisson of courage and risk to make using social media more compulsive while containing parrhesia’s subversive potential.

Not everyone using social media, obviously, is a Cynic or a confrontational artist (at least not all the time). Most online communication is conventionally performative (reiterating pre-existing status and stable emotional bonds — routinely liking the status updates of friends and “keeping in touch,” etc.) if not phatic (simply announcing one’s existence, in a kind of mic check). I don’t think people using Facebook risk very much — not enough to keep them from using the site to try to help manage their general anxiety about social inclusion. But I also think users do believe at some level that they are transforming their lives into an “artwork” worthy of an audience by using social media, and that the platforms encourage them to believe they can and should systematically enlarge the audience that their “brilliant and memorable existence” is appropriate for. At first, friends. Then “Friends” — people you know only in social-media networks. Then, anyone with enough Friends in common.

In this way, Facebook use begins to bleed over into the sort of social-media interaction that is more unpredictable in the ways that Foucault is outlining with respect to parrhesia, and thus more compulsive, more addictive. Whereas performative discourse takes the self as static and the exchanges it generates as predictable, socially scripted, parrhesia puts the self into play, makes it a stake in an unpredictable game, makes it growable. Parrhesia underwrites the slot-machine-like aspects of seeking unanticipated microaffirmation through social media, of trying for jackpot “virality” that suddenly swells the self by broadening its circulation in the network. The hoard of inner experience — turned into signifiers of the self in the social-media system — then takes on more weight, feels more substantial for the duration of that viral flare. One’s online archive, the whole of one’s timeline, suddenly seems relevant, in play. It might even seem more true, a prelude to destiny. (I wrote a lot more about the homology between social media and machine gambling in this post.)

Since parrhesia is where the compulsion lies, social media platforms may be engineered to simulate it: They can be designed to stimulate drama and confrontation (Twitter fights, flame wars — remember those? — and context collapses, etc.) as well as the routine “performative” grooming of established bonds. Often these confrontations play on the status asymmetry of the parties involved; social media, by bringing people of varying status together in the same discursive space, set the stage for “games of truth.” Foucault argues that for parrhesia to be possible, you need to be addressing a tyrant who has power over you, to establish the stakes, the danger. By taking down that tyrant in a public forum — and social media have become well-suited to this — one secures one’s own relationship to “truth”; one wins the prize of authenticity.


***

The phrase “game of truth” points to the idea that truth and authenticity don’t simply exist but are made. Parrhesia is not about expressing any kind of “objective” truth at all: “The statement of the truth,” Foucault notes, “does not open up any risk if you envisage it only as an element in a demonstrative procedure.” Instead, it hinges on offending or troubling higher-status others in their sense of who they are: “the person who tells the truth throws the truth in the face of his interlocutor, a truth which is so violent, so abrupt, and said in such a peremptory and definitive way that the person facing him can only fall silent, or choke with fury, or change to a different register.” In other words, parrhesia can be seen as a fancy word for privilege shaming. It’s basically “speaking truth to power” — with power being not merely a matter of the explicit power to dominate over others but also the power to constitute oneself in a publicly credible way, the power of habitus, the power to make and control the knowledge about oneself rather than being subject to others’ determination, as mere information. The “truth” here is not a litany of facts but a value claim for the self. Claiming an ethos of your own requires rhetorical combat.

The resulting confrontation can be zero-sum: Autonomy over one’s identity, in the “game of parrhesia,” can seem to come at the expense of the person you confront. The truth-teller gains a self, and agency over it, measurable only in terms of the addressed person’s loss of security. There is no other place where it is legible — no other reference point for measuring achieved “self.” (It is not discovered within a person.) You know you have succeeded in telling truths (and gained status or ontological stability) only if the other is discomfited, seems thrown into confusion. This is the point of trolling up.

Invoking the very existence of a “true self” is a tactic that lower-status people can use to force higher-status people into a truth game with them — or at least marshal an audience that will put them on the same playing field by default. So when people (like me, sometimes) argue that social-media use encourages us to be calculating and inauthentically reflexive, this argument is itself an attempt at parrhesia — a status claim. I’m still real, are you?

To argue that people jeopardize their “real” self in using social media to make a personal brand is to try to stage a truth game, to interpellate people into an authenticity competition. That argument takes people’s ordinary performative discourse online and scrutinizes it as parrhesia. Thus, denying others the right to exist in different contexts, to have different social roles, is always an option available to “trolls” and other people seeking to garner a stronger sense of self. Staging a context collapse starts a truth game in which the lower-status person has everything to gain and relatively little to lose.

So parrhesia in social media yields a self moored by zero-sum games of power and delineated by measurable evidence of influence. This seemingly stable set of procedures for making a self — for playing the game of truth — is the consolation for the dismal, anxious, hyperreflexive sort of self the procedures actually yield. It may be that the only thing more intolerable than an “inauthentic” self is being at a loss for coherent procedures for “growing the self.” Of course, that anxiety can be historicized as a reflection of neoliberalism, and of the need to be entrepreneurial about one’s personal brand to survive.

Social media provide the sort of metrics to make the game of parrhesia more playable, more creditable — they give a scoreboard for the sort of ethos that emerges from truth games — but these same metrics might also help keep parrhesia at bay by revealing status and allowing high-status people to avoid interaction with lower-status people. Accusations of inauthenticity may be simply irrelevant when cast at elites whose massive fame (or wealth) insulates them from ontological insecurity. It may be that social media facilitate not a confrontation of high against low, but of low against lower in a perpetual unfolding of petty drama in social media, to the social-media companies’ benefit, while the high-status people remain exempt, manifesting in social media mainly to perform their essential inaccessibility.

The potential for parrhesia in social media is thereby circumscribed by some of the same affordances that make the parrhesia possible. Worse, the parrhesia in social media may set individuals against one another in pointless struggles for authenticity while precluding them from uniting politically to fight for shared goals against those remote elites. The satisfaction of those games, the “self” and “truth” that emerge from those compulsions, is another species of “cruel optimism,” to use Lauren Berlant’s phrase, in that it offers formal rituals that make the present tolerable or even pleasurable while altering nothing about a general condition that makes people feel overburdened, depressed, precarious, excluded, humiliated. There is a pale satisfaction in making a limited truth in the moment, even if it has no effect on the distribution of power or the way one is known by society.

The taste of circulation

Back in my blogging heyday, probably circa 2008 or so, I used to run through my RSS until I read something that prompted me to start thinking my way toward some sort of proposition. Then I would start working on a post. First I would write a brief summary of what caught my interest in the article, apropos of nothing (one of the earliest comments I can remember receiving on my blog was along the lines of “Nice blog. But why don’t you try starting a post with some other setup besides ‘In BLAH, so-and-so wrote BLAH.’ ”). And then I would start to try to articulate my response, in paragraphs that I would reorder and revise throughout the day, sometimes augmenting them with links to relevant articles I would happen on as the day progressed. It was easy to find such links, as the post-in-progress gave me a clear focus as I waded in my infostream. Eventually I would work my way toward a culmination, or would write a sentence that I could recognize as how the post should end. Then I would reshape the whole thing with that sense of the ending in mind, give it a proofread, and post it. Then I would go back to reading randomly, looking for a new inspiration to focus me.

I don’t work that way anymore. Now, when I hit upon an article that starts me thinking, I excerpt a sentence of it on Twitter and start firing off aphoristic tweets. I don’t worry about ordering my thoughts into a sequential argument, or revising my first impressions much. I don’t try to build toward a conclusion; rather I try to draw conclusions that seem to require no build-up, no particular justification to be superficially plausible. And then, more often than not, I will monitor what sort of reaction these statements get to assess their accuracy, their resonance. At best, my process of deliberation and further reading on the subject gets replaced by immediate Twitter conversations with other people. At worst, tweeting pre-empts my doing any further thinking, since I am satisfied with merely charting the response.

Either way, rather than an essay, I end up with something like this to show for an afternoon’s productivity.

 

I’m not sure whether this is an improvement in my critical practice.

But I am happy about this:

 

Hollow Inside

As part of my ongoing interest in contemporary pop-sociological takes on the 1960s, I read historian Theodore Roszak’s The Making of a Counter Culture (1969). Its patronizing tone (“it is the young who have in their own amateurish, even grotesque way, gotten dissent off the adult drawing board”) and its platitudes about youth’s fundamental yearning for spiritual authenticity rather than political change make for tedious and pedantic reading, but the book usefully illustrates how the social threats critics saw in technology have shifted.

Like many critics of the period, Roszak was chiefly worried about the “technocracy,” the emergence of a totally administered society run by hyperrational engineers and bureaucrats, yielding a de facto planned economy that allows no one any genuine autonomy, impetus, or spontaneity. Roszak derives this from Marcuse, but John Kenneth Galbraith’s The New Industrial State seems more like what he has in mind: megacorporations working in tandem with the state to manage growth, impose mass-media-directed social conformity, and nullify dissent or genuinely free expression. His anticipation of the fully integrated industrial state in which totalized organization becomes an end in itself leads him to make some ludicrous statements about capitalism, like this one: “in our society, profit taking no longer holds its primacy as an evidence of organizational success.” If that was ever the case, it certainly isn’t now; neoliberal reform put profit and risk management back at the heart of capitalist society, such that even charities are supposed to be run like a business.

Neoliberalism reshaped the contours of that sort of “total system” — industrial policy was felled by globalization, as was the truce between labor and capital. Precarity replaced full employment as a guiding ideological principle for economic policy. Roszak’s view that economic security could be “taken for granted” by 1960s youth, thereby freeing them to fight for a more spiritually fulfilling society, seems especially dated. His diagnosis that the period of adolescence is being perpetually prolonged seems correct, but this is not because the young are “infantilized” by abundance and ease and tolerant, “pampering” parents, as he suggests. Instead, prolonged adolescence reflects how consumption (adolescence’s chief concern) has become economically productive.

Part of the “postindustrial” emphasis on logistical flexibility and just-in-time production is that the consumer-goods market became capable of accommodating a superficial diversity of products. This opened a “creative” opportunity for consumers to feed into the customization of goods, which could be relied upon to confer ersatz uniqueness on the consumer.

Deleuze’s essay about the control society gives a good overview of the effect on subjectivity this shift from the “new industrial state” to neoliberalism has had — basically he extends the implications of Marcuse’s “repressive tolerance” and combines them with Foucault’s notion of “governmentality.” In control societies, cybernetic systems don’t make subjects into robotic-seeming automatons; they make subjects into “dividuals” whose every move is monitored in observatory networks to capture their “innovative” deviations. Control societies impose flexibility rather than conformity, and make creative adaptation on the part of the individual mandatory. (Be an entrepreneur! Yay!) You are still a cog in the machine, but a self-designed one.

Roszak’s fear of what he calls “objective consciousness” — a sort of technologically minded narrowness that eradicates the humanist pleasures of culture — thus seems misplaced. We haven’t been “depersonalized” but hyperpersonalized by the ways technology has abetted consumerism. So his concerns about people becoming automatons in the following passage, an elaboration on a quote from Jacques Ellul about how technological expert systems are “converging” on the human subject, seem a little misdiagnosed:

The final convergence [Ellul] predicts may not have to postpone its completion until the technocracy has acquired mechanisms and techniques that will replace the human being in all areas of our culture. Instead, we may only have to wait until our fellow humans have converted themselves into purely impersonal automatons capable of total objectivity in all their tasks. At that point, when the mechanistic imperative has been successfully internalized as the prevailing life style of our society, we shall find ourselves moving through a world of perfected bureaucrats, managers, operations analysts, and social engineers who will be indistinguishable from the cybernated systems they assist.

This prediction misses the possibility that the cybernated system might demand not conformity but constant innovation within constrained categories — the mechanistic imperative could be: CREATE! Creativity in personal expression is not immune to being bureaucratized. That is what identity-driven social media are about. Social media’s compulsivity is the “successful internalization” of the “mechanistic imperative” as the “prevailing life style of our society.” The passage continues:

Already we find these images of internally deadened human beings appearing in our contemporary novels and films. Dispassionate lovers, dispassionate killers fill the movies of Godard, Truffaut, Antonioni, Fellini with their blank gaze and automatized reactions. So too in the absurdist plays of Harold Pinter and Samuel Beckett we find the logical—or rather psychological—conclusion of life dominated by ruthless depersonalization.

The sense that we are internally dead has taken a new form. The new fear is not that people will all become the same “blank” dispassionate drones — people’s identities are more variegated and articulated than ever — but that they will become indistinguishable from their social-media profiles, the mark of a systematized personality. The inner life is barren not because systems thinking has rendered it predictable but because social media have sucked it all out and externalized it.

Here we have the world of completely objectified human relations: people hopelessly locked off from one another, maneuvering their isolated In-Heres around and about each other, communicating only by their externalized behavior. Words become mere sounds, concealing more than they convey; gestures become mere physiological twitches; bodies touch without warmth. Each In-Here confronts the others Out-There with indifference, callousness, exploitive intention. Everyone has become a specimen under the other’s microscope; no one can any longer be sure that anyone else is not perhaps a robot.

Roszak’s prediction of a Blade Runner world where sociality is reduced to our administering Turing Tests on one another seems a bit off, in part because that robotic affect of objective dispassion did not come to pass. But what is social media if not an ongoing effort to prove our humanity to everyone? To prove we exist? The “world of completely objectified human relations” that Roszak feared has established itself, not through deadening Borg-like integration into a homogeneous machine but as the overlay of voluntarily adopted social media over everyday life. Roszak’s description of “people hopelessly locked off from one another” seems the precise opposite of the universe of ubiquitous connectivity we live with, but this connectivity makes the isolation more complete, positioning individuals as networked nodes who link with others without being capable of forming any kind of collective subject.

On Facebook “everyone has become a specimen under the other’s microscope,” and the Horse ebooks fiasco suggests that we are more than ready to mistake bots for humans and vice versa. The “automatized reactions” Roszak anticipated have come to pass, but the quasi-rational scripts we unthinkingly enact appear to us to be our taking advantage of mechanisms provided to us to assure our freedom of expression. Compulsive sharing in social media is a kind of automatized reaction that seems more like autonomous expression, regardless of how constricting the facilitating platform might be.

But it’s probably misleading to talk about what is “robotic” about our social media behavior. It makes more sense to describe what Roszak calls robotic as “calculating.” Social media confront users with a lot of apparent evidence of everyone else’s strategizing and social positioning — their “exploitive intention.” On social media, users are certainly “communicating only by their externalized behavior” — indeed the sites propose that all of identity must be externalized in order to be authentic. By instrumentalizing tokens of identity so thoroughly, social media deny the relevance of other people’s interiority, making that interiority seem somewhat implausible. Why would you think something you couldn’t use? Who among us is deeper than the complexities and contradictions they have layered into their social-media presences?

One’s “depth” is in the externalized text about the self, not in some inexpressible strata of the psyche. But the interpretive tools used to chart this depth depend on the tenets of economistic rationality: a matter of decoding incentives, tracing the attempts to network effectively and claim various forms of authority, make the case for one’s cultural relevance, etc. It becomes harder to imagine we won’t be able to figure out someone else’s “real” motives by sifting through all the social-media clues; it becomes harder to respect the possibility of mystery inhering in some ineffable consciousness within.

As was pointed out by many commentators, people enjoyed the fantasy that the Horse ebooks account was driven by an algorithm and not by human intentionality, which gave them a break from the hermeneutical demands that social media consumption ordinarily imposes on users. You could suspend the need to figure out Horse ebooks’ motives for posting and claim any intentionality you saw in it for yourself. In that way, we remain the only relevant acting subject, as Michael Barthel noted here.

When we are dealing with everyone else’s posts on social media, by contrast, we have to expend considerable effort figuring out what their angle is in posting it, what kind of audience they want us to be, and whether we are going to react as they intended, grant them the social and/or cultural capital we have to assume they are looking for. A bot can seem truly “authentic” in our current climate because it is by definition incapable of having a covert motive; it is as disinterested as any Kantian could hope for.

We didn’t just want Horse ebooks to be a bot; we secretly want everyone else on social media to be bots.

When we examine each other on social media, we are not afraid to discover the other is a bot; we’re continually hoping for it. If the other proves to be a robot, we have no “infinite responsibility” toward it; instead we can just assimilate it to our own personal identity strategies. Roszak’s mistake is in assuming that people would be generally dismayed at the prospect of being “hopelessly locked off from one another” rather than feeling empowered by it.