Marginal Utility
By Rob Horning
A blog about consumerism, capitalism and ideology.

Simple and Plain


Today is Elvis Presley’s birthday. He would have been 80. Most people accept that he died in 1977, at the age of 42, which means I am older now than he ever was, a fact I have a hard time wrapping my head around.

I’m currently reading Careless Love, the second volume of Peter Guralnick’s biography of Elvis, and it is bringing me down. It’s about how fame was a collective punishment we administered to Elvis, which he would not survive. Fame allowed him to coast along when he should have been stretching himself; like a gifted child praised too much too soon, it made him incapable of coping with challenges. Fame allowed his manager, Colonel Parker, to construe Elvis’s talent as a cash machine. Parker encouraged in Elvis a zero-sum attitude toward his art, so that he demanded as much money as he could get for output as superficial as they could make it, as if the shallowness implied savings, a better bargain from the forces who commercialized him. Fame transformed Elvis into a kind of CEO who inhabited his own body as if it were a factory, a capital stock, on which an enormous and ever-mutating staff relied for their livelihood. As a consequence, fame isolated him completely. His friends, no matter how much they loved and respected him, remained a paid entourage whom he could never completely believe actually loved him for real. “He constructed a shell to hide his aloneness, and it hardened on his back,” Guralnick writes in the introduction. “I know of no sadder story.”

I first got into Elvis after stopping at Graceland, his home in Memphis, during my first road trip across the U.S., in 1990. I knew very little about him, just what you sort of absorbed by osmosis from the culture. Elvis impersonators were probably more salient than Elvis himself at that point. My grandmother, I remember, had some of his later records: Moody Blue; Aloha From Hawaii via Satellite. I wanted to stop at Graceland because I thought it would be campy fun; I wanted to re-create the scene in Spinal Tap when they experience “too much fucking perspective” at Elvis’s graveside.

But Graceland was surprisingly somber, straddling the line between pathos and bathos, never letting me take comfort in either territory. It didn’t seem right to laugh when confronted with the meagerness of the vision of someone who could have had anything but chose Naugahyde, thick shag rugs, and rooms equipped with dueling TV sets. And it was genuinely humbling to recognize the desperation in it all, the dawning sense that Elvis had nowhere to turn for fulfillment and had none of the excuses we have (lack of time and resources, lack of talent) to avoid confronting inescapable dissatisfaction head on.

In one of the stores in the plaza of gift shops across the street from Graceland, I bought a TCB baseball hat and cassette of Elvis’s first RCA album, the one whose design the Clash mimicked for London Calling.


Every time it was my turn to drive, I put the tape on; listening to “Blue Moon” while driving through the vacuous darkness of Oklahoma was the first time I took Elvis seriously as a performer, the first time I heard something other than my received ideas about him. Then, like a lot of music snobs, I got into the Sun Sessions and the other 1950s stuff and declared the rest of his career irrelevant, without really knowing anything about it. In recent years, I have overcorrected for that and listened mainly to “fat Elvis” — the music he made after the 1968 Comeback. I’m amazed by moments like this, a 1970 performance of “Make the World Go Away.” Wearing a ludicrous white high-collar jumpsuit with a mauve crypto-karate belt around his waist, he mumbles a bit, tells a lame joke about Roy Acuff that nobody gets, saunters over to the side of the stage to drink a glass of water while the band starts the saccharine melody, then out of nowhere hits you with the first lines, his voice blasting out, drawing from a reserve of power that quickly dissipates. Then he skulks around the stage, visibly antsy, as if trying to evade the obvious relevance of the song’s lyrics to his sad, overburdened life.

I never paid any attention to 1960s Elvis, but reading through Guralnick’s dreary, repetitive accounts of Elvis’s month-to-month life in the 1960s, when he flew back and forth mainly between Memphis, Los Angeles, and Las Vegas as he accommodated a relentless film-production schedule — he made 27 movies from 1960 to 1969 — fills me with an urgent desire to somehow redeem this lost era of his career, to study it and find the obscured genius in it, to rescue it through some clever and counterintuitive readings of his films or the dubious songs he recorded for them. I just don’t want to believe that Elvis wasted the decade; I don’t want to accept that talent can indeed be squandered; I want to believe that instead it finds perverse ways to express itself even in the grimmest of circumstances. But this was an era when he was cutting material like “No Room to Rhumba in a Sports Car” (Fun in Acapulco), “Yoga Is as Yoga Does” (Easy Come, Easy Go), “Do the Clam” (Girl Happy), “Queenie Wahine’s Papaya” (Paradise, Hawaiian Style), and “Song of the Shrimp” (Girls! Girls! Girls!). I’m not sure it’s all that helpful to pursue a subversive reading of Clambake. What there is to see in Elvis’s movies is doggedly on the surface; as Guralnick makes clear, these films were made by design to defy the possibility of finding depth in them.

At best, a case can be made for appreciating Elvis’s sheer professionalism in this era, his refusal to sneer publicly at material far beneath him. Sure, he was on loads of pills, and the epic-scale malignant narcissism of his offscreen behavior was establishing the template for all the coddled superstars to come. But he wasn’t a phony. If he was cynical, it was a hypercynicism that consisted of an unflaggingly dedicated passion for going through the motions. Guralnick describes Elvis in some of these films as being little more than movable scenery, a cardboard cutout, but he is a committed cardboard cutout. A bright empty shell with a desultory name and job description (usually race-car driver) attached, Elvis wanders through an endless series of unconvincing backdrops, reflecting back to us the cannibalizing effects of fame, inviting us to try to eat the wrapper of the candy we already consumed.

Tim Burton’s “Big Eyes”

Tim Burton’s Big Eyes makes a strong case that Walter Keane was a first-order marketing genius and that his wife Margaret, whose paintings he appropriated and promoted as if they were his own, used his marketing talents up until the moment she could safely dispense with them. Given that Margaret Keane apparently cooperated with the making of Big Eyes (she painted Burton’s then wife Lisa Marie in 2000, and I think she appears at the end of the film alongside Amy Adams, who plays her), this seems sort of surprising. On the surface, the movie tells the story of her artistic reputation being rightly restored, but that surface is easily punctured with a moment’s consideration of the various counternarratives woven into the script. Then we are dealing with a film about a visionary who turned his wife’s hackneyed outsider art into one of the most popular emblems of an era and who has since been neglected and forgotten, despite inventing art-market meta-strategies that have since become ubiquitous. The movie seems to persecute Walter because the filmmakers believed it was the only way they could get us to pay enough attention to him to redeem him.

I went in to see Big Eyes expecting a cross between Burton’s earlier Ed Wood and Camille Claudel, the biopic about the sculptor whose career was overshadowed by her romantic relationship with Rodin, whom she accused of stealing her ideas. That is, I thought it would be about how female artists have struggled for adequate recognition, only played out in the register of kitsch pop art. I figured Burton would try to capture something of whatever zany, intense passion drove Margaret Keane to make her “big eye” paintings, much as he had captured Ed Wood’s intensity in the earlier film. We would see a case made for the legitimacy of Margaret’s work, which is now often seen as campy refuse, maudlin junk you might buy as a joke at a thrift store, at the same level as Love Is… or Rod McKuen poetry books.

But Burton doesn’t make much of an effort to vindicate Margaret on the level of her art. No explanation is suggested for why she paints or why audiences connected to her work. Rather than giving the impression that no explanation is necessary, that its quality speaks for itself, this omission has the effect of emphasizing the film’s suggestion that the significance of her painting rests with the innovative job Walter performed in getting people to pay attention to it, operating outside the parameters of the established art world. Meanwhile, Margaret’s genius remains elusive, as unseeable as it was when Walter effaced it. Margaret is a bit of a nonentity in the film, locked in a studio smoking cigarettes and grinding out paintings at her husband’s command, much as if she were one of Warhol’s Factory minions, while Walter is shown as a dynamic, irresistible figure who comes up with all the ideas for getting her work to make its mark on the world. In fact, in the script, Burton likens Walter to Warhol multiple times, and the movie even opens with a Warhol quote (from this 1965 Life article) in which he praises Walter Keane: “I think what Keane has done is just terrific. It has to be good. If it were bad, so many people wouldn’t like it.”

Since this quote comes before we see anything of the story, I took it as Burton’s attempt to use a name-brand artist’s imprimatur to validate Margaret’s work in advance for movie audiences who possibly wouldn’t read any irony in Warhol’s statement — Burton could laugh at his audiences and show his contempt for their expectations by rotely fulfilling them, as he had with Mars Attacks and the Planet of the Apes remake. But (as usual) I was being too cynical. Afterward, I started to think Burton was in earnest in choosing this quote, and that Big Eyes is instead subverting the expectations liberal audiences might have of it being a stock feminist redemption story. It mocks those audiences, mocks the indulgence involved in using depictions of the past to let ourselves believe we have now somehow transcended the bad old attitudes of sexism. The somewhat smug and self-congratulatory view that “Nowadays we would accept Margaret Keane as a real artist and see through Walter Keane’s tricks” is complicated by the fact that Margaret’s art is kitsch and that Walter’s tricks come not at the expense of art but are instead the sorts of things that nowadays chiefly constitute it.

Margaret is depicted as the victim of Walter’s exploitation, but that view is too simplistic for the film that ostensibly conveys it. It makes Margaret passive, intrinsically helpless, easily manipulated. So simultaneously, Big Eyes gives a convincing portrait not of Margaret’s agency, as you might expect, but of Walter as a passionate, misunderstood genius, a Warhol-level artist working within commercialism as a genre, doing art marketing as art itself with the flimsiest of raw materials and executing a conceptual performance piece about identity, appropriation, cliches, and myths about creativity’s sources that spanned a decade. When the script has Walter claim that he invented Pop Art and out-Warholed Warhol with his aggressive marketing strategies, we can read it “straight” within Margaret’s redemption story as a sign of Walter’s rampant egomania. But the film actually makes a solid case for that being plausible, stressing how Keane was able to bring art into the supermarket before Warhol brought the supermarket into art.

Similarly, when Margaret discovers that the Parisian street scenes Walter claimed were his own while wooing her were actually painted by someone else and shipped to him from France, she is shocked, and we are seemingly supposed to share in this shock and feel appalled. But it makes as much sense to want to applaud his audacity and ingenuity, his apparent ability to assemble and assume the identity of an artist without possessing any traditional craft skills at all. He’s sort of the ur-postinternet artist.

All of Big Eyes is shot as if the material has been viewed naively through the child-like big eyes of one of Margaret’s subjects, a perspective from which Walter’s acts just seem selfish and insane. But Burton is careful to allow viewers to regard the action from a more sophisticated perspective, which reads between the lines of what is shown and looks beyond the emotional valences of the surface redemption story being told. Margaret’s character always acknowledges Walter’s marketing acumen in the midst of detailing his misdeeds, and she never explains why she helped Walter perpetrate his fraud, other than to say, “He dominated me.” From what Burton shows and has Margaret say, this domination is less a matter of intimidation than charm. As awful as his behavior might have been in reality, Walter is little more than a cartoon villain in the film’s melodramatic domestic scenes; the misdeeds Burton depicts are Walter’s getting drunk and belligerently accusing one of Margaret’s friends of snobbery for rejecting representational art, and his flicking matches at Margaret and her daughter when he is disappointed about her work’s reception.

Of course, Walter’s primary crime is making Margaret keep her talent a secret (an open secret, apparently) — “from her own daughter!” even. He capitalizes on a sexist culture to take credit for Margaret’s ability, and then uses the specter of that sexist culture to control her, while more fully enjoying the fruits of what her ability brought them — the fame, the recognition, the celebrity hobnobbing, and so on. But Big Eyes also makes a point of undermining that perspective to a degree, making it clear that Margaret (in part because of that same sexist culture) never would have had the gumption to make a career out of painting without Walter’s support, and certainly she wouldn’t have been able to follow through with all the self-promotion necessary to sustain an art career and allow it to thrive. We are told that she didn’t want the spotlight; at the same time we are supposed to see her being denied the spotlight as part of her victimization. Walter helped create the conditions in which Margaret could paint as much as he exploited the inequities of those conditions. And Margaret triumphed in ways that go far beyond the limited accomplishment of earnest “self-expression.”

During the trial scene, which is supposed to be Margaret’s ultimate vindication, one instead gets a sense through her testimony, and Walter’s outlandish performance as his own lawyer, that he will stop at nothing to put across his vision of the world and himself, despite not having any talent with traditional materials of representation. Doesn’t that make him the greater artist, the film seems to suggest, that he can use other people as his medium? All Margaret can apparently do is the parlor trick of making a big-eye painting in an hour in the courtroom. Whereas Walter could get Life magazine to interview him and tell his story, he could contrive an elaborate backstory for how he suddenly came to paint waifs and kittens, and he could get his wife to willingly make all of his work for him and let him sign it as his own.

It is hard to walk away from Big Eyes without wondering just how much Margaret and Walter collaborated on the character of “Keane,” the artist who made compelling kitsch, and it’s hard not to feel sorry for him when before the ending credits we are shown a picture of the real Walter Keane, with text explaining how he died penniless while he continued to insist on his own artistic genius. I wondered if in working with Burton, Margaret wasn’t still covertly collaborating with Walter, muddying the waters around their life’s work and letting some ambiguity flourish there. This impression, more than anything depicted explicitly in the film, gave me the strongest sense of Margaret’s character, beyond cliches of resiliency and self-actualization.

 

Selfies without the self

Taking selfies is routinely derided as narcissistic, a procedure of solipsistic self-regard in which one obsesses over one’s own image. But selfies are not solipsistic; they are only selfies if they circulate. The term selfie not only labels an image’s content (though this usage is slipping, as when TD Bank invites me to “take a check selfie” to deposit it), but it also describes a distribution process. Selfie is shorthand not just for pictures you take of yourself but for one’s “self in social media” – one’s self commoditized to suit the logistics of networks.

As art critic Brian Droitcour writes:

Producing a reflection of your image in Instagram always involves an awareness of the presence of others, the knowledge that your selfie is flaking and refracting in their phones. Labeling this reflection #selfie tacitly recognizes the horizontal proliferation of reflections, the dissolution of personhood in the network. The real narcissists are the ones who never take selfies. They imagine their self as autonomous, hermetic—too precious to be shared.

If selfies are not narcissistic, they are sometimes perceived to be the opposite: too performatively strategic. Posting selfies is often seen as part of an effort to build social capital, an effort to deploy the self in a social network to gain attention, reputation, influence, and so on. It instrumentalizes self-representation; selfies are a way to explicitly conflate ourselves with objects to be manipulated. If Droitcour is right, selfies, when they enter circulation, aren’t a matter of self-expression (as their defenders sometimes claim) but self-surrender. This could be a precursor to moving past the political limits of individualism, yet selfies nonetheless exemplify an instrumental attitude toward the self that may block intersubjectivity. You can “flake or refract” me, selfies seem to say, but only at the level of these images. This locks the terms of interpersonal engagement at the level of image exchange.

Selfies may be mistaken for autonomous self-expression: an assertive, short-circuiting gesture that recuperates the communication/surveillance platforms that otherwise contain the self. But selfies don’t tap a suppressed inner essence; they develop the “self” as an artisanal product line. What they express depends less on what they depict than on how well they circulate, what uses they are put to within networks.

The selfie commemorates the moment when external social control — the neoliberal command to develop a self as a kind of capital stock and serially reproduce oneself in self-advertisements — is internalized as crypto-defiance. I’m not going to consume their images, I’m going to make one of my own, take control of how I’m seen!

With selfies we can think we are asserting an agency that escapes control, though this is control’s exact contemporary mechanism: producing ourselves as an object for the network, performing the obligatory work of identity construction in a captured, preformatted space. Selfies, then, primarily signal the availability of the self to the network.

The practice of selfie-making doesn’t eradicate the infrastructure of commercially exploitable identity that is embedded in the media tools for “expressing” it. The selfie doesn’t invent a language of identity; it marks a voluntary entry into established codes, reinforcing their validity even if a particular selfie tries to subvert them.

Alexander Galloway claims that the economic mobilization of self-production that selfies epitomize has prompted a new “politics of disappearance”:

The operative political question today, thus, in the shadow of digital markets is … the exodus question: first posed as “what are we going to do without them?” and later posed in a more sophisticated sense as “what are we going to do without ourselves?”

Maybe selfies are a step in the direction of answering that. The selfie is sometimes condemned for its inauthenticity, but in its explicit constructedness, the selfie may herald the emergence of a postauthentic self: an overtly manufactured self that is confirmed and rendered coherent in an audience’s reactions and always changing with each image, as opposed to a static “real self.”

In other words, selfies assault the notion of autonomous, persistent, transcendent identity. Intentionally or not, the willingness to take them and share them demonstrates you don’t believe in the “authentic” self inside but instead in the desire to be remade anew in any given moment. Selfie taking recognizes that the notion of the “self” always implies another’s point of view on it, a perspective that generates it. The act of taking a selfie simulates and evokes that outside point of view. It makes our self real to us, something we can experience and consume, at the expense of pretending to be someone else as we look.

The selfie breaks us out of the cage of static identity, but the platforms selfies are posted to shove us back in, associating and attempting to integrate all the data they generate. The platforms affirm that I’m a discrete self, one baseball card in their pack, with my statistics always printed on the back.

Social Media Is Not Self-Expression


1. Subjectivation is not a flowering of autonomy and freedom; it’s the end product of procedures that train an individual in compliance and docility. One accepts structuring codes in exchange for an internal psychic coherence. Becoming yourself is not a growth process but a surrender of possibilities that we learn to regard as egregious, unbecoming. “Being yourself” is inherently limiting. It is liberatory only in the sense of freeing one temporarily from existential doubts. (Not a small thing!) So the social order is protected not by preventing “self-expression” and identity formation but by encouraging it as a way of forcing people to limit and discipline themselves — to take responsibility for building and cleaning their own cage. Thus, the dissemination of social-media platforms becomes a flexible tool for social control. The more that individuals express themselves through these codified, networked, formatted means to construct a “personal brand” identity, the more they self-assimilate, adopting the incentive structures of capitalist social order as their own. (The machinations of Big Data make this more obvious. The more data you supply, the more the algorithms can determine your reality.) Expunge the seriality built into these platforms, embrace a more radical form of difference.

2. In an essay about PJ Harvey’s 4-Track Demos, Michael Barthel writes:

while she was able to hole up in a seaside restaurant and produce a masterpiece, I need constant feedback and encouragement in order not to end up curled in some dark corner of my house, eating potato chips and refreshing my Tumblr feed in the hope that someone will have “liked” my Photoshopped picture of Kanye West in a balloon chair.

He’s being a bit facetious, but this is basically what I’m trying to get at above: the difference between an inner-directed process of discovery and a kind of outer-directed pseudo-creativity that in its pursuit of attention gets overwhelmed by desperation. I’m trading in a very dubious kind of dichotomizing here, I know — artists make a lot of great work for no greater purpose than attention-seeking, and the idea that anything is truly “inner-directed” may be an ideological illusion, given how we all develop interiority in relation to a social world that precedes us and enables us to survive. But what I am trying to emphasize here is how production in social media is often sold to users of these platforms as self-expressive creativity, as self-discovery, as an elaboration of the self even, but it is really a narrowing of the self to the reductive, defensive aim of getting recognition, reassurance of one’s own existence, that one belongs. That kind of “creativity” may crowd out the more antisocial kind that may entail reclusion, social disappearance, indifference to reputation and social capital, to being someone in particular in a network. Self-invention in social media that is perpetually in search of “feedback” is really just the production of communication, which gives value not to the self but to the network that gets to carry more data (and store it, and sell it).

Actual “self-invention” — if we are measuring it in range of expressivity — appears more like self-dissolution. We’re born into social life and shaped by it; self-discovery may thus entail a destruction of social bonds, not a sounding of them.

Barthel lauds the “demos, experiments, collaborative public works, jokes, notes, reading lists, sketches, appreciations, outbursts of pique” that are “absolutely vital to continuing the business of creation.” But the degree to which these are all affixed to a personal brand when serially broadcast on social media depletes their vitality. If PJ Harvey had released the demos as she made them to a Myspace page, would there ever have been a finished Rid of Me? Would the end product merely have been “PJ Harvey,” the fecund musician?

Social media structure creative effort (e.g., Barthel’s list above) ideologically as “self-creating,” but these efforts often end up anxiety-inducing, exposing the self’s ad hoc incompleteness while structuring the demand for a fawning audience to complete us, validate every effort, as a natural expectation. Validation is nice, but as a goal for creative effort, it is somewhat limited. The quest for validation must inevitably restrict itself to the tools of attracting attention: the blunt instruments of novelty and prurience (“Kanye West in a balloon chair”). The self one tries to express tends to be new, exciting, confessional, sexy, etc., because it plays as an advertisement. Identity is a series of ads for a product that doesn’t exist.

The process can’t quell anxiety; this kind of self-expression can only intensify it, focus it onto a few social-media posts that await judgment, narrow it to the latest instances of sharing. Social media’s quantifying metrics aggravate the problem, making expression into a series of discrete items to be counted, ranked. They serve as the infrastructure for a feedback loop that orients expression toward the anxiety of what the numbers will be and accelerates it, as we try to better those numbers, and thereby demonstrate that the self-monitoring is teaching us something about how to become more “relevant.”

The alternative would seem to be a sort of deep focus in isolation, in which one accepts the incompleteness that comes from being apart from an audience, that comes from not seeking final judgment on what one is doing and letting it remain ambiguous, open-ended, of the present moment and not assimilated to an archive of identity. To put that tritely: The best way to be yourself is to not be anybody in particular but to just be.

3. So is the solution to get off the Internet? If social media structure social behavior this way, just don’t use them, right? Problem solved. Paul Miller’s 2013 account at The Verge of his year without Internet use suggests it’s not so simple. Miller went searching for “meaning” offline, fearing that Internet use was reducing his attention span and preoccupying him with trivia. It turns out that, after a momentary shock of having his habits disrupted, Miller fell back into the same feelings of ambient discontent, only spiked with a more intense feeling of loneliness. It’s hard to escape the idea of a “connected world” all around you, and there is no denying that being online metes out “connectedness” in measured, addictive doses. But those doses contain real sociality, and they are reshaping society collectively. Whether or not you use social media personally, your social being is affected by that reshaping. You don’t get to leave all of society’s preoccupations behind.

Facebook is possibly more in the foreground for those who don’t use it than for those who have accepted it as social infrastructure. You have to expend more effort not knowing a meme than letting it pass through you. Social relations are not one-way; you can’t dictate how they are on the basis of personal preference. As Miller puts it, describing his too-broad, too-pointed defiance of the social norms around him, “I fell out of sync with the flow of life.” To pretend you can avoid these social aspects of life because they are supposedly external, artificial, inauthentic, and unreal is to have a very impoverished idea of reality, of authenticity, of unique selfhood.

The inescapable reciprocity of social relations comes into much sharper relief when you stop using social media, which thrive on the basis of the control over reciprocity they try to provide. They give a crypto-dashboard to social life, making it seem like a personal consumption experience, but that is always an illusion, always scattered by the anxiety of waiting, watching for responses, and by the whiplash alternation between omnipotence and vulnerability.

Miller’s fable ends up offering the lesson that the digital and the physical are actually interpenetrated, and all the personal problems he recognizes in himself aren’t a matter of technologically mediated social reality but are basically his fault. This seems too neat a moral for this story. Nothing is better for protecting the status quo than convincing people that their problems are their own and are entirely their personal responsibility. This is basically how neoliberalism works: “personal responsibility” is elevated over the possibility of collective action, a reiteration of the requirement to “express oneself” as an isolated self, free of social determination, free for “whatever.”

What is odd is that the connectivity of the internet exacerbates that sort of neoliberal ideology rather than mitigating it. Connectivity atomizes rather than collectivizes. But that is because most people’s experience of the internet is mediated by capitalist entities, or rather, for the sake of simplicity, by capitalism itself. You can go offline, but that doesn’t remove you from the alienating properties of life in capitalist society. So the same “personal problems” the Internet supposedly made you experience still exist for you if you go offline, because you are still in a capitalist society. Capitalist imperatives are still shaping your subjectivity, structuring your time and your experience of curiosity, leisure, work, life. The internet is not the problem; capitalism is the problem.

Social media offer a single profile for our singular identity, but our consciousness comprises multiple forms of identity simultaneously: We are at once a unique bundle of sense impressions and memories, and a social individual imbued with a collectively constructed sense of value and possibility. Things like Facebook give the impression that these different, contestable and often contradictory identities (and their different contexts) can be conveniently flattened out, with users suddenly having more control and autonomy in their piloting through everyday life. That is not only what for-profit companies like Facebook want, but it is also what will feel natural to subjects already accustomed to capitalist values of convenience, capitalist imperatives for efficiency, and so on.

So Miller is right to note that “the internet isn’t an individual pursuit, it’s something we do with each other. The internet is where people are.” That’s part of why simply abandoning it won’t enhance our sense of freedom or selfhood. But because we “do” the internet with each other as capitalist subjects, we use it to intensify the social relations familiar from capitalism, with all the asymmetries and exploitation that come with them. We “do” it as isolated nodes, letting social-media services further suppress our sense of collectivity and possibility. The work of being online doesn’t simply fatten profits for Facebook; it also reproduces the conditions that make Facebook necessary. As Lazzarato puts it, immaterial “labour produces not only commodities, but first and foremost the capital relationship.”

4. Exodus won’t yield freedom. The problem is not that the online self is “inauthentic” and the offline self is real; it’s that the self derived from the data processing of our digital traces doesn’t correspond with our active efforts to shape an offline/online hybrid identity for our genuine social ties. What seems necessary instead is a way to augment our sense of “transindividuality,” in which social being doesn’t come at the expense of individuality. This might be a way out of the trap of capitalist subjectivity and its compulsive need to keep serially producing, in a condition of anxiety, so as to seem to manifest and discover the self as some transcendent thing at once unfettered by and validated through social mediation. Instead of using social media to master the social component of our own identity, we must use them to better balance the multitudes within.

Preemptive personalization


Nicholas Carr’s forthcoming The Glass Cage, about the ethical dangers of automation, inspired me to read George Orwell’s The Road to Wigan Pier (1937), which contains a lengthy tirade against the notion of progress as efficiency and convenience. Orwell declares that “the tendency of mechanical progress is to make life safe and soft.” It assumes that a human being is “a kind of walking stomach” that is interested only in passive pleasure rather than work: “whichever way you turn there will be some machine cutting you off from the chance of working — that is, of living.” Convenience is social control, and work, for Orwell at least, is the struggle to experience a singular life. But the human addiction to machine-driven innovation and automation, he predicts, fueled apparently by a fiendish inertia that demands progress for progress’s sake, will inevitably lead to total disempowerment and dematerialization:

There is really no reason why a human being should do more than eat, drink, sleep, breathe, and procreate; everything else could be done for him by machinery. Therefore the logical end of mechanical progress is to reduce the human being to something resembling a brain in a bottle.

Basically, he sees the Singularity coming and he despises it as a “frightful subhuman depth of softness and helplessness.” And there is no opting-out:

In a healthy world there would be no demand for tinned foods, aspirins, gramophones, gaspipe chairs, machine guns, daily newspapers, telephones, motor-cars, etc., etc.; and on the other hand there would be a constant demand for the things the machine cannot produce. But meanwhile the machine is here, and its corrupting effects are almost irresistible. One inveighs against it, but one goes on using it.

This “brain in the bottle” vision of our automated future, Orwell surmises, is why people of the 1930s were wary of socialism, which he regards as being intimately connected ideologically with the theme of inevitable progress. That connection has of course been severed; socialism tends to be linked with nostalgia and tech’s “thought leaders” tend to champion libertarianism and cut-throat competitive practices abetted by technologically induced asymmetries, all in the name of “innovation” and “disruption.”

Oddly, Orwell argues that the profit motive is an impediment to technological development:

Given a mechanical civilization the process of invention and improvement will always continue, but the tendency of capitalism is to slow it down, because under capitalism any invention which does not promise fairly immediate profits is neglected; some, indeed, which threaten to reduce profits are suppressed almost as ruthlessly as the flexible glass mentioned by Petronius … Establish Socialism—remove the profit principle—and the inventor will have a free hand. The mechanization of the world, already rapid enough, would be or at any rate could be enormously accelerated.

Orwell seems to imagine a world with a fixed amount of needs, which technology will allow to be fulfilled with less labor; he imagines technology will make useful things more durable rather than making the utility we seek more ephemeral. But technology, as directed by the profit motive, makes obsolescence into a form of innovation; it generates new wants and structures disposability as convenience rather than waste. Why maintain and repair something when you can throw it away and shop for a replacement — especially when shopping is accepted to be a fun leisure activity?

While Orwell is somewhat extreme in his romanticizing of hard work — he sounds downright reactionary in his contempt for “laziness,” and can’t conceive of something as banal as shopping as a rewarding, self-defining effort for anyone — people today seem anything but wary about technological convenience, even though it is always paired with intensified surveillance. (The bathetic coverage of Apple’s marketing events seems to reflect an almost desperate enthusiasm for whatever “magical” new efficiencies the company will offer.) Socialism would be far more popular if people really thought it was about making life easier.

Orwell associated automation with socialism’s utopian dreams, and thought the flabbiness of those dreams would drive people to fascism. Looking back, it seems more plausible to argue that automation has become a kind of gilded fascism that justifies itself and its barbarities with the efficiencies machines enable. Though we sometimes still complain about machines deskilling us, we have nonetheless embraced once unimaginable forms of automation, permitting it to be extended into how we form a conception of ourselves, how we come to want anything at all.

One might make this case for automation’s insidious infiltration into our lives: First, technology deskilled work, making us machine monitors rather than craft workers; then it deskilled consumption, prompting us to prefer “tinned food” to some presumably more organic alternative. Now, with the tools of data collection and algorithmic processing, it deskills self-reflection and the formation of desire. We get preemptive personalization, as when sites like Facebook and Google customize your results without your input. “Personalization” gets stretched to the point where it leaves out the will of the actual person involved. How convenient! So glad that designers and engineers are making it easier for me to want things without having to make the effort of actually thinking to want them. Desire is hard.

Preemptive personalization is seductive only because of the pressure we experience to make our identities unique — to win the game of having a self by being “more original” than other people. That goal stems in part from the social media battlefield, which itself reflects a neoliberal emphasis on entrepreneurializing the self, regarding oneself as leading an enterprise, not living a life. If “becoming yourself” was ever a countercultural goal, it isn’t anymore. (That’s why Gap can build an ad campaign around the proposition “Dress Normal.” Trying to be distinctive has lost its distinction.) It’s mandatory that we have a robust self to express, that we create value by innovating on that front. Otherwise we run the risk of becoming economic leftovers.

Yet becoming “more unique” is an impossible, nonsensical goal for self-actualization: self-knowledge probably involves coming to terms with how generic our wants and needs and thoughts are, and how dependent they are on the social groups within which we come to know ourselves, as opposed to some procedure of uncovering their pure idiosyncrasy. The idea that self-becoming or self-knowledge is something we’d want to make more “convenient” seems counterproductive. The effort to be a self is its own end. That is what Orwell seemed to think: “The tendency of mechanical progress, then, is to frustrate the human need for effort and creation.”

But since Orwell’s time, the mechanization process has increasingly become a mediatization/digitization process that can be rationalized as an expansion of humans’ ability to create and express themselves. Technological development has emphasized customization and personalization, allowing us to use consumer goods as language above and beyond their mere functionality. (I’ll take my iWatch in matte gray please!) Social media are the farthest iteration of this, a personalized infosphere in which our interaction shapes the reality we see and our voice can directly reach potentially vast audiences.

But this seeming expansion of our capacity to express ourselves is in the service of data-capture and surveillance; we embed ourselves in communication platforms that allow our expression to be used to curtail our horizons. Preemptive personalization operates under the presumption that we are eager to express ourselves only so that we may be done with the trouble of it once and for all, once what we would or should say can be automated and we can simply reap the social benefits of our automatic speech.

Social media trap us in a tautological loop, in which we express ourselves to be ourselves to express ourselves, trying to claim better attention shares from the people we are ostensibly “connecting” with. Once we are trying to “win” the game of selfhood on the scoreboard of attention, any pretense of expressing an “inner truth” (which probably doesn’t exist anyway) about ourselves becomes lost in the rush to churn out and absorb content. It doesn’t matter what we say, or if we came up with it, when all that matters is the level of response. In this system, we don’t express our true self in search of attention and confirmation; instead attention posits the true self as a node in a dynamic network, and the more connections that run through it, the more complete and “expressed” that self is.

When we start to measure the self, concretely, in quantified attention and the density of network connectivity rather than in terms of the nebulous concept of “effort,” it begins to make sense to accept algorithmic personalization, which reports the self to us as something we can consume. The algorithm takes the data and spits out a statistically unique self for us that lets us consume our uniqueness as a kind of one-of-a-kind delicacy. It masks from us the way our direct relations with other people shape who we are, preserving the fantasy that we are sui generis. It protects us not only from the work of being somebody — all that tiring self-generated desire — but more insidiously from the emotion work of acknowledging and respecting the ways our actions have consequences for other people at very fundamental levels of their being. Automated selfhood frees us from recognizing and coping with our interdependency, outsourcing it to an algorithm.

The point of “being unique” has broadened; it is a consumer pleasure as well as a pseudo-accomplishment of self-actualization. So all at once, “uniqueness” (1) motivates content production for social-media platforms, (2) excuses intensified surveillance, and (3) allows filter bubbles to be imposed as a kind of flattery (which ultimately isolates us and prevents self-knowledge, or knowledge of our social relations). Uniqueness is as much a mechanism of control as an apparent expression of our distinctiveness. No wonder it’s been automated.