Marginal Utility
By Rob Horning
A blog about consumerism, capitalism and ideology.

Permanent Recorder

It used to be easy to mock reality TV for having nothing to do with actual reality — the scenarios were contrived and pre-mediated, the performances were semi-scripted, the performers were hyper-self-conscious. These shows were more a negation of reality than a representation of it; part of their appeal seemed to be in how they helped clarify for viewers the genuine “reality” of their own behavior, in contrast with the freak shows they were seeing on the screen. To be real with people, these shows seemed to suggest, just don’t act like you are on television.

But now we are all on television all the time. The once inverted anti-reality of reality TV has turned out to be prefigurative. In a recent essay for the New York Times, Colson Whitehead seizes on the reality TV conceit of a “loser edit” — how a show’s editors pare down and frame the footage of certain participants to make their incipient failure seem deserved — and expands it into a metaphor for our lives under ubiquitous surveillance.

The footage of your loser edit is out there as well, waiting … From all the cameras on all the street corners, entryways and strangers’ cellphones, building the digital dossier of your days. Maybe we can’t clearly make out your face in every shot, but everyone knows it’s you. We know you like to slump. Our entire lives as B-roll, shot and stored away to be recut and reviewed at a moment’s notice when the plot changes: the divorce, the layoff, the lawsuit. Any time the producers decide to raise the stakes.

Whitehead concludes that the important thing is that everyone gets an edit inside their own head, which suggests that the imposition of a reality TV frame on our lives has been clarifying. “If we’re going down, let us at least be a protagonist, have a story line, not be just one of those miserable players in the background. A cameo’s stand-in. The loser edit, with all its savage cuts, is confirmation that you exist.” Reality TV models for us what it is like to be a character in our own life story, and it gives us a new metaphor for how to accomplish this — we don’t need to be a bildungsroman author but instead a savvy cutting-room editor. Accept that your life is footage, and you might even get good at making a winner’s edit for yourself.

You could draw a similar conclusion from Facebook’s Timeline, and the year-in-review videos the company has taken to making of one’s raw profile data. These aren’t intrusive re-scriptings of our experience but instructional videos on how to be a coherent person for algorithms — which, since these algorithms increasingly dictate what others see of you, is more or less how you “really” are in your social networks. Facebook makes the winner’s edit of everybody, because everyone supposedly wins by being on Facebook. Everyone gets to be connected and the center of the universe simultaneously. So why not bequeath to it final-cut rights for your life’s edit?

Tech consultant Alistair Croll, in a post at O’Reilly Radar, is somewhat less complacent about our surrendering our editing rights. He makes the case that since everyone henceforth will be born into consolidated blanket surveillance, they will be nurtured by a symbiotic relationship with their own data timeline. “An agent with true AI will become a sort of alter ego; something that grows and evolves with you … When the machines get intelligent, some of us may not even notice, because they’ll be us and we’ll be them.”

In other words, our cyborg existence will entail our fusion not with some Borg-like hive mind that submerges us into a collective, but with a machine powered by our own personal data that represents itself as already part of ourselves. The algorithms will be learning how to edit our lives for us from the very start, and we may not recognize this editing as stemming from an outside entity. The alien algorithms ease themselves into control over us by working with our uniquely personal data, which will feel inalienable because it is so specifically about us, though the very fact of its collection indicates that it belongs to someone else. Our memories will be recorded by outside entities so thoroughly that we will intuitively accept those entities as a part of us, as an extension of the inside of our heads. Believing that something that is not us could have such a complete archive of our experiences may prove to be too unacceptable, too dissonant, too terrifying.

Croll argues that this kind of data-driven social control, with algorithms dictating the shape and scope of our lives for us, will be “the moral issue of the next decade: nobody should know more about you than you do.” That sounds plausible enough, if you take it to mean (as Croll clearly does) that no one should use against you data that you don’t know has been collected about you. (Molly Knefel discusses a similar concern here, in an essay about how kids will be confronted by their permanent records, which reminds me of the “right to be forgotten” campaign.) But it runs counter to the cyborg idea — it assumes we will be able to draw a clear line between ourselves and the algorithms. If we can’t distinguish between these, it will be nonsensical to worry about which has access to more data about ourselves. It will be impossible to say whether you or the algorithms “knew” some piece of information about you first, particularly when the algorithms will be synthesizing data about us and then teaching it to us.

In that light, the standard that “no one should know more about you than you do” starts to seem clearly absurd. Outside entities are producing knowledge about us all the time in ways we can’t control. Other people are always producing knowledge about me, from their perspective and for their own purposes, that I can never access. They will always know “more about me” than I do by virtue of their having a point of view on the world that I can’t calculate and replicate.

Because we find it hard to assign a point of view to a machine, we perhaps think they can’t know more about us or have a perspective that isn’t fully controllable by someone, if not us. Croll is essentially arguing that we should have control over what knowledge a company’s machines produce about us. That assumes that their programmers can fully control their algorithms, which seems to be less the case the more sophisticated they become — the fact that the algorithms turn out results that no one can explain may be the defining point at which data becomes Big Data, as Mike Pepi explains here. And if the machines are just proxies for the people who program them, Croll’s “moral issue” still boils down to a fantasy of extreme atomization — the demand that my identity be entirely independent of other people, with no contingencies whatsoever.

The ability to impose your own self-concept on others is a matter of power; you can demand it, say, as a matter of customer service. This doesn’t change what those serving you know and think about you, but it allows you to suspend disbelief about it. Algorithms that serve us don’t allow for such suspension of disbelief, because they anticipate what service we might expect and put what they know about us into direct action. Algorithms can’t have opinions about us that they keep to themselves. They can’t help but reveal at all times that they “know more about us” — that is, they know us differently from how we know ourselves.

Rather than worry about controlling who can produce information about us, it may be more important to worry about the conflation of data with self-knowledge. The radical empiricism epitomized by the Quantified Self movement is becoming more and more mainstream as tracking devices that attempt to codify us as data become more prevalent — and threaten to become mandatory for various social benefits like health insurance. Self-tracking suggests that consciousness is a useless guide to knowing the self, generating meaningless opinions about what is happening to the self while interfering with the body’s proper responses to its biofeedback. It’s only so much subjectivity. Consciousness should subordinate itself to the data, be guided more automatically by it.  And you need control of this data to control what you will think of yourself in response to it, and to control the “truth” about yourself.

Reducing self-knowledge to matters of data possession and retention like that seems to be the natural bias of a property-oriented society; since consciousness can’t be represented as a substance that someone can have more or less of, it doesn’t count. But self-knowledge may not be a matter of having the most thorough archive of your deeds and the intentions behind them. It is not a quantity of memories, an amount of data. The self is not a terrain for which you are entitled to own the most detailed map. Self-knowledge is not a matter of reading your own permanent record. It is not an edit of our life’s footage.

A quantified basis for “self-knowledge” is bound up with the incentives for using social media and submitting to increased surveillance of various forms. If we accept that self-knowledge is akin to a permanent record, we will tolerate or even embrace Facebook’s keeping that record for us. Maybe we won’t even mind that we can’t actually delete anything from their servers.

As our would-be permanent recorders, social media sites are central to both data collection (they incite us to supply data as well as help organize what is collected across platforms into a single profile) and the use of data to implement social control (they serve algorithmically derived content and marketing while slotting us into ad hoc niches, and they encircle us in a panoptic space that conditions our behavior with the threat of observation). But for them to maintain their central place, we may have to be convinced to accept the algorithmic control they implement as a deeper form of self-knowledge.

But what if we use social media not for self-knowledge but for self-destruction? What if we use social media to complicate the idea that we could ever “know ourselves”? What if we use social media to make ourselves into something unknowable? Maybe we record the footage of our lives to define therein what the essence of our self isn’t. To the degree that identity is a prison, self-knowledge makes the cell’s walls. But self-knowledge could instead be an awareness of how to move beyond those walls.

Not everyone has the opportunity to cast identity aside any more than they have the ability to unilaterally assert self-knowledge as a form of control. We fall into the trap of trying to assert some sort of objectively “better” or more “accurate” identity that reflects our “true self,” which is only so much more data that can be used to control us and remold the identity that is assigned to us socially. The most luxurious and privileged condition may be one in which you get to experience yourself as endlessly surprising — a condition in which you hardly know yourself at all but have complete confidence that others know and respect you as they should.

Authentic sharing


“Sharing economy,” of course, is a gratingly inappropriate term to describe a business approach that entails precisely the opposite, that renders the social field an arena for microentrepreneurship and nothing else. Yet the vestiges of “sharing” rhetoric cling to such companies as Airbnb and a host of smaller startups that purport to build “trust” and “community” among strangers by getting them to be more efficient and render effective customer service to one another. What more could you ask of a friend?

By bringing a commercial ethos to bear on exchanges that were once outside the market, these companies allow the civilizing process often attributed to the “bourgeois virtues” of capitalism — with successful economic exchange building the only form of social trust necessary — to spread itself over all possible human relationships. The only real community is a marketplace in which everyone has a fair shot to compete.

The freedom of anonymous commercial exchange amid a “community” of well-connected but essentially atomized strangers well-disciplined by the market to behave conventionally and sycophantically is not the sort of community the sharing companies tend to crow about in their advertising. The rhetoric of the sharing economy’s trade group, Peers, is instead saturated with testimonials of communal uplift and ethical invigoration. In an essay about the cult-like methods of sharing-economy indoctrination, Mike Bulajewski cites many, many examples of the companies’ blather about community and the ornamental techniques they encourage among users to sustain the illusion. (Fist-bump your driver! Neato!) He notes that “What’s crucial to realize is that proponents of ‘sharing’ are reinventing our understanding of economic relations between individuals so that they no longer imply individualism, greed or self-interest” — i.e., the bourgeois virtues, which make for atomized “metropolitan” people whose freedom (such as it is) is protected in the form of anonymity and equal treatment in the marketplace. “Instead,” Bulajewski writes, “we’re led to believe that commerce conducted on their platforms is ultimately about generosity, helpfulness, community-building, and love.”

Is this rhetoric fooling anyone? Marketing professors Giana M. Eckhardt and Fleura Bardhi suggest that it is bad for their business. In an article for the Harvard Business Review they recount their research that found that consumers don’t care about “building community” through using services like Airbnb and Lyft; they actually just want cheaper services and less hassle. They want consumerist “freedom,” not ethical entanglements. The platforms are popular because they actually diminish social interaction while letting users take advantage of small-time service providers who are often in precarious conditions and have little bargaining leverage. You “trust” the sharing-platform brand while you exploit the random person offering a ride or an apartment (or whatever) without having to negotiate with them face to face.

When “sharing” is market-mediated — when a company is an intermediary between consumers who don’t know each other — it is no longer sharing at all. Rather, consumers are paying to access someone else’s goods or services for a particular period of time. It is an economic exchange, and consumers are after utilitarian, rather than social, value.

That seems almost self-evident. The sharing-economy companies are not a way to temper capitalism (and its tendency to generate selfish individualists); they just allow it to function more expediently. The sharing economy degrades “social value,” defined here as the interpersonal interactions that aren’t governed by market incentives and economistic rationality, in favor of expanding the “utilitarian value” of consumption efficiency, more stuff consumed by more individuals (generating more profit). Utilitarian value is impeded by the need to deal with other humans, who can be unpredictable or have irrational demands.

Eckhardt and Bardhi propose “access economy” as an alternative term to sharing economy. One might presume “access” refers to the way consumers can pay brokering companies for access to new pools of labor and rental opportunities. Think “shakedown economy” or “bribe economy.” Middlemen like Uber who (like an organized-crime racket) achieve scale and can aggressively bypass the law can put themselves in a prime position to collect tolls from people seeking necessary services and the workers who hope to provide them.

But Eckhardt and Bardhi want to use the term to differentiate renting from owning. People are content to buy access to goods rather than to acquire them as property. Viewing the sharing economy from that angle, though, you can almost see why some are beguiled by its communitarian rhetoric. The sharing economy’s labor practices are abhorrent, but we might overlook all that if we think instead of how it liberates us from being overinvested in the meaning of our stuff. Leaving behind consumerist identity presumably could open the space for identity based in “community” (though it would be more accurate to say an identity based on caste, and what services you render).

Renting is very bad for marketers (it’s not “best practices,” the marketing professors note), because people don’t invest any of their identity into brands they merely rent. They don’t commit to them, don’t risk their self-concept on them. “When consumers are able to access a wide variety of brands at any given moment, like driving a BMW one day and a Toyota Prius the next day, they don’t necessarily feel that one brand is more ‘them’ than another, and they do not connect to the brands in the same closely-binding, identity building fashion.” So what marketers want consumers to want is ownership, which puts their identity in play in a more high-stakes way and gives advertisers something to sink their teeth into. Whether or not consumers actually want to own so many things is a different question. Marketers must insist that they know what consumers want (that’s their rationale for their job); the benefits consumers supposedly reap according to marketers are actually just the ideological tenets of marketing.

This helps bring into focus what a true sharing economy — one that discouraged ownership while imposing reciprocal human interaction — might accomplish. Marketers approve of “brand communities” that let isolated people “share identity building practices with like-minded others,” but little else. That is, in such communities they can “share” without sharing. They can “share” by buying products for themselves.

But with more widely distributed rental opportunities, identity anchored in what one owns can potentially be disrupted. As Eckhardt and Bardhi  write:

When consumers are able to access a wide variety of brands at any given moment, like driving a BMW one day and a Toyota Prius the next day, they don’t necessarily feel that one brand is more “them” than another, and they do not connect to the brands in the same closely-binding, identity building fashion. They would rather sample a variety of identities which they can discard when they want.

If not for the burden of ownership, then, consumers would conceivably try on and discard the identities implied by products without much thought or sense of risk. They would forgo the “brand community” for a more fluid sense of identity. Perhaps they would anchor their identity in something other than products while enjoying the chance to play around with personae, by borrowing and not owning the signifying resonances of products.

Perhaps that alternate anchor for the self could be precisely the sort of “social value” human interaction that exceeds the predictable, programmable exchanges dictated by the market, and its rational and predictable incentives. This is the sort of interaction that people call “authentic.” (Or we could do away with anchors for the self altogether and go postauthentic — have identity only in the process of “discarding” it.)

Companies like Lyft and Airbnb do nothing to facilitate that sort of interaction; indeed they thrive by doing the opposite. (Authenticity marketing, incidentally, does the same thing; it precludes the possibility of authenticity by co-opting it.) They subsume more types of interaction and exchange into market structures, which they then mask by handling all the money for the parties involved. This affords users the chance to pretend to themselves that the exchange has stemmed from some “meaningful” rather than debased and inauthentic commercial connection, all while keeping a safe distance from the other party.

Sharing companies use their advertising to build a sort of anti-brand-community brand community. Both sharing companies and brand communities mediate social relations and make them seem less risky. Actual community is full of friction and unresolvable competing agendas; sharing apps’ main function is to eradicate friction and render all parties’ agendas uniform: let’s make a deal. They are popular because they do what brand communities do: They allow people to extract value from strangers without the hassle of having to deal with them as more than amiable robots.

When sharing companies celebrate the idea of community, they mean brand community. And if they appropriate rhetoric about breaking down the attachment to owning goods as a means of signifying identity and inclusion, it’s certainly not because they care about abolishing personal property, or pride in it. It’s because they are trying to sell their brand as an alternative to the bother of actually having to come up with a real alternative to product-based personal identity. They just let us substitute apps and platforms in for the role material goods played. They cater to the same customer desire of being able to access “community” as a consumer good.

The perhaps ineluctable problem is that belonging to communities is hard. It is inefficient. It does not scale. It doesn’t respond predictably to incentives. It takes more work the more you feel you belong. It requires material sacrifice and compromise. It requires a faith in other people that exceeds their commercial reliability. It entails caring about people for no reason, with no promise of gain. In short, being a part of community is a total hassle but totally mandatory (like aging and dying), which makes us susceptible to deceptive promises that claim to make it easy or avoidable, that claim to uniquely exempt us. That is the ruse of the “sharing economy”—the illusion it creates that everyone is willing to share with you, and all you have to do is download an app.

Meanwhile, the sharing economy’s vision of everyone entrepreneurializing every aspect of their lives promotes an identity grounded in the work one can manage to win for oneself, in the scheming and self-promoting posture of someone always begging for a job. If its vision of the economy came true, no one would have the luxury to do little sharing-economy tasks on the side but would instead have to do them to survive. And there would be no safety net, because there would be no political solidarity to generate it, and many of its functions would have been offloaded to sharing-economy platforms. The result would be less a community of equals exchanging favors than a Hobbesian war of all against all, with the sharing-company Leviathans furnishing the battlefield and washing their hands of the casualties.

A Man Alone


Rod McKuen died a few days ago. Because I have spent a lot of time in thrift stores, I feel like I know him well, since that’s where lots of his poetry books (Listen to the Warm, Lonesome Cities, etc.) have ended up, alongside the works of kindred spirits Walter and Margaret Keane. His albums, which sometimes feature his singing but generally consist of him reciting his poetry over light-orchestral music, can be found there too. I like “The Flower People”: “I like people with flowers. Because they are trying.”

Artists like McKuen and the Keanes, who achieved unprecedented levels of success with the mass-market audience in the 1960s while being derided by critics for peddling “sentimental” maudlin kitsch, fascinate me — probably a hangover from graduate school, when I spent a lot of time studying the 18th-century vogue for “sensibility” novels, which were similarly saturated with ostentatious tears. McKuen has a lot in common with the 18th-century “man of feeling” epitomized by the narrator of Sterne’s A Sentimental Journey, who travels around seeing suffering and “having feelings,” which prove his humanity and allow readers to experience their own humanity vicariously. McKuen let his audience accomplish something similar with his tales of urban love and loneliness and his wistful recollections of weather and whatnot.

Still, I wonder why the market for McKuen and the Keanes re-emerged just then, in the 1960s. What made reified desolation a sudden hot commodity? Did it have to do with changes in available media, or the general air of postwar prosperity? And what’s the relation between their success and their reputation? Why is the kind of critical contempt they received reserved for artists who commercialize sadness and feelings of loneliness and vulnerability? What did audiences want from their work, such that critics could seize upon it to mock it and make themselves and their readers feel superior to it all?

The New York Times obit of McKuen concludes with a quote from him in which he claims that success turned critical opinion against him:

“I only know this,” Mr. McKuen told The Chronicle in 2002. “Before the books were successful, whether it was Newsweek or Time or The Saturday Evening Post, the reviews were always raves.”

I wonder if McKuen thought that only his popularity kept him from earning the respect that, say, Leonard Cohen or Jacques Brel (whose songs McKuen translated into English-language hits) or maybe even Wordsworth and Whitman tend to get. But such counterfactuals seem beside the point, not only because critical opinion is fickle and ever-changing but because it is impossible to separate the “quality” of a work from the conditions surrounding its reception.

Participating in the phenomenon of McKuen’s popularity (or conspicuously refusing) became essentially what his work was about, beyond the melancholy remembrances about lost lovers and cities at dusk. You were either on board and willing to conform, willing to let McKuen be the way you defused potent and inescapable fears about decay, sadness, anonymity, and fading love along with millions of others and thereby mastered those feelings, put them in a safe place to be admired, or you were not on board, unwilling to conform, unwilling to admit those feelings could be collectively tamed but instead must be personal demons you never stop fighting alone, far more alone than any McKuen poem could ever testify to.

One might be tempted to champion McKuen as a populist who rendered the ordinary person’s feelings and aspirations in easily digested metaphors while the culture snobs sneered. But if you listen to a lot of his music or read through his books, you might end up with the sense that his point of view has more to do with snobs than ordinary people: He seems to travel a lot from glamorous seaside city to glamorous city, indulging in late-night bouts of boozy nostalgic melancholy with little fear of economic want, issuing patronizing advice about how to feel to readers or listeners or discarded lovers or total strangers, luxuriating in emotions as if they were badges of privilege rather than the afflictions he would otherwise have you believe. (Listen to “Earthquake,” for instance.) McKuen sounds like a humble-bragger whose medium is misery; his sadness makes him more important and individuated than less sensitive or self-regarding souls.

I wonder if, when McKuen was popular, critics felt threatened not by his work’s “sentimentality” but by its familiarity, which they then labeled “vulgarity” to try to expunge it from their own sensibility. I know that is how I feel when I listen to his music. It sounds smug to me because I’ve felt those smug feelings and romanticized them privately (lacking the courage or the chutzpah to try to cash in on them). I can’t hear his poems as straightforwardly earnest, like perhaps the millions of people who bought in could. I implicate myself in these works instead, in every self-satisfied line of self-deprecation and self-pity. I recognize someone who wants to feel different from everyone else but still wants them all to feel sorry for him.

McKuen went more or less underground of his own volition in the early 1980s, which perhaps could be seen as a kind of admission of guilt. His obituaries describe him in his reclusion as severely depressed, holed up in his California home with half a million records and CDs. It’s an emblematic tableau that stands as a warning. You can retreat to the mountain with your carefully curated collection of records that prompt you to have all those important feelings that you can’t bear to experience through or with other people, but that’s not going to let you understand what all those people felt when they bought a Rod McKuen record in the 1960s and maybe even played it once or twice.

Simple and Plain


Today is Elvis Presley’s birthday. He would have been 80. Most people accept that he died in 1977, at the age of 42, which means I am older now than he ever was, a fact I have a hard time wrapping my head around.

I’m currently reading Careless Love, the second volume of Peter Guralnick’s biography of Elvis, and it is bringing me down. It’s about how fame was a collective punishment we administered to Elvis, which he would not survive. Fame allowed him to coast along when he should have been stretching himself; like a gifted child praised too much too soon, he became incapable of coping with challenges. Fame allowed his manager, Colonel Parker, to construe Elvis’s talent as a cash machine. Parker encouraged in Elvis a zero-sum attitude toward his art, so that he demanded as much money as he could get for output as superficial as they could make it, as if the shallowness implied savings, a better bargain from the forces who commercialized him. Fame transformed Elvis into a kind of CEO who inhabited his own body as if it were a factory, a capital stock, on which an enormous and ever-mutating staff relied for their livelihood. As a consequence, fame isolated him completely. His friends, no matter how much they loved and respected him, remained a paid entourage whom he could never completely believe actually loved him for real. “He constructed a shell to hide his aloneness, and it hardened on his back,” Guralnick writes in the introduction. “I know of no sadder story.”

I first got into Elvis after stopping at Graceland, his home in Memphis, during my first road trip across the U.S., in 1990. I knew very little about him, just what you sort of absorbed by osmosis from the culture. Elvis impersonators were probably more salient than Elvis himself at that point. My grandmother, I remember, had some of his later records: Moody Blue; Aloha From Hawaii via Satellite. I wanted to stop at Graceland because I thought it would be campy fun; I wanted to re-create the scene in Spinal Tap when they experience “too much fucking perspective” at Elvis’s graveside.

But Graceland was surprisingly somber, straddling the line between pathos and bathos, never letting me take comfort in either territory. It didn’t seem right to laugh when confronted with the meagerness of the vision of someone who could have had anything but chose Naugahyde, thick shag rugs, and rooms equipped with dueling TV sets. And it was genuinely humbling to recognize the desperation in it all, the dawning sense that Elvis had nowhere to turn for fulfillment and had none of the excuses we have (lack of time and resources, lack of talent) to avoid confronting inescapable dissatisfaction head on.

In one of the stores in the plaza of gift shops across the street from Graceland, I bought a TCB baseball hat and cassette of Elvis’s first RCA album, the one whose design the Clash mimicked for London Calling.


Every time it was my turn to drive, I put the tape on; listening to “Blue Moon” while driving through the vacuous darkness of Oklahoma was the first time I took Elvis seriously as a performer, the first time I heard something other than my received ideas about him. Then, like a lot of music snobs, I got into the Sun Sessions and the other 1950s stuff and declared the rest of his career irrelevant, without really knowing anything about it. In recent years, I have overcorrected for that and listened mainly to “fat Elvis” — the music he made after the 1968 Comeback. I’m amazed by moments like this, a 1970 performance of “Make the World Go Away.” Wearing a ludicrous white high-collar jumpsuit with a mauve crypto-karate belt around his waist, he mumbles a bit, tells a lame joke about Roy Acuff that nobody gets, saunters over to the side of the stage to drink a glass of water while the band starts the saccharine melody, then out of nowhere hits you with the first lines, his voice blasting out, drawing from a reserve of power that quickly dissipates. Then he skulks around the stage, visibly antsy, as if trying to evade the obvious relevance of the song’s lyrics to his sad, overburdened life.

I never paid any attention to 1960s Elvis, but now, reading through Guralnick’s dreary, repetitive accounts of Elvis’s month-to-month life in the 1960s, when he flew back and forth mainly between Memphis, Los Angeles, and Las Vegas to accommodate a relentless film-production schedule — he made 27 movies from 1960 to 1969 — I am filled with an urgent desire to somehow redeem this lost era of his career, to study it and find the obscured genius in it, to rescue it through some clever and counterintuitive readings of his films or the dubious songs he recorded for them. I just don’t want to believe that Elvis wasted the decade; I don’t want to accept that talent can indeed be squandered, that it doesn’t instead find perverse ways to express itself even in the grimmest of circumstances. But this was an era when he was cutting material like “No Room to Rhumba in a Sports Car” (Fun in Acapulco), “Yoga Is as Yoga Does” (Easy Come, Easy Go), “Do the Clam” (Girl Happy), “Queenie Wahine’s Papaya” (Paradise, Hawaiian Style), and “Song of the Shrimp” (Girls! Girls! Girls!). I’m not sure it’s all that helpful to pursue a subversive reading of Clambake. What there is to see in Elvis’s movies is doggedly on the surface; as Guralnick makes clear, these films were made by design to defy the possibility of finding depth in them.

At best, a case can be made for appreciating Elvis’s sheer professionalism in this era, his refusal to sneer publicly at material far beneath him. Sure, he was on loads of pills, and the epic-scale malignant narcissism of his offscreen behavior was establishing the template for all the coddled superstars to come. But he wasn’t a phony. If he was cynical, it was a hypercynicism that consisted of an unflaggingly dedicated passion for going through the motions. Guralnick describes Elvis in some of these films as being little more than movable scenery, a cardboard cutout, but he is a committed cardboard cutout. A bright empty shell with a desultory name and job description (usually race-car driver) attached, Elvis wanders through an endless series of unconvincing backdrops, reflecting back to us the cannibalizing effects of fame, inviting us to try to eat the wrapper of the candy we already consumed.

Tim Burton’s “Big Eyes”

Tim Burton’s Big Eyes makes a strong case that Walter Keane was a first-order marketing genius and his wife Margaret, whose paintings he appropriated and promoted as if they were his own, used his marketing talents up until the moment she could safely dispense with them. Given that Margaret Keane apparently cooperated with the making of Big Eyes (she painted Burton’s then-wife Lisa Marie in 2000, and I think she appears at the end of the film alongside Amy Adams, who plays her), this seems sort of surprising. On the surface, the movie tells the story of her artistic reputation being rightly restored, but that surface is easily punctured with a moment’s consideration of the various counternarratives woven into the script. Then we are dealing with a film about a visionary who turned his wife’s hackneyed outsider art into one of the most popular emblems of an era and who has since been neglected and forgotten, despite inventing art-market meta-strategies that have since become ubiquitous. The movie seems to persecute Walter because the filmmakers believed it was the only way they could get us to pay enough attention to him to redeem him.

I went in to see Big Eyes expecting a cross between Burton’s earlier Ed Wood and Camille Claudel, the biopic about the sculptor whose career was overshadowed by her romantic relationship with Rodin, whom she accused of stealing her ideas. That is, I thought it would be about how female artists have struggled for adequate recognition, only played out in the register of kitsch pop art. I figured Burton would try to capture something of whatever zany, intense passion drove Margaret Keane to make her “big eye” paintings, much as he had captured Ed Wood’s intensity in the earlier film. We would see a case made for the legitimacy of Margaret’s work, which is now often seen as campy refuse, maudlin junk you might buy as a joke at a thrift store, at the same level as Love Is… or Rod McKuen poetry books.

But Burton doesn’t make much of an effort to vindicate Margaret on the level of her art. No explanation is suggested for why she paints or why audiences connected to her work. Rather than giving the impression that no explanation is necessary, that its quality speaks for itself, this omission has the effect of  emphasizing the film’s suggestion that the significance of her painting rests with the innovative job Walter performed in getting people to pay attention to it, operating outside the parameters of the established art world. Meanwhile, Margaret’s genius remains elusive, as unseeable as it was when Walter effaced it. Margaret is a bit of a nonentity in the film, locked in a studio smoking cigarettes and grinding out paintings at her husband’s command, much as if she were one of Warhol’s Factory minions, while Walter is shown as a dynamic, irresistible figure who comes up with all the ideas for getting her work to make a stamp on the world. In fact, in the script, Burton likens Walter to Warhol multiple times and the movie even opens with a Warhol quote (from this 1965 Life article) in which he praises Walter Keane: “I think what Keane has done is just terrific. It has to be good. If it were bad, so many people wouldn’t like it.”

Since this quote appears before we’ve seen anything of the story, I took it as Burton’s attempt to use a name-brand artist’s imprimatur to validate Margaret’s work in advance for movie audiences who possibly wouldn’t read any irony in Warhol’s statement — Burton could laugh at his audiences and show his contempt for their expectations by rotely fulfilling them, as he had with Mars Attacks! and the Planet of the Apes remake. But (as usual) I was being too cynical. Afterward, I started to think Burton was in earnest in choosing this quote, and that Big Eyes is instead subverting the expectations liberal audiences might have of it being a stock feminist redemption story. It mocks those audiences, mocks the indulgence involved in using depictions of the past to let ourselves believe we have now somehow transcended the bad old attitudes of sexism. The somewhat smug and self-congratulatory view that “Nowadays we would accept Margaret Keane as a real artist and see through Walter Keane’s tricks” is complicated by the fact that Margaret’s art is kitsch and that Walter’s tricks come not at the expense of art but are instead the sorts of things that nowadays chiefly constitute it.

Margaret is depicted as the victim of Walter’s exploitation, but that view is too simplistic for the film that ostensibly conveys it. It makes Margaret passive, intrinsically helpless, easily manipulated. So simultaneously, Big Eyes gives a convincing portrait not of Margaret’s agency, as you might expect, but of Walter as a passionate, misunderstood genius, a Warhol-level artist working within commercialism as a genre, doing art marketing as art itself with the flimsiest of raw materials and executing a conceptual performance piece about identity, appropriation, cliches, and myths about creativity’s sources that spanned a decade. When the script has Walter claim that he invented Pop Art and out-Warholed Warhol with his aggressive marketing strategies, we can read it “straight” within Margaret’s redemption story as a sign of Walter’s rampant egomania. But the film actually makes a solid case for that being plausible, stressing how Keane was able to bring art into the supermarket before Warhol brought the supermarket into art.

Similarly, when Margaret discovers that the Parisian street scenes Walter claimed were his own while wooing her were actually painted by someone else and shipped to him from France, she is shocked, and we are seemingly supposed to share in this shock and feel appalled. But it makes as much sense to want to applaud his audacity and ingenuity, his apparent ability to assemble and assume the identity of an artist without possessing any traditional craft skills at all. He’s sort of the ur-postinternet artist.

All of Big Eyes is shot as if the material has been viewed naively through the child-like big eyes of one of Margaret’s subjects, a perspective from which Walter’s acts just seem selfish and insane. But Burton is careful to allow viewers to regard the action from a more sophisticated perspective, which reads between the lines of what is shown and looks beyond the emotional valences of the surface redemption story being told. Margaret’s character always acknowledges Walter’s marketing acumen in the midst of detailing his misdeeds, and she never explains why she helped Walter perpetrate his fraud, other than to say, “He dominated me.” From what Burton shows and has Margaret say, this domination is less a matter of intimidation than charm. As awful as his behavior might have been in reality, Walter is little more than a cartoon villain in the film’s melodramatic domestic scenes; the misdeeds Burton depicts are Walter’s getting drunk and belligerently accusing one of Margaret’s friends of snobbery for rejecting representational art, and his flicking matches at Margaret and her daughter when he is disappointed about her work’s reception.

Of course, Walter’s primary crime is making Margaret keep her talent a secret (an open secret, apparently) — “from her own daughter!” even. He capitalizes on a sexist culture to take credit for Margaret’s ability, and then uses the specter of that sexist culture to control her, while more fully enjoying the fruits of what her ability brought them — the fame, the recognition, the celebrity hobnobbing, and so on. But Big Eyes also makes a point of undermining that perspective to a degree, making it clear that Margaret (in part because of that same sexist culture) never would have had the gumption to make a career out of painting without Walter’s support, and certainly she wouldn’t have been able to follow through with all the self-promotion necessary to sustain an art career and allow it to thrive. We are told that she didn’t want the spotlight; at the same time we are supposed to see her being denied the spotlight as part of her victimization. Walter helped create conditions in which Margaret could paint as much as he exploited the inequities of those conditions. And Margaret triumphed in ways that go far beyond the limited accomplishment of earnest “self-expression.”

During the trial scene, which is supposed to be Margaret’s ultimate vindication, one instead gets a sense through her testimony, and Walter’s outlandish performance as his own lawyer, that he will stop at nothing to put across his vision of the world and himself, despite not having any talent with traditional materials of representation. Doesn’t that make him the greater artist, the film seems to suggest, that he can use other people as his medium? All Margaret can apparently do is the parlor trick of making a big-eye painting in an hour in the courtroom. Whereas Walter could get Life magazine to interview him and tell his story, he could contrive an elaborate background for how he suddenly came to paint waifs and kittens, and he could get his wife to willingly make all of his work for him and let him sign it as his own.

It is hard to walk away from Big Eyes without wondering just how much Margaret and Walter collaborated on the character of “Keane,” the artist who made compelling kitsch, and it’s hard not to feel sorry for him when before the ending credits we are shown a picture of the real Walter Keane, with text explaining how he died broke and penniless while he continued to insist on his own artistic genius. I wondered if in working with Burton, Margaret wasn’t still covertly collaborating with Walter, muddying the waters around their life’s work and letting some ambiguity flourish there. This impression, more than anything depicted explicitly in the film, gave me the strongest sense of Margaret’s character, beyond cliches of resiliency and self-actualization.