The Viral Virus


Content mills turn out listicles and articles inviting us to assess our mental condition — and share it. When the mechanisms of social media tempt us to self-diagnose for attention, how can we tell when we are really afflicted?


AT its core, social media is a public pinboard of self-expression, a set of arenas where anything we post inevitably doubles as a signal of our identity. Sometimes this process is indirect — a link to a New York Times piece about Syrian refugees subtly informs people about your commitment to be informed. Other times it is explicit, as when you link to “19 Things Only Cat Owners Know to Be True” to inform everyone that you own a cat.

The more explicit mode of self-expression has led to the proliferation of identity-bait articles, cooked up not to be especially informative but to be shared as condensed bits of the whole you. These a la carte advertisements for oneself are meant to be specific enough to say something about a particular person but general enough to go viral: “10 Things Only Left-Handed People Will Understand,” for example. The nature of the humble-braggy “grievances” varies from topic to topic — a listicle about having large breasts might offer, “Bra shopping is hard, but at least you can fill out a cocktail dress!”; one about insomnia will assert, “You’re tired all the time, but you can get so much work done!” — but each builds toward the same formulaic takeaway: “I’m X and I’m great!”

With all the opportunities social media offers to share, we are invited to proclaim as many of those Xs as we can. The potential dimensions of our personality are compounding exponentially as a result, with each offering us an ersatz community to belong to and take pride in. We’re using pre-packaged, shareable content to articulate everything from allergies and pet peeves to nuanced distinctions within and between categories like race, gender, sexuality, and mental health.

It’s here that listicle publishers cease to be merely viral fluff factories. The rise of mental-health listicles began with the Great Introversion Declaration of the summer of 2013, after Buzzfeed’s “31 Unmistakable Signs You’re an Introvert” went viral, triggering a flurry of copycat content. Seeing how popular it was to self-diagnose and publicize one’s introversion, the Huffington Post asked users to similarly assess their anxiety levels, publishing “7 Easy Hacks To Help You Deal With Anxiety,” “You’re Just 5 Minutes Away From Being Anxiety Free” and “Will Anxious Parents Have Anxious Kids?” Around the same time, ABC News syndicated a piece titled “12 Signs You May Have an Anxiety Disorder” and Buzzfeed published “26 Problems Only Anxious People Will Understand” (which was followed less than a year later by “24 Problems Only Anxious People Will Understand”).

Judging by how often such pieces are shared, we’re all surrounded by anxiety-ridden introverts or convinced we’re anxiety-ridden introverts ourselves. But most of this content is only loosely tied to what it means to have anxiety or be an introvert.

When people use shareable content to affiliate themselves with mental-health conditions, it can help expand the visibility of those conditions in a society that tends to suppress them. This can be empowering. It breaks down stigmas and opens lines of communication. But it can also be trivializing, particularly given the incentives social media provides users to spuriously lay claim to illnesses that only medical professionals are qualified to diagnose.

Because they are optimized for identity signaling rather than for being informative, these listicles tend to romanticize mental-health conditions — anxiety is really just a pesky side effect of intelligence; introverts are people with rich interior lives who just hate surprise parties. They apply an attractive sheen to potentially concerning behaviors, boiling them down to overthinking or occasional antisociality. They broaden the definition of these conditions to allow anyone to claim them in exchange for sympathy or applause, increasing the chances the listicles will get wider distribution on social media.

It’s clear why Buzzfeed and the Huffington Post make this content: it drives clicks. But why do people share it? Self-diagnosis in the form of engagement with viral content becomes a successful personal-branding strategy. It is participatory catharsis (“Anxiety is so annoying, amirite?”) as well as a source of attention and praise (“Go me, I’m taking care of myself!”).

Identity-bait listicles invite readers to indulge a kind of cyberchondria for attention, a like-driven version of Munchausen syndrome. They propose a bare-minimum, pop-psychology-inflected definition of a mental-health condition, making it lowest-common-denominator enough to allow the broadest base of readers to identify with it and possibly share it. While this may raise awareness of a condition, it also raises the level of confusion about it.

There is a fine line between the breakdown of stigma and the distribution of toxic misinformation. Finding outlets for solidarity or publicly performing self-care can be therapeutic. But sharing a listicle is not a sufficient replacement for therapy. In some cases, it can make the truly afflicted seem like unserious attention-seekers. Just as the gluten-free fad can belie the seriousness of celiac disease, the dilution of what counts as anxiety can make it seem like the clinically anxious are just stressed out.

Lots of people worry, but generalized anxiety disorder is a specific condition with particular diagnostic criteria. More likely than not, someone actually suffering from anxiety will not be relieved to see their stressors catalogued as a series of bullet-pointed quips. At worst, they may be discouraged from seeking help if they are led to believe their struggles are something ordinary that everyone suffers from. How can you know for sure whether you have anxiety after probing the possibility online?

Pharmaceutical companies certainly hope to persuade you. Self-diagnosis on sites like WebMD can drive drug-company profits, and the site is well supported by the industry’s ad dollars, as Vox recently reported. Eli Lilly once went so far as to rig a WebMD quiz on depression to suggest that anyone who took it was at risk of major depression. The wide net cast by personality-based listicle content works similarly, generating undue concern in order to engage the widest possible audience.

Clickbait listicles on mental health contribute to a fun-house mirror of self-expression that sets us all at an introspective disadvantage. Reading up on conditions can prompt us to imagine that we are actually suffering from them: Who among us hasn’t become convinced a cough was a sign of a malignant lung tumor after searching WebMD? With mental illnesses, which may lack measurable or disprovable physiological symptoms, the access to diagnostic information online sets up a torturous feedback loop: the search for the source of our worries inevitably incites additional worries. Hypochondria, after all, means “illness anxiety,” and no one benefits from a world in which everyone is anxious about having anxiety.

You can be stressed about achieving all of your life goals and not have generalized anxiety disorder. You can enjoy watching Netflix alone and still be a little bit of an extrovert. It’s complex! You’re complex. But viral social media doesn’t thrive on such complexities. The more we try to capture ourselves in the confines of shareable parameters, the further away we get from understanding who we are.







The Chaparral Insurgents of South Texas



A new exhibit cops to state-sanctioned murder, but not vulnerability

THE first thing I saw as I reached the top floor of the Bullock State History Museum in Austin, Texas was the large sepia portrait of two swashbuckling Texas Rangers on horses, the taut rope of their lassos converging down toward something out of frame. The unseen complete portrait features the mangled corpses of Abraham Salinas, Eusebio Hernández, and Juan Tobar, three Tejanos in South Texas, at the ends of the Rangers’ ropes. “Postcards depicting violence against minorities were common novelties during the early 20th century,” a small placard next to the photo notes.

Between 1910 and 1920, thousands of Tejanos were murdered in the hot, dry borderlands by Texas law enforcement and white vigilantes. My family’s history is tied up in genocide: Relatives on my father’s side were swindled out of most of their ranch land by the Kleberg family, a longtime ally of the Texas Rangers that now owns a million acres in South Texas (known as the King Ranch). Hundreds had their land seized by Anglos under the protection of the Rangers, who eventually acted as a death squad to smash an armed Tejano resistance to oppressive white rule. The state of Texas has largely purged these events from public history records, and the exhibit at the Bullock Museum, Life and Death on the Border 1910-1920, which was on display from January 23 through April 3, was an attempted reckoning.


Dark Pools


Narratives of financial complexity obscure how capitalist realisms are made—and might be unmade

I own a map that has been displayed, variously, in my living room, beside my writing chair, and above my bed, where it still hangs today. The map depicts the global shadow-banking system—the blanket term covering any unregulated activity that creates credit—and is all boxes: 350 tiny rectangles representing different financial institutions and instruments sorted into pairs, color-coded with combinations of 15 bright stripes for various forms of credit. For every self-evident label, there are four that aren’t, and if an inquiring layperson managed to work their way through the map looking up every foreign term they hit (MTN: medium-term note), there is still the problem of fitting it all into the larger spatial scheme of big (also color-coded) blocks, bracketing marginalia, and solid and dashed lines tracking whatever sort of relationships they track. I’ve never tried.

The map was published by the Federal Reserve following the 2008 crash, and though the image was made publicly available, it is intended for experts. Some mainstream-media outlets paid attention anyway; the Wall Street Journal, in one such instance, published a blog post titled “A Map of Our Ridiculously Convoluted ‘Shadow Banking’ System.” “Oh,” the post deadpans for an opener, “So that’s why our financial system almost collapsed.”

The post goes on to attempt a partial explanation, but that first line is telling. It has the ring of a familiar joke, one that practically writes itself. For literary critic Leigh Claire La Berge, it’s part of the abstraction side of popular financial-media discourse that tends to unfold through the twin poles of scandals and abstractions. In her book of the same name, La Berge argues that during the 1980s, finance became the discursive metonym for the economy at large, and a rhetoric of abstract complexity became a favorite method for talking about finance. “Is capital, or life, more abstract than it was 30 years ago?” she asks. In some ways, it doesn’t matter. The rhetoric precludes the question: “Abstraction, by its very nature, isn’t quantifiable.”

La Berge argues further that finance is uniquely constructed by its popular representations—through the whole array of ways finance gets seen, from market analysis in the Wall Street Journal to films like Oliver Stone’s Wall Street to so-you-want-to-be-me CEO autobiographies like Donald Trump’s Art of the Deal. “Representation,” she writes, “constitutes the value [finance] is supposedly representing.” Finance isn’t just shaped by narration but requires it for substantiation. This is what La Berge, borrowing from and building on Mark Fisher, calls capitalist realism: the chicken-or-egg manner in which finance capital and new cultural forms help one another emerge.

As that process unfolds, the slippage between the two gets dizzying. Just days before Black Monday in 1987, for example—at the time the largest single-day stock market drop in Dow Jones history and soon after a central symbol of the new era of finance—Tom Wolfe’s Bonfire of the Vanities was released; just days after Black Monday, La Berge notes, major papers looked to the novel to lend narrative to the crash. That year the Wall Street Journal touted the growing market for books on finance, and Wolfe cashed in with magazine thinkpieces calling for a realism that could capture the newly complex world of finance. Wolfe considered himself the vanguard of this realism. He had, after all, written Bonfire with the help of informants at Salomon Brothers, a top ’80s investment bank. Meanwhile, the press filtered evolving new realisms through the masculinist language of killers, cannibals, and predators that traders were proudly using to describe themselves, and the incoherent stream-of-consciousness style and bland name-dropping of CEO-penned books simultaneously explained and obscured what, precisely, constituted an insider’s experience of Wall Street.

For all the developing talk of an unfathomable Wall Street, though, financialization was from the start intimately embedded in ground-level economic experience. La Berge points in particular to the way it was enabled by the advent of personal banking. After the gold standard was abandoned, the resulting flood of currency presented financiers with a problem: where to go with all of it. When other strategies proved not lucrative enough, they turned to personal banking, pushing consumer bank and credit cards and the development of new types of savings, checking, and retirement accounts. “Third world loans weren’t going to take [the bank] where [its leadership] wanted it to go, nor would commercial lending,” La Berge quotes the business journalist Joseph Nocera, “only the consumer could take [it] there.”

One of the central images that attended this turn, according to La Berge, was the ATM. As they were rolled out around New York, the machines showed up in a series of news stories, most of which reported on the various ways they confused or worried people. In the words of one bank manager, “people are wondering where the bank is.” There’s a not-often-mentioned ATM at the center of White Noise that just might be the proxy-narrator for the entire book. And a whole host of them populate American Psycho, drawing the historical connection elided in other texts—the direct one between high finance and personal banking—with a calculus that was pretty simple to grasp. The trader Patrick Bateman visits them obsessively, often for money he doesn’t need, an activity he likes to follow by randomly killing someone.

 

I met Cassie Thornton after getting recruited into an art project of hers, a piece in banks called Physical Audit. Physical audits were a series of financial experiments conducted at banks around New York. In one, auditors ran fingers and hands over bank surfaces searching for dirt; in another, people moved as one body while carrying out ATM transactions; in others, people pet a dog named Truman and then the walls, faked blindness while being introduced to the space à la Helen Keller, and opened accounts with as many names on them as possible. Whatever the outward metaphoric resonances, the inquiry was most interestingly about feeling bodily discipline—how our bodies did and didn’t comfortably move, what our eyes did and didn’t habitually see.

The piece was also, intentionally or not, about very particular kinds of bodies. Not everyone could have run such a Physical Audit, or at least not to the same effect. As a group, we were more white than not, middle class-ish, almost all young, all able-bodied. Most of us looked like “artists.” We were just the right sort of visible to be left alone in banks to play.

In her book Debt to Society, cultural theorist Miranda Joseph writes about the ways people are constructed through accounting practices, broadly understood—not just literal banking but related machinations in criminal law, popular discourses about responsibility and trustworthiness, and ways of valuing knowledge. It follows that people are constructed differentially through accounting practices, according to race and class and gender and geography and family structures. Blackness is a central referent, and Joseph spends a chunk of the book surveying the ways it has been historically constituted in especially close concert with narratives of indebtedness, untrustworthiness, and, crucial to the era of financialization, irresponsibility. She says gendered norms matter, too; specters of the shopaholic and the nervous, tight-fisted saver serve as negative frames for correct credit behavior. If personal banking is a key place high finance makes itself seen in everyday life, it is also an object that must itself be teased out, its systems of suppositions, points of access, and manifestations in specific bodies brought into focus. To that end, Joseph calls for counter-accounting practices that pay attention to the nuance of people’s different lived realities of finance.

But how that work should look isn’t necessarily obvious. I got the chance to interview Joseph and, while speaking off-handedly about student debt, complained about 18-year-olds getting stuck with loans they didn’t fully understand. She pointed out how that logic could be extended—to people of certain races, in “bad” neighborhoods, at nursing homes and assumed doddering—and mentioned a whole body of scholarship that documents the ways people understand the risks of taking on “predatory” credit (here the traders’ rhetoric survives) far better than they’re often said to. “We should be careful about buying too completely into the idea that all we need is financial education,” she said. And not just because doing so can get condescending fast, but because it erases what people already know about how finance makes their lives. The counter-accounting trick, it would seem, is to both be critical about what needs to be known and maximize what already is.

Michael Lewis’s Flash Boys might appear a strange place to draw inspiration. Lewis is a prime purveyor of the popular explainer-of-finance genre, and Flash Boys, which reports on exploitative high-frequency-trading practices, inevitably mines the rhetorical status quo that precedes it. Yet there are useful cracks to that facade. If at times the book presents finance as incomprehensible, it also presents it as opaque, and the two tend to be linked, if only implicitly. The opacity is a relational one. A quant doesn’t know what his algorithm does in the world because his department is purposely isolated from others in the firm. Mutual-fund managers find stock prices rising mid-trade because hidden advantages are doled out to high-frequency traders. Big investment banks open unregulated private exchanges called dark pools not (as the official logic goes) to protect clients but to protect their own profits from more agile HFT competitors and gain, as a bonus, a screen from behind which to better fleece clients. On the whole, the financial world depicted in Flash Boys seems impenetrable less because it is fundamentally too complex to grasp than because it is so systematically full of obfuscations.

Secretive financial wrongdoing has long been a theme in popular representations of finance, to be sure. But, as La Berge argues, it’s often narrated in the language of criminality—the scandal side of scandals and abstractions—which ultimately frames exploitative behavior as the violent exception to a system left otherwise unexamined. Flash Boys isn’t overly interested in fingering particular culprits, and, in a book practically destined for bestsellerhood, that disinterest proved anxiety-producing in financial quarters.

Consider a CNBC interview on the day of its release between Lewis, a Flash Boys informant and trader named Brad Katsuyama, and William O’Brien, the then-president of a private exchange called BATS that was implicated in some of the book’s worst allegations. The interviewers spent substantial energy trying to establish whether Lewis and Katsuyama thought the stock market was “rigged,” and O’Brien seemed fixated on dispelling the systemic implications of that notion. “Shame on you both for falsely accusing literally thousands of workers,” he blustered at the start of the interview, and only grew more aggressive from there. The interview soon went viral, as did Flash Boys as an explanatory touchstone. In one incestuously capitalist-realist strand of that process, news outlets from Bloomberg to CNN mentioned the book in reports on an investigation into HFT firms by New York attorney general Eric Schneiderman, who himself characterized the effort as an attempt to understand and regulate “Insider Trading 2.0,” a concept he introduced with reference to Oliver Stone’s Wall Street. Insider Trading 2.0 “isn’t about some Gordon Gekko–like characters gobbling up companies using information about those companies that no one else has,” he said. “In some ways, it’s more insidious.”

The Wall Street truism on HFT is that it provides markets with necessary liquidity; Flash Boys tells the story of a group of financial workers trying to produce a counter-account to that line. Katsuyama, whose inquiry into HFT drives the book, began to ask questions when he started seeing his stock prices rise mid-trade every time he tried to execute a big order as a trading manager at the Royal Bank of Canada. When no one could explain it to him, he convinced his bosses to earmark $10,000 a day to lose testing hypotheses about what was going on. He assembled a team of people with complementary expertise—hard- and software, electronic trading strategy—and began conducting experiments, most of which bombed, because the guesses about the market on which they were based were wrong.

There’s no simple escape hatch from the ways we’re financialized, Joseph argues. For her, the task is not to find new and perfect modes of accounting but to rework available ones toward more just ends. Flash Boys says little about how the exploitative practices it details affect people in their daily lives, and its prescription for curbing those practices is likewise vague. That doesn’t mean the book doesn’t contain tools for someone else to raid. After Physical Audit was done, Cassie made a short video with some footage she’d taken. The video consists entirely of shots of participants’ auditing hands set to pulsing Muzak. It is all repetitive action, closeups of bank surfaces, and looping insinuations of deep affect and desire. It looks like the beginnings of a data set. That data doesn’t represent everyone’s capitalist realism, but it represents Cassie’s, and mine, and if that distinction can be preserved, her project suggests ways to explore how people iterate and are iterated by finance. For those without $10,000 a day to spare, the methodology might begin with the body—by watching its position, working to feel how finance invisibly guides its hand.

 

There is likely something to be made of the fact that both shadow banking and dark pools are industry-accepted terms; I haven’t fully sussed it out but would guess it has to do with a counter-account financial institutions could call on in response to growing calls for transparency. Don’t worry, it’s probably already reassuring us: opacity is an important part of a healthy market system.

In late 2014, Fortune reported that the Federal Reserve was updating its map. This one comes as an estimate puts shadow banking at 80 percent of the U.S. banking sector (as compared to 20 percent globally) and will attempt to record every major institution in the U.S. shadow banking system. “It is the most complicated map you have ever seen,” said Stanley Fischer, the Fed’s vice chairman. Maybe. For most people, there’s also the question of being able to draw its counter-map: that of the shadow banked.


Discipline and Pleasure


Is addiction a deeper form of distraction or a desperate escape from it? What the video game Dota 2 can teach us

DEPRESSION and addiction are hard to distinguish when they happen simultaneously; they seem to overlap and reinforce each other, becoming an endless cycle. I know I’d be less depressed if I stopped playing Dota 2, but I don’t know how I’ll find the willpower to stop playing Dota 2 as long as I’m this depressed.

My depression didn’t start when I downloaded the game. And 500 hours of play later—an amount that might seem absurd to the uninitiated but which marks me as a novice in the Dota 2 “community”—I know it won’t disappear when I stop playing. Nevertheless, these days Dota feels like the specific block to my ability to live a happy life. It’s also the only thing I want to do. Even as I type these sentences I realize that my body is tilting left, literally straining toward the computer in the other room on which I play.

User reviews of Dota 2 on the Steam marketplace, where one gets the game, show my experience to be typical. Rather than rate the game from, say, one to ten, Steam has reviewers choose to either recommend (thumbs up) or not recommend (thumbs down) a game as part of their review. The reviews also automatically and handily include the number of hours the user has played. Phux, with 2,734 hours in Dota 2, gives it the thumbs up with a four-word review: “Regrets, so many regrets.” Inkubeytor, with 4,412 hours in game, does not recommend it, writing only “Suffering.” A user named “happy new year” (7,885 hours) recommended Dota 2 on July 27 with “HELP ME,” while Fierce (1,550 hours) does not recommend it: “PLEASE GIVE ME MY LIFE BACK.” About two thirds of Dota’s reviews in the marketplace are in this vein, if not all so pithy.

Though Dota 2 is entirely built around multiplayer engagement and teamwork, the first genuine feeling of social togetherness and empathy I ever got from the game was when I read these reviews/cries for help. I also only read these reviews because I was stuck in the Steam marketplace waiting for Dota 2 to redownload, after I had uninstalled it 20 hours earlier in a hopeless attempt to be free.

 

FOR those blissfully unaware, Dota 2 is a multiplayer online battle arena (MOBA), which is a strategy-game subgenre somewhere between MMORPGs (massively multiplayer online role-playing games like World of Warcraft) and RTS (real-time strategy games such as Starcraft II). MOBAs take multiplayer gameplay, vast player populations, and RPG-style leveling up from MMORPGs and join them with the resource management and direct head-to-head competition of an RTS. Though Dota 2 is not the most popular MOBA—that would be League of Legends, with a monthly player base of 67 million, or 1% of the world’s population—it still boasts 12 million unique players a month. It is also one of the most important games in making e-sports big business: Dota 2 has the best-funded tournament in professional video gaming, and its most recent annual championship, the 2015 International, featured a prize pool of over $16 million.

Each game of Dota lasts, on average, around 40 minutes and comprises 10 players total, five on each side, who attempt to storm the other team’s base and destroy their central structure, called the Ancient (hence Dota: Defense of the Ancients). Every player controls a separate “hero” chosen from a field of 111 available heroes, all with different strengths, weaknesses, and tactics, different strategic modes and peak power timings. Each of these factors is also influenced, combined with, or countered by those of other heroes, both those on your team and the opponents’. Just choosing heroes, which you do simultaneously with the opposing team, is a huge component of the game and determines both your win conditions and how each player will try to play.

In every game, all heroes start with zero experience and 625 gold, carrying nothing over from previous matches. Players gain gold and experience points through killing opposing heroes or the other team’s non-player-controlled monsters, called creeps—little goblins that spawn constantly for both teams. Players use this gold and experience to buy items from among the 142 available and to level up spells and abilities. The map is always the same, the creeps always spawn in the same pattern, the available items are constant, and in general the game setup is static. The field, rules, and goals are always the same, which makes Dota and other MOBAs similar to traditional sports. But that basic stasis is also a key part of the game’s addictiveness: Every match is simultaneously totally identical and completely different.

With so many possible combinations of heroes, items, and scenarios, most of them coming to a head in split-second confrontations reliant on intense mouse and keyboard speed, there is an almost infinite learning curve (not to mention the fact that the game is regularly patched, with the developers changing the nature of abilities, items, and heroes). An entire game can be won or lost by the particular order in which one of the 10 players decides to purchase their items, or by one player being a few steps out of position and getting caught out before a crucial fight. The game is hard—really hard—and the most famous introductory guide to it is called “Welcome to Dota, You Suck.”

The coordination, strategy, and reflexes that Dota demands would be challenging enough on its own. But you have to play with nine other people—for the most part, random people, strangers, of whom most, on U.S. servers, will be white boys and probably well-off ones, considering the hardware required. Of these, at least one is likely to be non-communicative and ragey, will inevitably play like shit, and then yell at everyone else for throwing the game. You have to hope that he is on the opposing team.

When you’re playing with a good, well-coordinated team (or just playing well on your own) you can enter an almost euphoric state of competitive flow. But most of the time you’ll watch teammates—or yourself—wander aimlessly around the map, getting killed seemingly for no reason, all the while telling each other to buy wards, throw their ultimate, or stop being such noobs. And beyond the game-related insults, there is the homophobia, racism, and misogyny endemic to any space dominated by well-off white boys, who, in the case of Dota, also yelp xenophobically about the Peruvian, Filipino, or Russian players who are well-represented in the Dota community.

In other words, Dota 2 puts players in a dysfunctional and horrifying social space while offering an addictive set of opportunities to grow individual skills and exhibit mastery in competition. No wonder it can feel so familiar. No wonder there’s so much money in it.

 

IN trying to deal with depression by losing myself in meaningless activity, I stay right where I am, only a little more so. Whole lives, no doubt, can be spent in such holding patterns.

It is hard to explain to people that you are emotionally incapable of basic tasks: that you literally can’t do the laundry, can’t reply to a text message, can’t give more advance notice before cancelling. But if it’s hard to explain to others, it’s equally hard to explain to yourself. I don’t know why I can’t send an email right now, when a lot of the time it’s the easiest thing in the world.

Video games, and addictions generally, give depression an explanation: “I’m not emailing anyone because I’m spending all my time playing this game.” This is still really depressing, but at least it makes sense. You know where the time goes—you can see what happened, the hours are (depressingly) tracked in game. Without the metrics of addiction, the days just melt in a morass of incapability, a catatonic ennui that consumes your time without reason.

Addiction as a response to depression is, in a certain way, the response of a perfect capitalist subject. The system’s requisite growth depends on the generalized principle that our pleasure comes from increasing consumption: More will make us happier. The addictive impulse attempts to salvage this ideology from the disappointments it repeatedly delivers. Rather than reflecting on the fact that consuming more never provides the promised happiness, addiction just keeps upping the ante: just one more game, one more win will do it.

Addiction is thus an effort to reconcile yourself with an abusive society that makes unlimited demands of its subjects. But it gives the game away that these addictions are seen as pathological only when they make you unproductive—i.e., drinking becomes a “drinking problem” when it interferes with your work or the reproductive labor of your personal relationships. Addiction is a produced, fully anticipated response to the vicissitudes of consumer capitalism and a diagnosable pathology of legal consequence.

This makes it an incredibly effective weapon of control. Not only is addiction presupposed, but if you are not addicted in the right way, the state can intervene with punishment. Contraband drug markets (and the concomitant wars on them) produce optimal consumer-subjects while also generating a social “crisis” that allows the state to intervene and enforce the racialized, gendered, and classed stratification necessary for maximum profit production for the few who benefit from the system.

 

THE addict, then, can be recognized by her overidentification with capitalism’s ideological promises. From this vantage point, drug addiction appears as a sort of utopian version of consumption: There is no use value to drugs except enjoyment; you directly buy “pleasure.” This validates the promise that consumer goods can provide pleasure without complications, mediations, or social relations to facilitate that pleasure.

If the drug addict, in this sense, is the too perfect consumer, whose extreme consumption ultimately makes them unfit for further productivity and consumption, the video game addict is the too perfect worker. To see this, it helps to recognize how many video games are utopian work simulators: You advance and progress by getting better and better at an expanding series of repetitive gestures. As you put more time into the game, the keystrokes transition from deliberate and difficult into muscle memory, and you go from being focused on what your hands are doing to making choices on behalf of your character, eventually inhabiting the fantasy of their power and ability. The repetitive gestures become your skills, your abilities, rather than those of a diegetic avatar. You become capable of making instantaneous decisions and acting on them with maximum effectiveness.

This is the pleasure of learning, of “building knowledge,” even if done within a closed system that makes it both more reliably achieved and more meaningless. There’s a reason both marketers and game reviewers always discuss how many hours of gameplay you’re liable to get from a particular product. It is desirable to lose countless hours memorizing and studying an intricate system of rules and effects, to imagine endless combinations of outcomes of different wizard battles. That this learning occurs within the closed and technologically mediated context of a video game makes it difficult to transform the skills into something meaningful, consequential, potentially liberating or socially constructive.

Video games rechannel what would otherwise be an impulse toward real unproductivity into a form of consumption that reinforces the pleasures of work. In video games, discipline is pleasurable, designed and done for fun, and it places you into a fantastic and fictional world in which you are empowered beyond human possibilities. In the midst of gameplay, you enter that vaunted neoliberal state of flow, you achieve Malcolm Gladwell’s mastery in far fewer than 10,000 hours, you are working at something. Discipline, learning, and productivity melt together into an ecstatic experience of achievement, achievement whose pleasures are individual and internal.

This “flow” is stripped of social meaning and decontextualized from networks of power. It makes any repetitious activity—and by extension, any kind of work—capable of appearing as individual progression, creative production, skill learning, and strength building. The ease with which work, exercise, and other disciplinary tasks have been “gamified” indicates how much games are already about discipline to begin with.

Of course, there is a whole world of games that do not fit the above description, that approach games from a more surreal or liberatory or creative or philosophical angle. Games built around communal storytelling—for example, the Powered by the Apocalypse family of tabletop role-playing games, or the avant-garde work being done on Twine and other open-source game-development platforms—depend much less on a player’s technical or tactical mastery of gameplay constraints. Such games, by their very nature, do not structure or give rise to compulsive, repetitive, addictive relationships.

MOBAs achieve the opposite. Not destructive enough to really destroy most players’ lives, nor featuring real play—the actually anarchic play that challenges your perception of the world and the way it functions—MOBAs instead funnel energy, attention, time, and money toward the quest for more perfectly epic and entertaining wizard battles: a quest whose material result is a more perfectly disciplined capitalist subject. Is it any wonder Gamergate drew its recruits partly from these communities?

Playing video games for 40 compulsive, depressing, and exhausting hours a week is addiction, but going to work for 40 compulsive, depressing, and exhausting hours a week is having a job. Addiction is not defined by the way you feel; it is not about levels of compulsion or willpower. It is defined by what those feelings and compulsions do to your productivity. If people with thousands of hours of gameplay on League of Legends or Heroes of the Storm maintain relationships, work, or school, then they’re not “addicts”; they’re healthy individuals with an intense hobby.

Addiction is when the pleasures to which one becomes addicted no longer smooth out capitalist relations and social reproduction but disrupt the ability to work. It is not to deny the real suffering and considerable damage that addicts and addiction can wreak to see in addiction a social demand. Is addiction a potential beginning of resistance, rather than merely individual pathology?

Perhaps. But the ways in which video-game play reproduces neoliberal subjectivity and productivity make this political transmutation of addiction almost impossible to achieve through video games. The sensation of progress, achievement, and learning in games is both genuinely pleasurable and just effortful enough to satisfy that neoliberal itch toward constant productivity, at least as long as the game is booted up and the endorphins are still pinging: Afterward, guilt sends us back to work, chastised and full of self-reproach. Indeed, the DSM-5, hardly shy about classifying new mental disorders, found there was “insufficient evidence” to include gaming addiction.

We have entered a historical period where work in the Global North feels as meaningless as it ever has. Our work isn’t making the world any better—in fact, the world is dying of our productivity. The likely political horizons, as the nation-state loses its last shreds of sovereign power in the face of global capital, are merely different cultural organizations of the police state: Do you like your fascism theocratic or liberal-humanist? Video games reflect back and mimic our work’s pointlessness. If leisure is as pointless as work, then maybe work isn’t so pointless after all. And so I just keep playing. There’s rent to be paid, after all.

Such is the nature of this addiction that even as I critique it, I’m anticipating my next game, thinking through what heroes and strategies I want to try. A good session—where I play well, win a few, and don’t play so long that I enter a zombified state—will give me enough positive feeling to significantly improve my day. A bad one does the opposite. My daily affect has come to rely on my ability to wield a computerized wizard. At least it gets me out of bed.

A gaming addiction is perfect for the lazy workaholic, too resentful of authority to actually work hard for a boss. Trapped within myself, in this insufficient individual subjectivity, a fully engaging method of wasting time is the easiest way I can quiet the insistent internal reminders that productivity is the only virtue, which has been the main cop in my head for most of my adult life. What a trap: The things that best quiet the cop make him stronger. 


Cooking Class


Though food writing has been an elite delicacy for most of history, for a brief moment it became a middle-class staple

FOR much of history, food writing was done by the elite for the elite. This is clear from the beginning: Marcus Gavius Apicius, for example, was a Roman profligate known for the obscene amounts of money he lavished on his stomach. He also happened to compile the first cookbook (or, at least, he and a number of wealthy men bearing the same name did so over several centuries). Like his fellow Romans, he disliked actual kitchen work, leaving it to his slaves. But he loved to write about all things culinary.

With Apicius the mold was set. Even our more contemporary food writers were unusually privileged, if not as lavishly so. M.F.K. Fisher’s father owned newspapers. Elizabeth David was a debutante whose family had enriched themselves through land speculation and coal mining. Harold McGee studied at Yale with Harold Bloom.

Yet alongside those more privileged sorts were many writers from more unassuming backgrounds. Indeed, after five years of writing The Austerity Kitchen, my blog about alimentary culture and history, I find it hard to escape the conclusion that some of the best food writing (in the United States, at least) appeared right after the Second World War, when a robust economy coupled with increased social mobility enabled more people to contribute to the genre.

A look at the biographies of the genre’s more esteemed contributors reveals as much. Take Clementine Paddleford, for example. The daughter of a Kansas farmer, Paddleford graduated in 1921 with a degree in industrial journalism. She went on to edit a women’s farm journal before moving to New York. There she would flourish, becoming the food editor of the New York Herald Tribune, the newspaper that writer Mark Singer called “the best written and best edited and, except on lousy days, the most fun.” Paddleford wrote for other publications too, and to gather material for her work she flew a Piper Cub around the country to report on America’s regional cuisines. Along the way she transformed writing about food into legitimate journalism.

Many of Paddleford’s food-writing contemporaries came from similarly varied backgrounds. Though Calvin Trillin was a Yale grad, he was also a product of Kansas City public schools. Craig Claiborne used his G.I. Bill benefits to attend the École hôtelière de Lausanne in Switzerland. Waverley Root, a newspaperman from Providence, Rhode Island, leveraged his position as a foreign correspondent to report on Europe’s finest cuisines.

But the robust and vibrant food writing culture of the last seventy years or so has, at least so far, been the exception. A look at the state of food writing in the centuries that preceded it betrays as much. The figure of Apicius dominated for quite a long time. After the long dark age that followed the sacking of all those well-stocked Roman larders, food writing, like almost all literature, remained a genre of the privileged. Indeed, it appeared largely in the form of royal cookbooks that documented the pleasures of the rich.

The authors of these cookbooks were, unsurprisingly, rich, too. In fourteenth-century France, we see the flamboyant and wealthy Guillaume Tirel, otherwise known as Taillevent, compile The Victualler (Le viandier) to showcase the gustatory prowess of the first Valois kings at a time when their royal prerogative was crumbling. For his lavish descriptions of sauced lampreys and hare ragouts, he was generously rewarded. While the shopkeepers of Paris groaned under onerous taxes, Taillevent accumulated ever more wealth and property. Eventually he rose to the rank of squire, his coat of arms featuring three cooking pots.

Boasting about meals seemed the perfect way of displaying power. More cookbooks appeared, all celebrating the meals of the wealthy and powerful. In 1390 the unnamed master cooks of England’s King Richard II published The Forme of Cury. Like its model, The Victualler, it features detailed descriptions of lavish dishes—almond-and-saffron mush, creamed meat and fish—as well as dishes in the shape of castles and other fanciful designs. The range of ingredients alone is impressive. Many recipes assume the reader’s pantry is well-stocked with numerous herbs and vegetables, as well as pigeons, cranes, peacocks, cygnets, rails, snipes, gulls, teals, oxen, mutton, beef, kid, deer, pork, porpoise, haddock, rays, loach, gurnards, gudgeons, crabs, carp, and whelks. Of course, only a king and his wealthy lieges could afford such ingredients.

Slowly but surely, things began to change. In the burgeoning cities of Europe, a growing middle class fell captive to the allure of food writing, and we begin to see writers of a less aristocratic heritage contribute to the genre. Sometime between June 1392 and September 1394 an elderly and wealthy townsman wrote Le Ménagier de Paris, a compendium of recipes, essays on food, and writings on other domestic matters which he intended for his fifteen-year-old bride. Between its covers is found advice on how to run a household, keep a garden, cook tasty dishes, and sexually satisfy a husband—all the worldly concerns of an emergent middle class.

Bartolomeo Platina’s On Honorable Pleasure and Health (De honesta voluptate et valetudine) appeared in print about the same time as the Ménagier, and it also addresses a relatively wealthy, yet not necessarily aristocratic, audience of citizens interested in “good health and a clean life rather than debauchery.” On Honorable Pleasure and Health bears the distinction of being the first cookbook to elaborate principles of a recognizably modern gastronomy, emphasizing everything from the importance of clean tableware and spotless linen to installing attractive seasonal decorations. Yet as innovative as these contributions were, the book was nonetheless beholden to showcasing certain markers of privilege. One particular recipe, “Peacock Cooked So It Seems to Be Alive,” recalls the spectacular feasts of medieval monarchs. Slaughtered by “dashing its feathers into its brain from above,” the fowl is filled with spices, roasted, and covered “with its own skin, so that it seems to stand on its feet.” It is then gilded “with gold lead, for pleasure and magnificence.”

The mention of such elaborate dishes reminds us just how privileged these writers were compared to the rest of society, who, as historian H.S. Bennett has noted, lived on bread, ale or cider, and pottage (a type of porridge usually consisting of peas, beans, or whatever was on hand). The dishes described in cookbooks of the time were truly fantastic, surreal events, as possible to realize for most people as the feasts of the mythical land of Cockaigne.

Our first truly modern food writer came of age when the people, having grown tired of the malnutrition that comes with having to subsist on pottage, were told to eat cake. To Jean Anthelme Brillat-Savarin we owe credit for the birth of the gastronomic essay. Like his forebears in the genre, Brillat-Savarin enjoyed a cozy existence. Born in the town of Belley to a family of lawyers, he went on to study law, chemistry, and medicine in Dijon. After a stint practicing law in his hometown, he was sent in 1789 as a deputy to the Estates-General that soon became the National Constituent Assembly. There he became somewhat famous for a public speech he gave in defense of capital punishment. He inherited a vast fortune, assumed the mayoralty of Belley, and then fled France and its revolutionaries for the United States. He returned to France in 1797. In 1825, two months before his death, he published The Physiology of Taste.

After Brillat-Savarin, food writing continued to mature and grow more complex. Yet for all that, it remained the domain of the comfortably circumstanced, who had since grown in number. It became especially useful to the nineteenth-century American middle class. The work of food writers, many of them now forgotten, appeared in women’s magazines, offering American housewives advice on how best to serve a roast or bake a loaf of bread. Behind the cheerful, bantering prose remained a zeal for shoring up economic privilege. Women were told how to live up to a middle-class, republican ideal through preparing tasty, economical food for husbands and children. They were also told how to become better consumers of the many new appliances that had come to attend cooking. As more and more women began buying processed food, they looked to food journalism for this kind of advice. Indeed, as Elizabeth Fakazis writes, “the often symbiotic relationship between food writing, advertising, and the various food industries that continues to influence food journalism in the twenty-first century was established early on.”

As the 19th century turned to the 20th, food writing was able to disentangle itself from advertising long enough to establish itself as an important genre in its own right. The postwar economic boom allowed writers to build lucrative careers from researching and recording exciting culinary experiences. It was during this time of more broadly shared prosperity that those food writers of more humble backgrounds began to appear on the scene. But as the economic boom recedes further into memory, what do the next 30 or 40 years hold?

In food writing is reflected the sweep of Western history. From royal cookbooks to the wildly popular mass-produced series of the postwar period (think Time-Life’s Foods of the World), food writing has more or less been dependent on publishers whose brand identity and editorial style could be maintained only if food writers adopted a conservative tone. With the ascent of digital media, countless individuals began to contribute to a genre once dominated by a lucky few, introducing a wide variety of tones, voices, and sensibilities. There are now apparently more than 227 million food blogs worldwide, and many boast audiences larger than those of established print publications.

This new food writing is inherently destabilizing; it deterritorializes in a classically deleuzoguattarian sense, transcending ideas of nationality and culture. In a food blog—or any blog, for that matter—the global nature of the Internet pervades and informs the local act of writing. This engenders new territories of knowledge. The fluid nature of the medium invites collaboration via links to other blogs, and other sorts of spontaneous, lateral connection. The potential audience for every blog post is global a priori. Readers come from every walk of life, and a user’s paths to a blog are as unique as the user herself.

The ephemerality of food blogging invites experimentation. A food blog itself can be erased in a moment or simply abandoned, in the latter case becoming what the Japanese call ishikoro, a “pebble.” Or it can be contributed to for years, accumulating thousands of posts. An absence of constraint marks the platform, which encourages testing of new ideas. I look at my own bookmarks and see blogs on everything from living on wartime rations (the1940sexperiment.com) and offering a historic menu each day (theoldfoodie.com) to showcasing cross-sections of, well, hundreds of candy bars (scandybars.com). “Nothing is beautiful or loving or political,” said Deleuze, “aside from underground stems and aerial roots, adventitious growth and rhizomes.” The rhizomatic nature of food blogging ensures much of it can be beautiful, loving, and political.

I believe all these things to be positive developments, and I don’t believe we should seek to turn back the clock by reviving the decorous style of food writing past. My own blog owes much to its freedom from the constraints of print culture and to its amenability to images and citations from disparate sources. I cannot imagine how it could be translated to print. Yet its dependence on new media comes at a cost. If many of my Austerity Kitchen entries tend to focus on the 19th century, it must be because my consciousness to some extent has been shaped by the neoliberal moment, which, for all its future-forward pretense, simply marks a return to 19th-century economics. And so without a robust publishing industry (and few would argue that the consolidation of publishing houses and the death of print publications have been a good thing for writers), how do people who lack inherited wealth or similar financial means find the time and energy to make a meaningful contribution?

We need to find a way to make this new model of writing and publishing financially viable for writers without resurrecting the monolithic, exclusionary nature of old media. If we don’t, food writing will once again be a brag sheet about gustatory exploits, a genre in which the Apiciuses and King Richards of the world may crow about their lavish feasts. This would be a shame, because the genre holds much promise for experimentation and offers room for new voices. Something new has finally appeared on the menu. Let’s do what we can to make sure it becomes a signature dish.
