
Cooking Class


Though food writing has been an elite delicacy for most of history, for a brief moment it became a middle-class staple

FOR much of history, food writing was done by the elite for the elite. This is clear from the beginning: Marcus Gavius Apicius, for example, was a Roman profligate known for the obscene amounts of money he lavished on his stomach. He also compiled the first cookbook (or, at least, he and a number of wealthy men bearing the same name did so over several centuries). Like his fellow Romans, he disliked actual kitchen work, leaving it to his slaves. But he loved to write about all things culinary.

With Apicius the mold was set. Even our more contemporary food writers were unusually privileged, if not as lavishly so. M.F.K. Fisher’s father owned newspapers. Elizabeth David was a debutante whose family had enriched themselves through land speculation and coal mining. Harold McGee studied at Yale with Harold Bloom.

Yet alongside those more privileged sorts were many writers from more unassuming backgrounds. Indeed, after five years of writing The Austerity Kitchen, my blog about alimentary culture and history, it’s hard to escape the conclusion that some of the best food writing (in the United States, at least) appeared right after the Second World War, when a robust economy coupled with increased social mobility enabled more people to contribute to the genre.

A look at the biographies of the genre’s more esteemed contributors reveals as much. Take Clementine Paddleford, for example. The daughter of a Kansas farmer, Paddleford graduated in 1921 with a degree in industrial journalism. She went on to edit a women’s farm journal before moving to New York. There she would flourish, becoming the food editor of the New York Herald Tribune, the newspaper that writer Mark Singer called “the best written and best edited and, except on lousy days, the most fun.” Paddleford wrote for other publications too, and to gather material for her work she flew a Piper Cub around the country to report on America’s regional cuisines. Along the way she transformed writing about food into legitimate journalism.

Many of Paddleford’s food-writing contemporaries were just as varied. Though Calvin Trillin was a Yale grad, he was also a product of Kansas City public schools. Craig Claiborne used his G.I. Bill benefits to attend the École hôtelière de Lausanne in Switzerland. Waverley Root, a newspaperman from Providence, Rhode Island, leveraged his position as a foreign correspondent to report on Europe’s finest cuisines.

But the robust and vibrant food writing culture of the last seventy years or so has, at least so far, been the exception. A look at the state of food writing in the centuries that preceded it betrays as much. The figure of Apicius dominated for quite a long time. After the long dark age that followed the sacking of all those well-stocked Roman larders, food writing, like almost all literature, remained a genre of the privileged. Indeed, it appeared largely in the form of royal cookbooks that documented the pleasures of the rich.

The authors of these cookbooks were, unsurprisingly, rich, too. In fourteenth-century France, we see the flamboyant and wealthy Guillaume Tirel, otherwise known as Taillevent, compile The Victualler (Le viandier) to showcase the gustatory prowess of the first Valois kings at a time when their royal prerogative was crumbling. For his lavish descriptions of sauced lampreys and hare ragouts he was generously rewarded. While the shopkeepers of Paris groaned under onerous taxes, Taillevent accumulated ever more wealth and property. Eventually he rose to the rank of squire, his coat of arms featuring three cooking pots.

Boasting about meals seemed the perfect way of displaying power. More cookbooks appeared, all celebrating the meals of the wealthy and powerful. In 1390 the unnamed master cooks of England’s King Richard II published The Forme of Cury. Like its model, The Victualler, it features detailed descriptions of lavish dishes—almond-and-saffron mush, creamed meat and fish—as well as dishes in the shape of castles and other fanciful designs. The range of ingredients alone is impressive. Many recipes assume the reader’s pantry is well-stocked with numerous herbs and vegetables, as well as pigeons, cranes, peacocks, cygnets, rails, snipes, gulls, teals, oxen, mutton, beef, kid, deer, pork, porpoise, haddock, rays, loach, gurnards, gudgeons, crabs, carp, and whelks. Of course, only a king and his wealthy lieges could afford such ingredients.

Slowly but surely, things began to change. In the burgeoning cities of Europe, a growing middle class fell captive to the allure of food writing, and we begin to see writers of a less aristocratic heritage contribute to the genre. Sometime between June 1392 and September 1394 an elderly and wealthy townsman wrote Le Ménagier de Paris, a compendium of recipes, essays on food, and writings on other domestic matters which he intended for his fifteen-year-old bride. Between its covers is found advice on how to run a household, keep a garden, cook tasty dishes, and sexually satisfy a husband—all the worldly concerns of an emergent middle class.

Bartolomeo Platina’s On Honorable Pleasure and Health (De honesta voluptate et valetudine) appeared in print about the same time as the Ménagier, and it also addresses a relatively wealthy, yet not necessarily aristocratic, audience of citizens interested in “good health and a clean life rather than debauchery.” On Honorable Pleasure and Health bears the distinction of being the first cookbook to elaborate principles of a recognizably modern gastronomy, emphasizing everything from the importance of clean tableware and spotless linen to installing attractive seasonal decorations. Yet as innovative as these contributions were, the book was nonetheless beholden to showcasing certain markers of privilege. One particular recipe, “Peacock Cooked So It Seems to Be Alive,” recalls the spectacular feasts of medieval monarchs. Slaughtered by “dashing its feathers into its brain from above,” the fowl is filled with spices, roasted, and covered “with its own skin, so that it seems to stand on its feet.” It is then gilded “with gold leaf, for pleasure and magnificence.”

The mention of such elaborate dishes reminds us just how privileged these writers were compared to the rest of society, who, as historian H.S. Bennett has noted, lived on bread, ale or cider, and pottage (a type of porridge usually consisting of peas, beans, or whatever was on hand). The dishes described in cookbooks of the time were truly fantastic, surreal events, as possible to realize for most people as the feasts of the mythical land of Cockaigne.

Our first truly modern food writer came of age when the people, having grown tired of the malnutrition that comes with having to subsist on pottage, were told to eat cake. To Jean Anthelme Brillat-Savarin we owe credit for the birth of the gastronomic essay. Like his forebears in the genre, Brillat-Savarin enjoyed a cozy existence. Born in the town of Belley to a family of lawyers, he went on to study law, chemistry, and medicine in Dijon. After a stint practicing law in his hometown he was sent in 1789 as a deputy to the Estates-General that soon became the National Constituent Assembly. There he became somewhat famous for a public speech he gave in defense of capital punishment. He inherited a vast fortune, assumed the mayoralty of Belley, and then fled France and its revolutionaries for the United States. He returned to France in 1797. He published The Physiology of Taste in December 1825, two months before his death.

After Brillat-Savarin food writing continued to mature and grow more complex. Yet for all that it remained the domain of the comfortably circumstanced, who had since grown in number. It became especially useful to the nineteenth-century American middle class. The work of food writers, many of them now forgotten, appeared in women’s magazines, offering American housewives advice on how best to serve a roast or bake a loaf of bread. Behind the cheerful, bantering prose remained a zeal for shoring up economic privilege. Women were told how to live up to a middle-class, republican ideal through preparing tasty, economical food for husbands and children. They were also told how to become better consumers of the many new appliances that had come to attend cooking. As more and more women began buying processed food, they looked to food journalism for this kind of advice. Indeed, as Elizabeth Fakazis writes, “the often symbiotic relationship between food writing, advertising, and the various food industries that continues to influence food journalism in the twenty-first century was established early on.”

As the 19th century turned to the 20th, food writing was able to disentangle itself from advertising long enough to establish itself as an important genre in its own right. The postwar economic boom allowed writers to build lucrative careers from researching and recording exciting culinary experiences. It was during this time of more broadly shared prosperity that those food writers of more humble backgrounds began to appear on the scene. But as the economic boom recedes further into memory, what do the next 30 or 40 years hold?

In food writing is reflected the sweep of Western history. From royal cookbooks to the wildly popular mass-produced series of the postwar period (think Time-Life’s Foods of the World), food writing has more or less been dependent on publishers whose brand identity and editorial style required for their maintenance that food writers adopt a conservative tone. With the ascent of digital media countless individuals began to contribute to a genre once dominated by a lucky few, introducing a wide variety of tones, voices, and sensibilities. There are now apparently more than 227 million food blogs worldwide, and many boast audiences larger than those of established print publications.

This new food writing is inherently destabilizing; it deterritorializes in a classically deleuzoguattarian sense, transcending ideas of nationality and culture. In a food blog—or any blog, for that matter—the global nature of the Internet pervades and informs the local act of writing. This engenders new territories of knowledge. The fluid nature of the medium invites collaboration via links to other blogs, and other sorts of spontaneous, lateral connection. The potential audience for every blog post is global a priori. Readers come from every walk of life, and a user’s paths to a blog are as unique as the user herself.

The ephemerality of food blogging invites experimentation. A food blog itself can be erased in a moment or simply abandoned, in the latter case becoming what the Japanese call ishikoro, a “pebble.” Or it can be contributed to for years, accumulating thousands of posts. An absence of constraint marks the platform, which encourages testing of new ideas. I look at my own bookmarks and see blogs on everything from living on wartime rations, to offering a historic menu each day, to showcasing cross-sections of, well, hundreds of candy bars. “Nothing is beautiful or loving or political,” said Deleuze, “aside from underground stems and aerial roots, adventitious growth and rhizomes.” The rhizomatic nature of food blogging ensures much of it can be beautiful, loving, and political.

I believe all these things to be positive developments, and I don’t believe we should seek to turn back the clock by reviving the decorous style of food writing past. My own blog owes much to its freedom from the constraints of print culture, to its amenability to images and citations from disparate sources. I cannot imagine how it could be translated to print. Yet its dependence on new media comes at a cost. If many of my Austerity Kitchen entries tend to focus on the 19th century, it must be because my consciousness has to some extent been shaped by the neoliberal moment, which, for all its future-forward pretense, simply marks a return to 19th-century economics. And so without a robust publishing industry (and few would argue that the consolidation of publishing houses and the death of print publications have been good for writers), how do people who lack inherited wealth or similar financial means find the time and energy to make a meaningful contribution?

We need to find a way to make this new model of writing and publishing financially viable for writers without resurrecting the monolithic, exclusionary nature of old media. If we don’t, food writing will once again become a brag sheet about gustatory exploits, a genre in which the Apiciuses and King Richards of the world may crow about their lavish feasts. This would be a shame, because the genre holds much promise for experimentation and offers room for new voices. Something new has finally appeared on the menu. Let’s do what we can to make sure it becomes a signature dish.


The Birth of a Beauty Criticism


Appearance is no longer just a topic for fashion ads and how-to guides

WHEN I first started my blog The Beheld, I spent a good deal of time clicking refresh on one particular website. It was Beauty Schooled (now in archive form), a blog by journalist Virginia Sole-Smith, who was chronicling her experience as a student at “Beauty U,” a cosmetology school where she and fellow enrollees spent 600 hours learning the ins and outs of the beauty trade. Sole-Smith never intended to become a practicing aesthetician; her goal was to understand the beauty industry as an insider and then couple that with her reporting skills and feminist sensibility to fill in readers on what she terms “the human cost of beauty.” She also scooped the New York Times on nail salon labor issues by eight years.

The blog was insightful, funny, and thoroughly engaging. It was also, at the time, the only ongoing body of beauty criticism I could find. There were individual essays and reported articles critiquing beauty culture and, of course, books on the matter. But this was 2011, years into the blog explosion, yet I couldn’t find anyone besides Sole-Smith who was primarily examining this thing that took up so much of my own mental real estate. There were straightforward beauty bloggers and vloggers aplenty—women (they were nearly all women) sharing product reviews, tutorials, and “haul videos.” There were also bloggers devoted to critiquing specific angles of beauty: beauty chemists demystifying the science of skin care, or natural beauty bloggers commenting on nontoxic products, each of which lent itself to a roundabout critique of beauty, but with built-in limitations.

As far as I could tell, though, it ended there. And so, after I’d pack up my writing and scholarly reading for the day but still crave a fix, there I’d sit at Beauty Schooled, clicking refresh, just hoping that the only other person out there I could find who was devoted to looking at beauty with an eye not unlike mine—critical of beauty culture but not immune to reveling in it, feminist yet neither dismissing beauty as a patriarchal construct nor embracing it as a pop-feminist act of “I choose my choice!”—had posted something new in the past thirty seconds. Click, refresh, please.


Manual Override


The history of sabotage is the history of capitalism unmaking itself

And if linesmen make connections,
can’t you make dis-connections?

—Guy Bowman to telephone company workers,
The Syndicalist, 1913

IN the fall of 1987, at Ohio’s Wright-Patterson Air Force Base, Captain Howard L. Douthit III submitted his master’s thesis to the School of Systems and Logistics at the Air Force Institute of Technology. “The Use and Effectiveness of Sabotage as a Means of Unconventional Warfare—An Historical Perspective From World War I Through Viet Nam” is a generally pedestrian work, the kind to expect from an officer who peppers his acknowledgments page with no fewer than four Bible passages. For Douthit’s project, sabotage adheres to a narrow and strictly martial definition: “clandestine act(s) of a person(s) to destroy, or render inoperative, enemy combat equipment, support equipment, facilities, and/or utilities, to include human and natural resources, used to support aggression while not being actively used in an aggressive manner at the time of the act.” His eventual conclusion is that, in the final instance, “history supported the thesis that sabotage is an effective means of warfare.”

All the same, the study can surprise, throwing off whiffs of something stranger, especially in its passing accounts of startling technical inversions, like the Polish anti-Nazi partisans who converted fire extinguishers into flamethrowers. And toward the end of the book, while dryly enumerating his “lessons learned,” Douthit inadvertently stumbles onto one of the crucial logics of sabotage, far beyond the dynamited train lines and sharpened bamboo that occupy most of the text:

5. History does not point to an effective countermeasure to sabotage.

The intended meaning is plain enough: the history of human warfare is largely one of the success of fighting dirty, of how regiments, special forces, civilians, guerrillas, and insurgents simply can’t be stopped when they ignore the parameters that might delineate war from daily life. Still, there’s another sense to this “lesson.” It lies in the word countermeasure, which has a precise meaning in military operations: the measures deployed to break the bond between a weapon and its target, either actively (interfering with the capacity of the weapon to identify or reach its target, like dropping a metallic cloud of confetti that the missile mistakes for a jet) or passively (making the target hard to identify). In short, the target dissimulates without fleeing or vanishing. Ground slips into figure, surface into threat, and the grid goes dark. An octopus disappears “into the night, but it is a night which it can itself secrete.”

We might also see camouflage in these terms, especially in its dazzle forms, where something doesn’t pretend to be absent from the scene but instead becomes a stain in the visual field, a floating migraine that can never get properly locked onto. As Jared Sexton and Steve Martinot put it, in the context of how media spectacles of police violence occlude the “banality of police murder as standard operating procedure,”

Spectacle is a form of camouflage. It does not conceal anything; it simply renders it unrecognisable. One looks at it and does not see it. … Camouflage is a relationship between the one dissimulating their appearance and the one who is fooled, who looks and cannot see.

The key point is that dissimulating countermeasures don’t conceal from perception in general. They only make something invisible to the structure of recognition operative for what tries to locate and abolish the target. That “banality of police murder,” for instance, is not unrecognizable as such. It is amply known and lived. It is unrecognizable for a liberal mode of recognition, however nominally progressive, that can only see anti-black violence as an exceptional event, rather than a constitutive ground of American society.

Countermeasures are the inside of seeing, markers of and deviations within the fact that sight is already targeting, even before it pairs with a weapon. In San Diego, I met a painter who told me about his father, who was colorblind. During the American war on Vietnam, this supposed limitation was seized on by the U.S. military because it meant he could see through camouflage, the majority of which is designed to deceive through chromatic similarities. He could not register these, and so the undergirding patterns—a right angle, a straight line, everything that betrays the industrial—became evident. He spent his war days leaning out the helicopter’s side, staring down at the jungle and forest as it rushed below, codebreaking sight. The image has obsessed me since: the soldier who has been transformed into a pattern recognition machine, a countermeasure launched against a countermeasure. It is a fundamental image of sabotage, not as a rebellious choice or destructive act, but as a line of contested negotiation between the technical and the human, a negotiation whose stakes couldn’t be higher.


ONE of the reasons why “history does not point to an effective countermeasure to sabotage” is because the history of sabotage is itself a history of countermeasure. Sabotage weaves a minor and inconstant arc through the surveillance, management, and design of human activity and its inhuman sites and interfaces. Counter to Douthit’s specifically martial sense, we might consider sabotage, at the most abstract level, as the deployment of a technique, or activation of a capacity, at odds with the apparatus, system, or order within which it is situated and for which it was developed. Incompatible with a model of cleanly delineated means and ends, sabotage takes procedures as always in potential excess to plans—that is, to structures that, first, articulate the link between a projected possibility and what actually gets produced and, second, establish conditions for what will be visible, how it will count, and what will support it. Less abstractly, sabotage also means putting vinegar on the loom, doubt in the smile, glass in the motor, milk in the bearings, shit on the spikes, sand in the soup, and worms in the code. Being too thorough and too careless, tightening just a hair too much and too little, having seriously, oh my God, no idea how this could have happened—and having no one able to prove it otherwise.

And yet sabotage is more than all these instances, which overly stress a kind of volition, an active principle that puts the focus on a saboteur. Because what distinguishes sabotage above all isn’t any sense or principle of deviance, especially given that such operations have no inherent “politics,” being available across the political spectrum and to companies and corporations themselves. Rather, it’s what Elizabeth Gurley Flynn, the IWW organizer and most interesting theorist of sabotage to date, called in 1916 the “fine thread of deviation”: the impossibly small difference between exceptional failures and business as usual, connected by the fact that the very same properties and tendencies enable either outcome. If we are to think of sabotage as a process that negates productivity, it’s a negation that can’t be disentangled from the structures of productivity itself.

So what I mean by sabotage differs greatly from Douthit’s sense. His is best understood as the result of a quite particular transformation of the concept over the early 20th century, furthered by left-wing parties’ attempts to cancel its spread and by states’ efforts to jail its advocates. That crackdown was itself an attempt to shift sabotage away from working within that contested territory of negation which never steps out into the open. Driven by largely successful attempts to criminalize and denounce it, the meaning of sabotage moved instead toward marking the literal and material destruction, especially of machinery, goods, and infrastructure, of what otherwise works fine.

As for its earlier form, it remains constant, in at least two senses. First, acts of what can be clearly labelled sabotage, like Foxconn interns reducing PS4s to very expensive flashlights, still happen as recurrently as forms of counterproductivity often excised from consideration as “political,” from informal birth strikes to the slave in Natchez, Mississippi, who, in 1856, steered a carriage off the road on the way to a wedding and “accidentally” injured the slaveholders inside.

Second, the official demonization of sabotage, pushed across the board from Supreme Courts to Communist Party leaders to cops to unions, has been present from the start, making it arguably one of the century’s most disavowed political concepts. It has been decried as sneaky, unfair, individualist, unproductive, wasteful, and chaotic, a cowardly shadow of collectivity. But there’s been no greater attack on it than from the political organizations that nominally represented those who often carried out acts of sabotage, with parties and groups of a socialist bent most consistently finding the act both morally and strategically abhorrent.

For them, sabotage was either a weapon that “belongs in the arsenal of anarchism” (James P. Cannon) or some version of drunken sailor or broken sextant that would help run “the Labour Movement to disaster on the rocks of Anarchism” (George Harvey). For others, it was “born of the want of sound knowledge and strong organization.” It is technophobic and retrograde, “a reactionary vestige of the ancien régime which society should abolish” (Georges Sorel). It is indefensible: “There are workers we’ll never defend: those who smash machines or cars they manufacture” (Official French CP leader statement, 1970). Even the original name of the Cheka—the Bolshevik “Emergency Committee”/state security apparatus—was the unwieldy “All-Russian Emergency Commission for Combating Counter-Revolution and Sabotage.” In other words, barely a year after Flynn posed sabotage as a crucial component of working-class struggle, this form of dissent was so disavowed as to become the named partner of counter-revolution in derailing state socialism.

None of this is incidental. Sabotage contravenes some of the fundamental suppositions that underpin what has been meant by political, across a wide spectrum. In particular, it cuts against a base insistence on being present. According to those lines of thought, sabotage’s unrepresentable modes of shadowy, deferred, and distributed agency could only ever have been cheating, a petty turbulence with no strategic end. Yet even in its denunciation, sabotage constitutes a key lens onto the last two centuries, revealing the tight metapolitical strictures—i.e., what was allowed to even count as political in the first place—that underwrote even allegedly radical currents and their complicity with long waves of colonization, accumulation, and management. And it’s no accident that the term emerges into the record in late 19th-century France, because sabotage doesn’t designate something that humans have done all along, even if forms of invisible resistance have. What sabotage names is specific and internal to capitalism as a lived historical form, able neither to be cheered nor expunged.

• • •

I have not given you a rigidly defined thesis on sabotage because sabotage is in the process of making.

—Elizabeth Gurley Flynn

AS for the term itself, most accounts make two things clear. First, it was not invented by anyone in particular, instead already circulating as slang; and second, it was derived from the sabot, the wooden clog carved from a single piece of wood, which leads to the most widely held origin story for the word: that of throwing the shoe in the gears. But that origin is false. A hyper-visible refusal of work and image of destruction is far closer to frame-breaking in the early nineteenth century than what slowly became known as sabotage. The more relevant sense of the word’s emergence seems to be that noted by Emile Pouget, the most vocal early advocate of the tactic, who claims that:

Up to fifteen years ago the term sabotage was nothing but a slang word, not meaning “to make wooden shoes” as it may be imagined but in a figurative way. To work clumsily as if by sabot blows.

The situation thickens, too, when we join this association of trollishly pounding away with a clog-hammer to those who actually wore the sabots in such settings, the recently proletarianized farm workers who didn’t have leather shoes—the marker of having transitioned to urban living and relative security—and hence clattered around in sabots as they badly performed work to which they weren’t yet accustomed. Across the various definitions, it is this sense that becomes pivotal. The noun names an act and a process, the point of which is to work badly and, above all, to not be fully subsumed to the process of labor—even as so doing invites unexpected collusion and comparison with the degraded products of such work.

One of the key departure points for advocating sabotage, rather than just doing it, came from Glasgow dockworkers in 1889. On the back of a sailors’ and firemen’s union strike, the recently formed National Union of Dock Labourers joined in the strike in June, including the Glaswegian workers managing the significant port. Immediately, we can detect a fracture within the traditional image of sabotage as a form of unskilled industrial destruction, as this primary instance for how people theorized it is not easily replaceable work on the assembly line. It was based instead in a node of circulation, a port that formed a chokepoint in the transition between production, distribution, and consumption. Moreover, it was specifically skilled work. This became amply clear when, to break the strike, the dockworkers were replaced by scabs from around the United Kingdom with no prior history of dockwork, pulled especially from farms—the saboted, one could say. As expected, the scabs worked badly, dropping crates, breaking wine barrels—so badly, in fact, that one fell into the sea while wheeling cargo across a plank and drowned.

Upon losing the strike, the workers were not only told to get back to it but also snarkily “advised in the organs of the shipowners ‘to take a few lessons in political economy…’ ” They responded quite literally, writing a remarkable text framing their decision to sabotage:

Having mastered all the mysteries of the doctrine of value and the distinction between “value” and “price,” we were made familiar with the multitudinous forms of orthodox adulteration from jerry buildings and coffin ships to watered milk and shoddy clothes. With only one exception we found the all-prevailing practice to be this, that the “QUALITY” of each commodity, whether it be a dwelling-house, a suit of clothes, or a Sunday’s dinner, is regulated according to the price which the purchaser is willing to pay — the one exception being labour.

And so, rather than asserting that labor deserves to be special, celebrated or honored, they decide instead that, “there is no escape [from this structure] except to adopt the situation and apply to it the commonsense commercial rule which provides a commodity in accordance with the price.” That is, to offer a commodity—labor—whose quality has been adjusted to fit its price, its productivity adulterated with feigned clumsiness, the work carefully degraded into an image of un-skill.

The turn is striking, in the context of the overriding tendencies of late nineteenth-century labor movements particularly and the grounding frameworks of class struggle politics more generally. Because in both regards, one of the near constants was to avoid denouncing work itself, especially skilled waged work around which one could organize by trade and collectively bargain. For the tradition of the labor movement, the capacity to work was to be valorized as an expression of that “form-giving fire” (Marx), one’s distinctly human ability to transform the world and hence be more than watered milk or coffin ship, more than just one input amongst others. It’s unsurprising, then, that when Jean Jaurès denounces sabotage in 1907, it is because “sabotage is repugnant to the nature and tendencies of the working class. Sabotage is loathsome to the technical skill of the worker, the skill which represents his real wealth.” We should note that this discourse equally defines itself through its nominal outside. From the late nineteenth to early twentieth century, the very decades of sabotage’s formation, proponents of criminal anthropology, such as Cesare Lombroso, both naturalize incarceration of “unproductive” members of society and advance racist theories of civilizational development whereby colonized societies are defined by their “horror laboris” (horror of labor), a pseudo-scientific term coined to describe an indolent and agitated energy manifest in those who lack the willingness to put it to properly capitalist ends.

The Glasgow action threatens to upend this image of work as one’s innate quality and source of pride by taking the adulterated as its point of reference. To protect your status as a skilled worker, you must act like nothing separates that work, and hence you, from any other shit commodity made to participate in the circulation of capital. You must act the scab, take no pride, and pretend to be functionally incompetent. Both a tactic of dissimulation and a deeply anti-authentic conception, it suggests a break with the triple bond insisted on by the majority of labor movements in that time and to come: the fundamental importance of labor; a useful civic and family life; and the willingness to appear as such (both proud of work and defined by that work), willing to stand up and be counted.

This dissimulation of a work-centered identity forms one key strand of what, in the next century, made sabotage so unacceptable to political organizations whose members nevertheless kept “accidentally” blowing it. Another equally crucial strand can be seen in their political economy “lesson” itself, which moves from reading scab labor in terms of a fairly valued commodity (good pay, good work; bad pay, bad work, or, “you cutta da pay, we cutta da shob,” or, as Gurley Flynn puts it, “an unfair day’s work for an unfair day’s wage”) to start identifying with those adulterated and fraudulent commodities themselves: “If employers of labour or purchasers of goods refuse to pay for the genuine article they must be content with veneer and shoddy.” It’s a key turn, in that it opens a possibility both of beginning from an experience of work that most people have—a hell of degradation, boredom, and coercion, with greater and lesser degrees of explicit violence backing this—and of shifting an understanding of capitalism away from the centrality of the wage, and officially waged sites of production, to an interchange amongst a coordinated yet often incoherent circuit marked by failure, waste, fragility, and breakdown.

The Wobbly rhetoric that picks up on these European threads continues this line of thought in the U.S. Consider the “Jersey Justice” pamphlet from 1913 on the Paterson silk mill strike:

Every worker who is a cog in the great modern machine of mill, factory, mine, workshop or railroad knows from his daily experience just what all this means. Any worker knows that the entire factory can be thrown into confusion at any minute if even one of the necessary cogs is thrown out of gear.

So if the waged worker deserves pride of place in this schema, it isn’t because they are unique, a human vitally yearning to be delivered from the machinic confines. No, it’s because as a “cog,” and hence totally subsumed into this “great modern machine,” they have a cog’s-eye view of the process from within. They know how to throw it into confusion because they know which other cogs are necessary, which are most subject to amplifying their failure without being immediately detected. And as the history of sabotage shows, from care work to cooking to data entry to plumbing, this isn’t knowledge abstracted from its site. It’s the mark of an intimate and highly practical understanding of a system and its abstractions, the awareness that comes, often literally, from handling and grasping, cleaning and traversing, and having to attend to all the small errors, frictions, lags, and glitches in a system envisioned to function smoothly, if not automatically.

The possible consequences of this kind of comparison are significant, even as they get continually shoved to the side in favor of a politics based around a model that joins the military (open engagement), the civic (public representation), and the theatrical (experience delineated into those who act and those who watch). In particular, such comparisons suggest that the total process of capitalist rationalization—in transportation, reproduction, manufacturing, war, service, friendship, and lived spaces alike—does more than seek to neutralize the dissent, frustration, and rage of those whose lives are transformed by it. It also opens up capacities for explosive disruption through an unprecedented interchange between those “cogs,” newly situated in circulatory networks that streamlined the translation between money and commodities yet also introduced a unique fragility available for exploit, a “tight coupling” (Charles Perrow) by which failure at one point precipitates failure elsewhere, whether within a single school or across a rail network. A conspiracy touched off by human hands, thanks to a minor inflection or willful error, yet carried out in full by a chain of subsequent mishaps, blow-outs, and spills, becomes newly possible.

Possible, but soon to be legally punishable: from French anti-strike law in 1912 to the 1919 Criminal Syndicalism Law of California, the 1910s would see sabotage enter written law under the guise of purely physical damage to the already made and owned. In so doing, a fundamental disruption of time (i.e. the arrangement of materials, living persons definitely included, toward a production of time as value-producing) was collapsed into a clear destruction of property. And that collapse was itself to quite literally become law by the mid-’20s, when the Supreme Court ruled on the case of William Burns, an IWW member arrested in 1923. Burns was convicted on the basis of having Wobbly propaganda on him and urging the disruption of logging in Yosemite, not by destroying anything per se but by the “loading of a ship in such a way that it took a list to port or starboard and therefore had to limp back to port.” In his first trial, the judge ruled that while the statute “denounces sabotage as meaning willful and malicious physical damage or injury to physical property,” “I instruct you that under the definition as laid down by the Legislature of California that any deliberate attempt to reduce the profits in the manner that I have described would constitute sabotage.” The ruling was upheld by the Supreme Court in 1927. In other words, even as the case openly admits a gap between causing a slowdown and causing material damage, it treats any attempt to interfere with the rate of profit as itself a kind of material damage. The substance of things is constructed as identical to their potential, any particularity or future variance flattened in the service of total fungibility, efficiency, and assurance of unimpeded flow. What does not circulate is and will be criminal.

• • •

a clean factory is not receptive to fire
but a dirty one
make sure it starts to burn only after
you have walked away
—Ida Börjel, Miximum Ca’ Canny The Sabotage Manuals you cutta da pay, we cutta da shob

IT is the uncertainty between something that is and something that might be that most comes to shape the future uses of sabotage. For the most part, these later uses take up sabotage as a loose synonym for unsanctioned and undeclared destruction of productivity (however defined). More precisely, sabotage comes to mark an understanding that disruption of everyday, “neutral” processes should be treated as a form of violence, and that sabotage is a transposable mode of social violence which advances itself by targeting just those processes. This future of sabotage from the mid-’20s onward is the path for another essay, but a brief sketch gives a sense of that longer work.

The main application of the idea from roughly the 1920s to the 1960s, a sense it still carries today, was martial, both within decolonization struggles and interstate wars, where it comes to name covert attacks on transport, communication, supply lines, and other infrastructural elements that contribute to the occupation or war effort of a state or army, even if “not used in an aggressive way” (Douthit) at the time of attack. But even this farcical attempt to imagine war as open, declared, and putatively symmetrical combat, to which sabotage would be the unfair exception of attacking without being seen or targeting a territory’s non-“aggressive” systems, ultimately serves to suggest the opposite: that no clean line of distinction can be drawn between war and capitalism in the first place.

Toward the end of World War II, when victory began to look more likely, French partisans confronted a dilemma. If they did not sabotage the roads, wires, tunnels, transmitters, and so on that made up the French transport and communication networks, those could still be used by the Nazis and the Vichy government. If they did sabotage them, they would damage the capacity of French industry, not to mention daily life as usual, to recover after the war. That polyvalence returns with a vengeance because the materials are the same, both part of normal state functioning and exceptional elements used to sustain a war effort. And so, especially in the contexts of decolonization, martial sabotage reveals how everything is potentially, if not functionally, in the service of a ruling power, whether embodied or abstract. Everything that is functional is complicit, and we can’t separate landscape from “threatscape,” the term given in the wake of military affairs’ infrastructural turn to designate the extension of theaters of war to include all elements of the built world. In that regard, the literal weaponization of the landscape, like diverting heavy rains to wash away a supply road, is only the most visible limit of an overall blurring that erases any clear division between the technical, the social, and the openly hostile, a situation wherein effects come undone and cannot be traced back to any one source, let alone one side.

A second zone where sabotage becomes a key concept, starting around the sixties, is the increasing importance of human resource management (itself marking a shift away from “personnel management”). Part of the importance of sabotage in that discourse unsurprisingly surrounds concerns over efficiency and boredom, given that “changing the time on the punch clock, or pulling the fire alarm may add just the right level of excitement to an otherwise boring day.” (It sure doesn’t hurt.) Yet it also marks a long creeping awareness, one that became a crucial counter-story to celebrated cybernetic developments, that automation is never actually automatic, even as one of the elements it tries to reduce is precisely the prospect of sabotage. Automation requires that, even as one is deskilled and reduced to a mere executant of a plan, one violate that plan in order to preserve the illusion of its adequacy, requiring that cog’s-eye view that lets complex systems be repaired on the fly while also leaving them open to silent disruption. This dynamic is itself paired with a project of human resources departments to treat productivity at its alleged source, that of a complete person who supposedly, if made to feel welcome as a “team member”/associate/partner/“sandwich artist,” would be less likely to fill the copier with honey.

In extending productivity measures to the person as a whole, whereby the self becomes a site of work not only for the labor of self-reproduction but also a project and product to be optimized, biometrically tuned, and circulated as image, the idea of sabotage receives its final twist: that of “self-sabotage,” a buzzword stalking the blasted earth of self-help rhetoric. As in, “3 Steps to Stop Sabotaging Yourself”: “Do you have a talent for self-sabotage? (Sure, you’re on a diet, but another doughnut won’t kill you, right?)” Or: “Why self-sabotage could be ruining your career.” This reflects more than the well-known shift of value production away from a clearly delineated working day. It also suggests that the slow dissemination of sabotage, as a concept, has itself tracked along shifts in the organization not only of capitalism itself but also of its self-narratives, roaming out from industrial waged work as central source of productivity to military contestations over access to territory and energy resources to corporate and office culture to the global subject of flexible accumulation.

In each site, sabotage helps identify both their specific failures—as well as possibilities for disruption—and their fundamental incompleteness as a frame for describing what really happens and how people navigate it, all the way out from that story of pride in one’s work to an image of war as a clarity of division to the prospect of the self as frictionless gyre of value. Sabotage helps to keep marking the incompleteness of those narratives, because even if capital works in large part by opening up a functional analogy between discrete things that allows for the potential exchange of all with all, people get caught in its crossfire at profoundly different levels of abjection, levels that unsurprisingly have so much to do with both constructions of race and gender and long geopolitical histories.

It’s not surprising in this regard that while one side of “self-sabotage” fits easily with hot yoga and motivation seminars for maximizing the entrepreneurial spirit within, the other is linked both to bodily shame (“another doughnut”) and to racial subjection, especially visible in American anti-blackness and its emphasis on counterproductivity. For instance, when a narrative tries to cast rates of incarceration, “achievement” levels, and poverty as a problem of “victimology” that can be worked on as a project, it becomes framed as a problem of “Self-Sabotage in Black America,” as in the title of John McWhorter’s conservative screed.

This is hardly new. Sabotage has always marked that indistinct line between refusal and degradation, between forms of what gets hailed as politics and what stays outside the limits of visibility that allow that designation. In many ways, we could invert a story of sabotage, taking the cue of its shift away from the factory to locate it equally in forms of veiled countermeasure and attention that responded to fundamentally different situations. One might begin, for instance, with what Simone Browne calls the “dark sousveillance” of resistance to slavery, which “speaks not only to observing those in authority (the slave patroller or the plantation overseer, for instance) but also to the use of a keen and experiential insight of plantation surveillance in order to resist it.”

Still, in order to grasp how sabotage has designated a tendency always in excess of the sites and identities of waged work, and hence could only be vehemently policed, it’s worth seeing how it threatened to undermine them from the start. For this, there is still no text sharper than Elizabeth Gurley Flynn’s Sabotage: The Conscious Withdrawal of the Workers’ Industrial Efficiency, written in 1916 and centering on the case of Frederick Sumner Boyd, who had advocated sabotage during the 1913 Paterson, NJ silk mill strike.

Like many of the texts of these years, part of its force lies in showing how sabotage is not just present in but constitutive of capitalism, an “internal, industrial process” that becomes visible in acts of “capitalist sabotage,” like letting vegetables rot in order to drive prices down. In that regard, “working-class sabotage” differs in that it “is distinctly social [and] aimed at the benefit of the many”—like oversalting already poisonous soup to spare the diners—rather than any fundamental quality of the act as subversive. Sabotage shows itself as fully multidirectional. It is not an operation with definite content but an exacerbated relation. Nowhere is this clearer than Flynn’s stress on adulteration, both of the material quality of goods and services and the quantity of value they manifest. It is on adulteration that Boyd’s arrest turned, not an unexpected addition—pissing in the dye, say—but a minute amplification of a process already demanded of the workers:

He advised the dyers to go into the dye houses and to use certain chemicals in the dyeing of the silk that would tend to make that silk unweavable. That sounded very terrible in the newspapers and very terrible in the court of law. But what neither the newspapers nor the courts of law have taken any cognizance of is that these chemicals are being used already in the dyeing of the silk.

What Boyd urged was a practice already in play called “dynamiting,” which consists of adding metal compounds to the silk so as to sell the same weight of fabric with less of the expensive material. His suggested sabotage marked only a slight tuning of the process, one that could be hidden in plain view, given that it involved no grand gestures and no external elements to finish ruining the material and make it unsellable.

But even this tactic of full ruination is not unique to rebellion. In a surprising passage of inversion, Flynn describes the experience of buying silk to make a dress, hanging it in a wardrobe, and taking it out later only to discover that it is not silk but “old tin cans and zinc and lead and things of that sort.” The adulteration—the capitalist sabotage—uses the delayed visibility of circulation to pass unremarked and hide its effects through distribution out into the world, just like the cop cars that, a week after leaving the factory, shudder themselves to pieces, with no one around to blame for the initially loosened bolts. It is in this way that sabotage marks what she calls that “fine thread of deviation,” the thin difference between sabotage of silk in the name of profit and sabotage of silk to spit in that name. To discern it comes to require a form of hyper-close reading, passing amongst anonymous materials and traces only legible to those cursed to deal with them daily, not just at the site of production but, like the one opening the closet, in the home, the market, the field, and the hospital, with watered milk, sawdust bread, pseudo-aspirin, and disintegrating fabric coming apart in their hands. The OSS’s 1944 field manual for teaching simple sabotage, later declassified by the CIA, urges:

The saboteur may have to reverse his thinking, and he should be told this in so many words … Once he is encouraged to think backwards about himself and the objects of his everyday life, the saboteur will see many opportunities in his immediate environment which cannot possibly be seen from a distance.

What it entirely misses is that such a capacity to be “backwards” resides in those systems and objects as already social, and that this doesn’t need to be told “in so many words”: it is amply evident to those who view it from within rather than from above. These reversals are the product of an intimate, almost bifocal thinking that sees both the tiny detail and the huge circuit enfolding it and uses this torqued perspective to anticipate the eventual consequences, suggesting a counterwork that knows how and when to stay in the shadows.

What’s striking about this is how much it refuses a politics of either exodus (the return to some lost human community and/or nature), separation (autonomy as a class that can stand apart from the systems it is embedded in and fight from there), or representation (the negotiation of competing interests on the basis of displacement into single individuals or councils to stand in for many)—the three options that arguably constitute the majority of political thought, including of a radical bent. Instead, sabotage tends to suggest a form of inflection, one that sees the ground of its daily activity as a diachronic map and tremendous reserve of materials, aspects, and properties constantly contested and open to inversions. It suggests, in part, that we begin to treat that ground—the lived terrain of capitalism—as itself an enormous inhuman and self-drafting design project, both seemingly made for and by us, however viciously, and yet driven by principles and tendencies that can be assigned to no one, to no plan of action or authored project of accumulation.

To sabotage, then, means to let the negation vanish into that design, in a dissimulating mimicry of normal function that only shows itself as noise, turbulence, and a creeping sense that something is going on here. The failures it helps precipitate are posed unstably between malfunction and malevolence, and if there is bad work that’s been done, it doesn’t flow directly from the one who performs it as a job. It emerges instead from the way they activate what’s latently there in that tangled landscape, both distinctly human and unable to be reduced to that, what Amadeo Bordiga called “that modern forest of bayonets and chimneys.”

My last point, though, is that this isn’t a general condition about subjects and objects, an ontology oriented towards whatever predicate. It’s a historically specific complex that marks the parameters of how we engage and navigate each other and our spaces, and it’s based on the analogical interchange between the built and the born, the technical and the organic, and the abstract and concrete that forms one of capital’s most striking tendencies. What the idea of sabotage did, long before any talk of the anthropocene, was insist that this is the terrain we’re operating on, whether or not we would prefer it otherwise, and that this intimacy with its conditions of production opens up distinct chances for disturbing the social and physical structures upholding, perpetuating, and policing it.

In many ways, the increasing inability to discern between static objects and objects of surveillance, measurement, and feedback—i.e. the internet of things—suggests how familiar sabotage’s suspicious blurring of the intentional and the accidental will become, and we might well reread sabotage in this light, treating it as a prehistory of hacking. If we do, we should still keep in mind how its center never lay in the literally mechanical exploit itself, like a lathe ready to be dulled, but in the way that, as Silvia Federici puts it, “the human body and not the steam engine, and not even the clock, was the first machine developed by capitalism.” To insist on a capacity to amplify failure through tense networks of the poorly built and ceaselessly maintained requires that we see how we’re also talking about ourselves, about who maintains us and how, about how we are made both fragile and generic, about who has to live this most literally, about how we’re supposed to make ourselves seen while pretending to be otherwise.

In that CIA manual, we read how

anyone can break up a showing of an enemy propaganda film by putting two or three dozen large moths in a paper bag. Take the bag to the movies with you, put it on the floor in an empty section of the theater as you go in and leave it open. The moths will fly out and climb into the projector beam, so that the film will be obscured by fluttering shadows.

No matter its wretched source, it’s a telling emblem for the dense field of visibility, exploit, and stoppage where sabotage operates. Because in this case, the film gets obscured and halted by yoking together a set of otherwise functional properties: a projected film uses shadows to form shapes, and a projection requires light strong enough to cast the shadows forward over the heads of a crowd. As for the moths, the trick draws on their infamous phototactic tendency—to be drawn to light—and on a biotechnical glitch that itself speaks of a historical passage. The still-predominant theory treats that phototaxis as “something of an evolutionary short circuit,” what happens when a mode of navigation based on the light of celestial objects finds itself in a world that has cancelled night and keeps the bulbs burning straight through, no longer unreachably far but close and hot. Beneath the junction of these properties, each of which serves its purpose, the theater goes dark, and no one can say for sure who is surprised and who knew all along, who leaves in frustration and who is busy gathering more moths in the moonlight.


Reform School


Capitalists will constantly seek to reshape schooling because their labor supply can always be more efficient 

BY the time most public commentators are old enough to publish a book, they have put enough distance between themselves and their compulsory education that the particular ways in which it sucks are hard for them to recall. At 20, education reformer Nikhil Goyal is an exception to the rule. In his Schools on Trial he captures the particularities of a kid’s frustrations with rare vividness. A typical sentence: “In both prisons and schools, you are cut off from the rest of society, stripped of your basic freedoms and rights, like free speech and free press, told what to do all day, and surveilled dragnet style.” This description of compulsory schooling reads like heresy in print but will be immediately confirmed by anyone actually living it. School sucks, remember?

Whether or not school—with the sucking—is worth it isn’t usually a question for serious debate. We’ve come around to the idea that individual students shouldn’t have to experience psychological or physical bullying from their peers as part of their education, but there’s no amount of collective child dissatisfaction or unhappiness that could force adult policymakers to reconsider making kids go to school. It’s the bedrock of our democracy; you’re not supposed to enjoy it.

The common idea across most of the American political spectrum is that compulsory state-funded education is the liberal way to create a knowledgeable and engaged democratic citizenry. Without it, children would either be left to work or would never bother to educate themselves. The specter of illiterate future generations is invoked by both school reformers and defenders of the current system. Although there are many people within the public education system who believe in the noble goals of civic pedagogy, that’s not what America’s schools were built to do. Goyal argues convincingly that, before compulsory schooling, unenslaved Americans were not only extraordinarily well-read by international standards but widely covetous of learning. Compulsory schooling was not introduced to solve the problem of uneducated, unengaged, or unthinking masses. If anything, the opposite is closer to the truth.

In 1837, Horace Mann, the founder of American compulsory education, established the Massachusetts Board of Education, the first such agency and one which would become the model for the nation. But Mann didn’t want a more intellectually engaged population—literacy in the state already stood at 99 percent. Social control was a serious concern for Western elites after a series of failed revolutions, and Mann was very impressed by the system he saw on a visit to Prussia. He returned with a plan for public education.

“Compulsory schooling evangelists,” Goyal writes, “which included many industrialists and financiers, in fact, wanted to ‘dumb down’ the American population to create docile followers, not potentially troublesome freethinkers who questioned authority.” There’s no real controversy as to Mann’s intent or the founding ethics of the American education system, and its origins in German tyranny have been detailed by other critics of compulsory schooling like John Taylor Gatto and Jonathan Kozol. The system was profoundly anti-democratic by design. Whatever else it has become, compulsory education was originally built to produce a rigid class hierarchy of adult workers and ensure obedience to the Kaiser.

But the needs of elites change over time, and so must the schools. By the early 20th century, industrialists had become obsessed with the idea of efficiency and scientific management. Concerned as always with their labor source, the business community wanted to reshape the schools, but first they sought to undermine public confidence in the schools they already had. Goyal describes the first school reform movement this way: “The business community began its assault by bashing the state of public schooling, employing statistics on the ascending illiteracy rates, low student achievement, and the number of children who didn’t finish high school as evidence of failing schools.” Having succeeded, they ported metrics like average achievement, work speed, and, most importantly, cost-per-pupil into the discussion about pedagogy.

A century later, these sorts of metrics still rule American education. The amount of data produced by American students has increased, and they are now measured through ever more standardized tests. Still, the same class interests that created American compulsory education and reinvented it once are not satisfied. Here’s how Goyal describes today’s corporate education reform movement:

It is a movement being bankrolled by foundations, Wall Street hedge fund managers, other kinds of billionaires, advocacy groups, and think tanks. They want to send public education off to the guillotine. They champion a free-market, neoliberal orthodoxy, which includes closing schools, privatization, vouchers, charter schools, Common Core standards, high-stakes standardized testing, abolishing locally controlled and elected school boards, performance-based pay, and firing and admonishing teachers. They strive to profit off of schoolchildren and believe that schools should be run more like businesses and corporations.

In the light of the history Goyal lays out, this reform movement seems not so much a threat to the American public education system as very much in the tradition. The ruling-class corporate reformers are a persistent feature, and they can be relied upon to come up with new ways to tailor (and Taylorize) education to fit their needs.

What are American public schools for? Despite as many different perspectives on the question as there are people who’ve passed through them, American public schools are for American economic progress in today’s global economy. If education was once meant to produce good Prussian monarchists, and later to battle the Soviets, these days the justification is an internationalized labor market. On the White House issue site for K-12 education, here’s how the President introduces the topic: “In today’s global economy, a high-quality education is no longer just a pathway to opportunity—it is a prerequisite for success. Because economic progress and educational achievement are inextricably linked, educating every American student to graduate from high school prepared for college and for a career is a national imperative.” It’s not so different from the mission of, say, the U.S. Chamber of Commerce’s Center for Capital Markets, which seeks to “advance America’s global leadership in capital formation.”

The Obama Administration cites four key objectives in its reform agenda, and they’re worth examining in detail because they map very well onto the larger corporate reform movement:

• Higher standards and better assessments that will prepare students to succeed in college and the workplace.

• Ambitious efforts to recruit, prepare, develop, and advance effective teachers and principals, especially in the classrooms where they are most needed.

• Smarter data systems to measure student growth and success, and help educators improve teaching and learning.

• New attention and a national effort to turn around our lowest-achieving schools.

Of course no one calls for reform to lower standards, but assessments designed to measure future workplace success are somewhat specific. This first plank establishes the direction for the following three. The second plank calls for “effective” teachers and principals where they’re most “needed.” Since efficacy and need are determined by the assessments and standards in the first plank, that means bringing staff who will produce more future workplace success to schools that aren’t producing enough. The data systems in plank three are an update to the scientific methods imposed on schools a century ago, to guide the path from assessments to standards. In plank four, schools that don’t meet the standards will be “turned around,” presumably so that they face achievement.

What exactly is “workplace success,” and can everyone achieve it? Here’s what President Obama told students at Wakefield High School in Arlington, Virginia, in 2009: “No matter what you want to do with your life—I guarantee that you’ll need an education to do it. You want to be a doctor, or a teacher, or a police officer? You want to be a nurse or an architect, a lawyer or a member of our military? You’re going to need a good education for every single one of those careers. You can’t drop out of school and just drop into a good job. You’ve got to work for it and train for it and learn for it.”

In this formula, the president implies that with hard work everyone can get a good job. This is the premise for a lot of public education rhetoric, and it is 100 percent false. It may be technically true that in the American system anyone can get a good job, but that doesn’t mean most people aren’t out of luck. Anyone can win the lottery, but everyone certainly can’t. America is still a class system, and by design, most people—no matter the average level of education or job skill—will have to sell their labor to property owners in order to feed and house themselves. Those property owners are the same people who have spent the past hundred years shaping the education system and scientifically reducing labor costs.

So which is it? Is the American public education system meant to increase average wages by training all students in job skills, or is it meant to decrease those same wages by providing employers with a glut of well-prepared potential hires? It can’t be both.

In the second half of Schools on Trial, Goyal looks at alternative schools that prioritize student freedom and self-determination. These anecdotes are a good antidote to the idea that our current schools are the only, best, or even a good way for kids to spend their time. These experimental schools are alternatives, but Goyal knows they’re not very realistic at a structural level. Public schools are held to increasingly rigid standards, all of them geared toward workplace success. Private schools may have more freedom but can only be so broad in their impact, and any energy that goes into them doesn’t go into the public system.

The subtitle for Schools on Trial is very carefully worded: How Freedom and Creativity Can Fix Our Educational Malpractice. Goyal doesn’t say that freedom and creativity can fix our educational system because, upon close historical examination, our educational system is not broken—at least not any more than it’s supposed to be. Students can always be more effective future workers, and the enduring corporate education-reform movement and its lackeys in both political parties are always ready for a new push. At the end of the day, in a capitalist system, public education will produce wage laborers, and the American education system does a good job at producing the wage laborers that employers require. If it didn’t, employers would be forced to increase pay and train the skilled workers they need themselves.

Goyal thinks education should be about human flourishing, and it’s hard to disagree. But in the American economic system, flourishing is a question of competition. If everybody grew two feet taller I’d be better at basketball, but my odds of making the Knicks wouldn’t increase. Our national malpractice, as Goyal puts it, doesn’t begin or end with education. You can’t set children up to compete to exploit or be exploited for the rest of their lives and promote the values of joy and comradeship and learning at the same time. Luckily, America only needs one of the two.

In the middle of the Egyptian revolution, a video interview with an extraordinarily well-spoken 12-year-old named Ali Ahmed went viral. The boy spoke with a passionate intelligence that clearly outmatched pundits. Asked about the draft Constitution (which he had read closely online), Ahmed said, “I can beat my wife up and then tell you this is discipline. This is not discipline. This is abuse and insanity. All of this political process is void, because the parliament in the first place is void. Popularly and constitutionally void.”

The interviewer couldn’t believe what he was hearing from a child. Schools don’t teach these kinds of skills. Ahmed’s command of knowledge was colored by a sense of immediate responsibility, not just for himself but for his society, that we associate with adulthood:

Interviewer: Who taught you all this?

Ali Ahmed: I just know it.

Interviewer: How do you know it?

Ali Ahmed: I listen to people a lot and I use my own brain, plus I read newspapers, watch TV and search the Internet.

Interviewer: So you see that the country is not doing well and it has to change?

Ali Ahmed: You mean, politically or socially?

When President Obama talked to students about responsibility at Wakefield High, he told them, “at the end of the day, we can have the most dedicated teachers, the most supportive parents, and the best schools in the world—and none of it will matter unless all of you fulfill your responsibilities. Unless you show up to those schools; pay attention to those teachers; listen to your parents, grandparents and other adults; and put in the hard work it takes to succeed.”

But American success is not only a limited quantity; it’s a low standard. A nation of Ali Ahmeds might not be suited to produce the kind of workplace success the employers at the Chamber of Commerce have become accustomed to, but that’s because his education has pointed him toward more important concerns and much greater responsibilities. 


Political Vernaculars: Freedom and Love


New languages untethered to the state can help us imagine how we want to live with each other

“Now more than ever, we need the strength to love and dream.” —Robin D.G. Kelley, Freedom Dreams

“We will need writers who can remember freedom.” —Ursula K. Le Guin

Political vernaculars announce a conversation about politics: They are the words and phrases that assemble something experienced as the political and gather different groups around something marked as the political. They are the words and phrases that disassemble people around the political, as in “I prefer not to discuss politics.” They create attachments to the political, and they also distance us from something known as the political. They create possibilities for different ways of coming together—from short-lived experiments to long-term institution building—and they also impede how we form ourselves as we-formations, across the past, the present, the future, and all the in-between times marked by slow violence and prolonged dying.

In Kenya, impunity, corruption, negative ethnicity, graft, tribalism, development, dissident, blogger, land grabs, good governance, national security, and constitution are some of our political vernaculars. If we listen to the whispers, we might catch mass graves, torture, exile, disappearances, massacres, and rapes.

Kenyans know these terms as political. And readers of Ngugi wa Thiong’o will already have some framing of the vernacular: Vernaculars are “home” languages banished from colonial institutions, especially schools; they are anti-oppression tools used by those excluded from elite institutions; they are frames through which we apprehend the world, following Fanon; and they are practices for building community. In colonial and post-independence Kenya, vernaculars were also framed as elementary languages—the languages taught in lower primary classes, up until standard 3 or 4, at which point English and Swahili were introduced as “more mature” languages. Vernaculars are ways of claiming and shaping space.

Vernaculars also discipline, producing habits and dispositions, ways of acting and feeling and thinking. Most of Kenya’s official political vernaculars—corruption, impunity, national security, for instance—are disciplinary. They name real issues, but they also manage how those issues are handled. One notes the repeated cycle: Identify an issue, call for investigations and firings, establish a commission, commission a report, then file the report in the graveyard of reports. Even those who are aware of how this cycle works—even those most critical of it—cannot imagine anything else. And thus, each new scandal enters the established cycle of the political vernacular.

Let me be more explicit: The processes set in motion by existing political vernaculars ultimately remain in the frame created by them. And the less effective these political vernaculars are at diagnosing and establishing processes that work, the more insistently they will be used, as though repetition will somehow break the frame. It will not.

Political vernaculars shape the conversations one can have. Say “corruption” in Kenya and all in attendance will proclaim it a terrible scourge; say “tribalism” and, depending on where we are and who we are with, some will call it terrible. Say “impunity” or “good governance” and the positions are already established, arguments in place, and emotions already arranged. This political is not a place where persuasion can happen, where positions can shift or co-imagining can take place.

Instead, Kenya’s dominant political vernaculars shepherd or funnel us into predictable ends, generating two related demands: that the bad thing stop and that the good thing continue.

Let me be a little more concrete.

“Corruption” is Kenya’s dominant political vernacular. For as long as I can remember, Kenyans have been discussing corruption, and for as long as I can remember, the conversation has focused on stopping corruption. Corruption is a bad thing. Cessation is the demand.

But what kind of demand is cessation? How is cessation related to ecocide, ethnocide, and genocide? Does cessation live in an adjacent neighborhood? Cessation trains imaginations and desires; cessation shepherds and funnels us toward predictable ends, the ending of a bad thing. We demand cessation all the more insistently, even as it fails to obtain the results we want. And the ever-proliferating sites of corruption and ever-multiplying demands for cessation—NGOs set up to tackle corruption, government institutions established to fight corruption, reports published on corruption, stalled court cases on corruption—become a closed, self-perpetuating system, feeding on itself.

One is unable to imagine beyond the thing that must be stopped. There is no “after” corruption. And this inability to imagine an “after” makes cessation the only possible demand, the only way to imagine a future.

Except that cessation does not produce futures.

Corruption is Kenya’s negative political vernacular; development is its opposite number, its positive. (I’ll simply note in passing that corruption often happens on development projects; the vernaculars do not allow this to be noted in anything more than passing.) Development is a shepherding political vernacular, because development is what one cannot not want. Development captures imaginations—one is not permitted to think beyond, against, or beside development. But the failure of development projects—often through corruption—only leads to demands for more development projects, and quite often the same ones. Development in Kenya is tightly controlled: We must all want the state-created Vision 2030. Even critics of the state insist that their critiques are devoted to achieving Vision 2030.

As political vernaculars, corruption and development create frames and processes, ways of thinking, speaking, and acting. They act in concert to produce and restrict the demands that can be made. They shape the possibilities for what is thinkable. They flatten thinking into habits, repetitions, and negations.



“Well, what is your solution?” is a common response to political critique. It is not a vernacular; it is, in a way, an anti-vernacular.

“What is your solution?” masquerades as an invitation to participate in a public process, to take part in a collective process in which every voice matters. The “your” is supposed to be democratizing, removing barriers of age, education, and privilege; everyone is welcomed to provide solutions. But the invitation is disingenuous. It diminishes the importance of local, situated knowledges accumulated through experience and training, and devalues expertise gained through research and reflection. Institutional memory—memory from experience and practice and training and research—will be deemed unimportant: One is simply encouraged to provide a “solution,” no matter one’s knowledge base or training.

“What is your solution?” never asks “you” to consider institutional memory and never encourages “you” to imagine that it can think and act with others. Instead, this “you” is atomized, transformed into an isolated solution-provider. “What is your solution?” refuses the possibilities of coalition and collective action.

“What is your solution?” tethers political possibilities to state imaginaries and practices, shepherding us into addressing the state on its own terms. One must learn the state’s languages and processes to engage it; one must learn to be legible on the state’s terms to engage the state. And this is what those asking “what is your solution?” are demanding: that one become fluent in and legible to state-tethered imaginaries. “What is your solution?” is the tethering mechanism, one that does not permit any thinking outside of state imaginaries. And your legibility before the state will be predicated on your fluency in state processes. You will be understood so long as you repeat that which has become habitual.

We need political vernaculars: We need terms that are widely understood and that we can use to build collectivities, to create sharable worlds, to make demands, and to name and fight injustice.

We also need political vernaculars untethered to state imaginaries.



As 2015 came to a close, I imagined a year-long project for 2016. Each month, I would propose a different kind of political vernacular. Over that month, the vernacular would become the occasion for a range of activities: Artists could create around it, teachers could encourage students to write essays and hold debates around it, mainstream media could be encouraged to discuss it. The plan was to saturate as many spaces as possible with that vernacular.

I don’t have any of this pull, of course, but one imagines what one can. While I still think we need different political vernaculars, I don’t know that we need twelve different ones. After a point, that can seem gimmicky. And over the past few years, I have kept returning to two political vernaculars: freedom and love.

It’s normal to save bibliographies for the end, but I must acknowledge, now, the people whose thinking subtends this writing: Claude McKay, Audre Lorde, Adrienne Rich, Georgia Douglas Johnson, Angelina Weld Grimké, Countee Cullen, Richard Bruce Nugent, Essex Hemphill, Joseph Beam, bell hooks, Christina Sharpe, Rinaldo Walcott, Dionne Brand, Katherine McKittrick, John Keene, Erica Hunt, Shailja Patel, Melvin Dixon, Wambui Mwangi, Aaron Bady, Kweli Jaoko, Thomas Holt, Fred Moten, Robin Kelley, Chris Taylor, James Baldwin, Mariame Kaba, Paulo Freire, Frantz Fanon, and my mother, who taught me how to imagine freedom.



Sometimes we drug ourselves with dreams of new ideas. The head will save us. The brain alone will set us free. But there are no new ideas still waiting in the wings to save us as women, as human. There are only old and forgotten ones, new combinations, extrapolations and recognitions from within ourselves, along with the renewed courage to try them out. And we must constantly encourage ourselves and each other to attempt the heretical actions our dreams imply and some of our old ideas disparage.

—Audre Lorde, “Poetry Is Not a Luxury”

Freedom and love are powerful vernaculars. To believe in them now might seem “heretical.” Perhaps because of how powerful such vernaculars are, the Kenyan state has spent the past few years attempting to tether freedom to a state imaginary. But freedom keeps escaping. It will not be tethered that way.

Love is a tarnished vernacular in Kenya: Those of us who grew up under Moi’s “Peace, Love, and Unity” philosophy learned to distrust how love was used as a disciplinary tool. It was our duty to love the president. Freedom and love arrive with complicated histories; privileging them requires “renewed courage.”

What can we do with freedom and love as political vernaculars? What can they do for us?



Freedom and love can help us imagine how we want to live with each other and what we want to build together. They allow us to imagine how we want to feel and act toward each other. And they allow us to imagine how our daily lives might change. To take a very concrete example, we can imagine that a Nairobi that privileged freedom and love would have safe streets for women and queers, no matter the time of day. One could be in public without facing harassment. One could enjoy being in public.

We must be able to imagine in very concrete ways what a world focused on freedom and love would be like.

In their simultaneity, freedom and love exist beyond what the state can tether. They push our imaginations in other directions. Against the state’s war against freedom—since freedom cannot be absolute—we can imagine freedom not as the right to violate, but as a way of being together: your freedom enhances mine. Love becomes central to imagining freedom in this way. We can imagine freedom away from international protocols that try to define and restrict the meanings and practices of freedom. We can imagine what lives devoted to pursuing and experiencing freedom and love would be like.



Freedom and love direct us to create. Keeping them as what we pursue enables us to shape our demands to those in power, demands that go beyond cessation. It’s useful to think of such demands as ways of creating and building.

Freedom and love can shape aesthetic practices. Toni Morrison’s Playing in the Dark has become important to me because she discusses lazy aesthetic practices, those well-worn ways of writing about the world “as is” that replicate, without questioning, all the ways those habits of worlding undo the human. The work of the imagination, she teaches, is something altogether different, something harder, more urgent, more interesting. What can we imagine and create once we stop relying on old, un-humaning tropes?

What kind of world can we create together?



Practice freedom. Practice love.

Freedom and love are doing words. They are we-forming, we-sustaining words. Their conjoined impulse is toward making collective living more possible and more pleasurable.

Asking “is this increasing freedom?” or “is this promoting love?” anchors and pushes other political vernaculars, reminding us what is at stake.






I have been using “we” and “us” to gather. Gathering is an act of the imagination—one never knows who reads one or if one will be read at all. One imagines that gathering might take place. Perhaps this is optimism.

I write this at a time when Kenya is caught by a security imagination, when state violations of privacy and rights are justified by invoking security. State violence is justified by invoking security. Critiques of the state are shut down in the name of security. Mainstream politics is conducted in the name of security, and it’s unclear if those of us trying to imagine other ways to be and to be together can imagine beyond the security imagination. It’s unclear if anyone can hear us.

Unfreedom is not abstract.

Critiques of the state are now muted. As we did under Moi, we whisper among friends and hope that none of them is working for the state. Our bodies are tense, our muscles clenched, our frown lines deeper, our laughs louder and brighter, edging toward the hysterical.

Enmeshed in this, we are trying to imagine it might be different.