O’ROURKE (London.) – Thanks, comrade. We are more proud of the comradeship of toilers like yourself than you can well imagine. It is such loyalty as yours that keeps us hopeful of our class and country.
CÚ CHULAINN (Dundalk.) – No! We do not believe that war is glorious, inspiring, or regenerating. We believe it to be hateful, damnable, and damning. And the present war upon Germany we believe to be a hell-inspired outrage. Any person, whether English, German, or Irish, who sings the praises of war is, in our opinion, a blithering idiot. But when a nation has been robbed it should strike back to recover her lost property. Ireland has been robbed of her freedom, and to recover it should strike swiftly and relentlessly, and in such a fashion as will put the fear of God in the hearts of all who connived at the robbery or its continuance. But do not let us have any more maudlin trash about the ‘glories of war’, or the ‘regenerative influence of war’, or the ‘sacred mission of the soldier’, or the ‘fertilising of all earth with the heroic blood of her children’, etc, etc. We are sick of it, the world is sick of it. And when combined with the cant about ‘patience’, and ‘waiting’, and the ‘folly of rashness’, and the ‘wisdom of caution’, and all the other phrases that are to be heard from the Irish eulogists of war we confess it gives us a feeling like sea-sickness – nausea.
No, friend! War is hell, but if freedom is on the farther side shall even hell be allowed to daunt us?
In a just world, a virus of tremendous scope and tenacity would ravage all the archives and vaults and shelves, all its servers and drives, its dens and libraries. It would draw no distinction between digital or analogue, bootleg or licensed. The CDC would be baffled: it appears to be crystalline in structure, yet its rate of replication is unprecedented… Pundits would lose their shit on air, terrified that the virus might mistake them for the already-recorded and snake their throat mid-speech. Amazon would go on full lockdown: nothing in, nothing out, its long-rumored drones circling on updrafts, training red dots on anything that moved. Other rumors would abound, like how a conspirators’ union of ex-Blockbuster execs and the remnants of local video stores were behind it all. Still, it would creep through plastic and code alike, invading mancaves and Netflix queues, no matter the firewalls or plastic sheeting or shotguns. We would be held in thrall, in disarray. But despite the fears of those who lie awake and hear it rifling through code and celluloid, its endgame would not be to spread to the human body. Its symptoms would be simple: whenever it encountered a reel or MP4 or DVD that contained The Help or The Butler or The Intouchables or The Legend of Bagger Vance or Get Hard or Hitch, it would consume all the data that makes them up and leave in its place Welcome II the Terrordome.
But if this was a just world, Welcome II the Terrordome would not exist. It would not be entirely necessary, which is what it is. Maybe that’s the case for any of those rare things that actually deserve to be called political film: they need to be seen, as often and by as many people as possible, but they exist precisely because the order of the world is posed in full against that possibility, its hackles up and Bagger Vances ever-ready for immediate deployment. They could only be about this world, unmistakably so, but it’s this world alone that they exist to ruin.
Who shall say that man does see or hear? He is such a hive and swarm of parasites that it is doubtful whether his body is not more theirs than his, and whether he is anything but another kind of ant-heap after all. May not man himself become a sort of parasite upon the machines? An affectionate machine-tickling aphid?
- Samuel Butler, Erewhon, 1872
Until five years ago, I only had a couple of intimate and tactile relationships with glassy surfaces. The first began early, a symptom of growing up where winter is long but inconstant, with temperatures that climb and drop hard. Ice always appeared less a thing than a pause, a literal freeze-frame, because in Maine you don’t get to step outside of the process. You don’t go through winter piecemeal, always in a full circuit. First, there’s the anticipation of snow, which can be huffed, a smell of dry in the throat’s back, like smoke but from burning metal, not wood. Then comes snow, sleet, tapering-off rain, slush, night’s black ice, partial melt, brown lumps of not-snow-but-not-ice, and then do it all over again, all on a long GIFish loop until you wake up and it’s late April and outside, all the permafrost dog shit has come to light and nose.
But when that pause happens fast enough, or with a slight interval of rain before night, you get the rare marvel of totally clear ice. We’d go skating, down the Royal River or out in the Cumberland marshes, where you could TIE Fighter swoop between the cattails and ash stands. I’d catch cracks and end up mouthing the milky pond, cutting my hands. Like an improbable crush or money, ice’s surface always split this way, pitched between pleasure and damage, never more dangerous than when you don’t realize you’re already navigating it, when it’s black on back roads.
When my grandmother’s second husband died, she moved from Florida to live near us. I became newly, sharply aware of just how fragile we can be when crossing ice, especially when our bodies have already begun to bend like birds. All the same, I’d carry slick loafers in my backpack to school and put them on for the downhill walk home, to spend it slipping as much as possible, acting out a semi-skilled pratfall teetering on the edge of facial reconstruction. The gestures we develop to negotiate ice barely manage this tension, always swerving between sublime grace and dorkly flailing. The cool kids didn’t wear jackets when it was cold – James H. never wore a parka, was structurally incapable of sleeves – but ice can still blow anyone’s cover.
Man Ray’s “Dust Breeding,” 1920 photo of the back of Duchamps’s The Bride Stripped Bare by Her Bachelors, Even (The Large Glass)
Given that I don’t wear glasses, my other memories of focus on, and care for, the smooth and unyielding are all about jobs. One was selling wine, where you spend so much of the day touching and holding glass. Stacking boxes of it, proffering $200 bottles of it to the rich like snake-oil. Lifting and arranging it, realizing that when vintners want their wine to appear to justify their prices – to demand what they fetch – they make the bottles heavier, as if value was something to be hefted and luxury sold by the pound.
For slick surfaces, the bottles gathered endless dust, or at least the dust appeared so visible because every speck violates an unspoken premise of wine fetishism. The wine can appear old, but only if it is wildly expensive, starts with Chateau, and conjures cellar visions. If not, it must look as if it’s just on a layover, a minor lag before rushing off the shelves. This image of the agreed-upon guarantees a consensus of taste, which is what yuppies need as a backdrop to take flight from it. It lets them make shopping into a virtuosic labor of discovery, Indiana Jones on a Brunello kick, so misanthropically excited to prove to you that unlike the other schmucks quaffing whatever, they alone have memorized the precise numerical grade Robert Parker gave to a 2007 St. Supéry Napa Valley Estate “Elu”.
So all day I touched glass, but the glass was supposed to be invisible, only a problem – only visible – when too slick and itching to fall. It only worked when it could be ignored.
The other job with this surface experience was washing dishes at restaurants where I’ve cooked. It was the opposite of wine’s glass. Washing makes you fixate on trying, as fast as possible, to get a surface back to looking like it was never used in the first place. (Or just close enough that your boss won’t make you do it over.) This requires, and therefore develops, a strange hyper-sensitivity of fingers and palms, even as the heat, water, and bleach wreak havoc on hands, because you have to gauge without really looking: how much is still crusted on, how much texture there is to what is supposed to have none. It is hellishly boring work, barring the radio and talking and sometimes flirting, because going through the motions means learning a set of gestures repeated over and over again but which never get to be fully automatic, the rhythm broken every time burnt salmon skin just won’t let go.
Until five years ago, these were the only times I had significant, affectively-thick tactile experiences with glass or the glassy. But five years ago, I got my first phone with a touchscreen. Now, like most people I know, I touch, rub, tap, worry, flick, and stroke glass at least once an hour, almost every hour that I am awake, almost every day of the year. My days, and whatever intimacy they include, are inseparable from the feel of something that shivers, as if touched with ice, yet always touches the same way back. I fell in love from afar, finger-skating fast filth on a Foxconned slab.
Fingers found in a Google Books scan of an 1848 fiction collection from Maine
I have zero interest in either bemoaning or celebrating this, because it makes no sense to me to think it in terms of good or bad. Still, what is certain is that this transformation of experience – of a juncture of surfaces, signs, sight, and touch, a juncture crystallized around the touchscreen – is without precedent in human history, in terms of just how fast it has rewritten modes of gesture, reading, and seeing for just how many people.
There are, of course, other equally large shifts in humans’ technical-social experience. The use of buttons, for instance, in the typewriter brought with it a widespread experience of language as mechanically input, one previously available only via the typesetter or the telegraph and hence still requiring an “expert’s” mediation. The radio’s simultaneity, both from source and site of listening and between those sites, between the car, bar, home, and farm. The mechanization of war. Cinema’s animation of the still and reproduction of motion. The “whipping machine.” Mass-produced commodity markets. All online everything.
Harun Farocki, Eye/Machine II, 2002
Still, it seems right to assert that almost no other machine – not the phone itself but the set of gestures, textures, embedded memories and technical knowledge, metals and plastics, supply chains, work, and social forms that stitch together at the moment where it and I use each other – has so rapidly become inextricable from the everyday. The other drifts into use (and into the familiarity of being casually touched and unremarked upon) took decades, if not centuries, even if their effects were as dramatic in the long run. Indeed, the only others that happened close to this scale and velocity, across borders and populations, are all bound to war, to the sudden awareness of being part of a new machine – one named trench, carpet bombing, drone, gas attack, napalm, heat signature, or counter-insurgency – that will literally kill you if you don’t rapidly come to terms with how it functions. They remake the landscape and turn all maps 3-D, all neighborhoods into theaters of operation. And no matter how atrociously normalized that becomes, it can never approach the sleepy familiarity of a thumb that vaguely flicks a feed.
My interest doesn’t lie in the general anthropological overhaul bound up with the digital, for which the phone has come to serve as one particularly visible index. I wouldn’t know where to begin with that. Instead, I’ve become fixated on the sensation of a screen, on how it shifts from a sensational element in space – one that delimits the distance covered by projection or around which we gather – to a sensual object in itself. On how only a few years ago, I did not carry around a warm slab of receptive ice in my pocket but now don’t need to look down to trace patterns into and via it. Even more simply, in the way that so many of us spend so much time petting slick and smeary surfaces, carrying windows in our ass pockets.
But with this whiplash recalibration of sight and hand, to find precedent for how it feels to not only touch but also to hold and carry screens, to care for them to the point that I don’t think about them, I have to go back to what seems far from screens and their images, back to dishes, wine, and ice. Growing up with computers made a visual-practical relation to screens second nature, sure, but I learned to manually navigate Androids through winter and wages. Or through reading about Spinoza polishing lenses and realizing that even if there’s only one substance, there’s no reason that all of a body moves at the same speed. Or the way my friend polishes her glasses with a grey cloth when she’s thinking or tired or both. The way that Tonya Harding fell and then wept, in spite of those gold blades. The way that Anna Kavan writes winter in Ice, running up against the limits of description and so just writing white and ice and frozen over and over again. How the security glass of bank windows requires blows of different speeds and type. Malaparte’s horses trapped in a lake’s prison of ice. Hundertwasser’s loopy, and totally correct, fantasy of cultivating mold on high-modernist glass houses. Dying in the ice worlds of Mario. Drawing dogs in condensation. Wiping dry erase boards with bare hands.
And, to some degree, through watching and reading American sci-fi. Though not as much as one might expect, because the touchscreen has an odd history in the decades before it became ubiquitous. Especially in speculative fiction fixated on the digital, ice and glass made famous appearances.
In William Gibson’s first novels, ice (or “ICE,” Intrusion Countermeasures Electronics) named the defense systems within cyberspace that had to be broken and evaded like frozen architecture, with the “worrying impression of solid fluidity” yet subject to becoming “the shards of a broken mirror.”
Hackers, on the other hand, took that “endless neon city-scape” and gave it a Tron-ish literalism, depicting information as physical construction and vice versa, mainframes as glass towers between which hackers can zip and maneuver.
But in Hackers, as in the Gibson novels (and other strains of cyberpunk), the ice and glass remained either virtual or a metaphor-made-architecture. When it comes to the apparatus for navigating info cities or server farms, it is a console, something unwieldy and manual that the “jockey” must plug into in order to pilot cyberspace like a glider (Gibson). Or it’s just a laptop you spray-painted with camo because you’re so bad-ass you don’t need to see what the keys say (Hackers and almost me, before I realized it was a terrible idea).
In the films I was watching in the ’90s and ’00s, ones actively concerned with feeling prescient, that split remained operative.
In Lawnmower Man (1992), the space is navigated from within the mind’s eye, as the climax of experimental drug treatment and with slightly grander dreams – becoming pure energy in a mainframe, controlling the world, etc – than expanded emojis. If anything, the film is especially intrigued by the way that these experiences are never purely digital. The god-to-be is not defeated by Wing Chun with trenchcoats or a proto-Matrix peeking behind the veil – that is, not by struggles inside VR. Instead, they just blow up the building where the mainframe is housed.
And when excitement and budgets didn’t get blown on CG to depict what the Internet looks like in cross-section, they went instead for the virtuosity of managing multiple screens at once, via a generic hacker’s keyboard flutter, as in Swordfish.
As for starships, they held to a vaguely pilot vibe. For all the advanced tech in Independence Day, the alien craft is piloted by something held – a tremendous metal joystick – and analog enough to be patched into by Goldblum’s Dell. On TV shows, like Battlestar Galactica and Stargate SG-1, the screen-heavy spaces of the “bridge”/combat control room/etc continued to contain what one would expect:
namely, keyboards and monitors as distinct things. One of the prime reasons seems to be that like the Kinetoscope, the touchscreen is an interface built for one, never particularly efficient where one might want to show someone else what’s on the screen. (“As you can see, Commander, the enemy’s ships are approaching from…” “Can you move your enormous hand? Jesus.”)
There are, of course, exceptions. Star Trek: The Next Generation, for instance, got in on the touchscreen game early. That’s a rarity, though.
Total Recall, 1990
Total Recall, for instance, manages to envision something not just of Wii Tennis…
… but also, and more saliently, TSA screening lines. Yet in that screening scene, we notice that the keyboards of the future (just the desk itself) on which the agents clatter are not themselves the screen. That sits next to it, a jutting black and green monitor.
Total Recall, 2012
By the time of the remake, 22 years later, phone and tablet lessons had been learned, and the film has become obsessed with this other possibility. Its screens are now a) indistinct from architecture and b) ready/eager to be touched, using the palm as one’s biometric identification so that Colin Farrell can have Very Urgent Chats, a hand laid on the surface like the Plexiglass wall of a prison visiting room.
All in all, most of what I watched was uninterested in the feeling of screens. It was far more hyped about skipping over the messy fingerprint stage and going straight to the vaguely holographic, as if aware that there’s something endlessly throwback and manual about drunk-smearing burrito fingers across Grindr. The banal fact that touch leaves traces and that screens need cleaning just doesn’t seem very future. (I’m still holding out for a director’s cut of Iron Man where Tony Stark struggles for an hour to place a screen protector on his faceplate without trapping any dust under the plastic, but I’m not holding my breath.)
Minority Report, 2002
And so, in Minority Report, not only does Tom Cruise not touch the screen. (The germs, he cries, the germs.) He also wears special gloves, data prophylactics that let him keep his distance. The screen becomes the wall, yet the wall ceases to be something we might rub up against: set in space, yes, but as immaterial as possible.
The Iron Man films go even further with this, arraying OS windows in the holographic air, letting you crumple up represented paper without even needing glass in the room.
That gets pushed to its extreme in what was the most stylistically innovative show on television, CSI: Miami, for which the entire precinct is a Constructivist upchuck by way of Lisa Frank, where the built and the screened look actually identical. I imagine Horatio walking into walls all day, mistaking them for browser windows and pastel lens flares. So while the entire fantasy of the show is of unlimited forensics, where no image is too poor to not be “cleaned up” into magically high resolution, no trace too minuscule to not bind it back to a body, the screens themselves just hover, neon and pure, as free of smudge and splatter as the agents’ white jeggings.
CSI: Miami ended its gloriously protracted decade in 2012. CSI: Cyber has just started this year, and it’s hard to say how long it will stick around. But even in its first episodes, one can see the mark of a different shift: the real existing ubiquity of touch screens means that for a show that’s nominally secular and set in the present, the world has to be full of screens, especially of the glass and touchable variety. The way these are depicted, though, is markedly split into a few variations, from which the full-bore holographic of Miami is largely absent, barring a showy digital autopsy scene that takes place in what looks like a motion capture studio that they didn’t bother to green screen.
First, those that cannot be touched whatsoever, as they exist only for the viewer, not for the character, in House of Cards-esque floating panels, as in the episode where they moralize massively against Uber (under the name “Zogo”).
Second, in huge situation-room panels at the center of their office, where they can do military-grade FaceTime but, crucially, still control everything by the keyboards and mice at their desks.
Third, with so many hands, gesturing, tapping, pointing, and getting in each other’s way. Sometimes these hands are part of shots that try and approximate what it feels like to be absorbed in a screen, like the camera rotating around a stationary Arquette, staring at her tablet while snippets of text and voice flutter about. Other times, it’s just Van Der Beek – or a hand model – literally pointing our attention while explaining what a phishing scheme is. (Again, the show may not be long for this world.)
The same dynamic is also at work with more future-oriented shows and films. It’s not until the real existing daily use of touchscreens by millions of people that sci-fi grudgingly hoists them into view. And when it does, it reserves them for restricted and obvious uses.
1. Phones of the near-future (same as regular, just a bit more translucent, like in Robot and Frank).
2. Proxy for the medico-erotic.
3. Domestic/military logistics (the cup of tea resting on the touchscreen control panel that monitors both a blighted earth and a glass-bottomed swimming pool). Indeed, it’s that moment of screen-as-table that comes closest to the fantasy most of us really have about touchscreens. Not to have them vanish into space, not to become ever thinner until they wish themselves into immateriality, not to have them lilt bright around our heads, but to eat off them, to watch Hannibal through a vanishing mosaic of Hot Pockets and give the holographic Ghost of Commodity Future a very sloppy middle finger.
Both that urge – the slob screen – and the way in which most speculative films and TV tried to ignore it until it became unavoidably quotidian strike me as telling, hinting toward an awkward proximity. For all the fantasies of the holographic and the talk of the immaterial, the fact remains that the digital stays rooted in the very physical, from the mining of rare earths to human mechanical turks, from the multiple hands of the compositor to the experience of touching glass. No matter what we do on these things that become inseparable extensions of the body and head – a phantom limb for every American! – they are nevertheless things that we hold and carry, leave at the bar and drop into the toilet. A dual, fractured physicality. Something worn and steady, always there and Jack-of-all-trades, slowly polished like sea stones in the hand over years and chirping beside the bed. An alarm, a flashlight, a last-resort vibrator. But also something tenuous and flighty, always in danger of being bobbled, juggled, and, finally, shattered.
They Live By Night, Nicholas Ray, 1948
Prior to touchscreen phones, there were probably only two significant interactions you could have with commodities through glass. You could go window shopping, entering a circuit of fantasy and denied fulfillment, becoming also an extension of the store itself, a flesh-and-blood ad’s image of rapt yearning. Or you could go looting. Those are the options: find some potential pleasure in being blocked by the glass or stop being blocked by it.
It’s in those terms that the opening of Joseph Lewis’ Gun Crazy (1950) is so smart. There, the lens of the camera and the “lens” of the shop window double up without telling us, as we’ve taken our position behind the looking glass before the film makes this clear with a reverse shot from the boy’s perspective. So the camera is already a security camera, opening up the possibility of not just the triangulation which psychoanalytical takes on cinema love so much – the apparatus, the gaze of the boy, the promise of the gun – but, much more importantly, the line of invisible demarcation that’s supposed to keep the gun on “our” side, the poor safely on the other. As is often the case, at least in actual history, it takes a rock to make it obvious where things stand, a rock to make clear that surveillance doesn’t stop when you turn your back on it.
With touchscreens, a new operation is possible: one can activate commodities, be they in-game powerups or Amazonian toilet paper, by pushing on another commodity layered with alkali-aluminosilicate glass, by pressing your face to the window. Still, despite the unbreakable and unscratchable promises of Gorilla Glass (a descendant of what Corning first developed as “Project Muscle” in the early ’60s), phones have a strange existence: they get broken, way beyond repair, and they keep getting used. Vic pointed this out to me: what else can be so busted, in terms of the promises on which it was sold (billions of pixels, the sheen of interactive glass), yet still be handled and used every day? Sure, keys fall off computers and we just hit the button below. Headphones die in one ear, and we rig them with twist-ties to get them to still play provided they are held in exactly the right position. Dissident mufflers get roped back into place.
Shards (in ad for repairing a screen built too fragile from the start)
Of course, the reasons not to replace screens are plentiful and obvious – mostly, that these things are so expensive to start, and few people can afford to put even more money toward them, even if they come to feel like a necessary element of how one navigates every day. Nevertheless, I’ve watched people embed shards and splinters of glass in their fingers, tiny motes of blood coming to the surface, because they keep swiping across a shattered surface that holds the cuts in place, bleeding because they had to reduce their pet’s stress level in Kawaii Pet Megu. When I smashed the hell out of my phone’s screen, bobbling it down a flight of cement stairs as it slid out of my pocket wrong, a screen protector kept the shards in position, laying a permanent spider web at the edge of everything I saw. But sometimes when I spoke, I’d find tiny crystals of broken glass dusting and cutting my ears, like a disaster had been whispering there.
Near the beginning of Alexander Kluge and Oskar Negt’s gargantuan, sprawling, and genuinely brilliant History and Obstinacy, an immediate concern is the difference between types of grasping. First, between the “crude grasp” (Rohgriff) of primitive accumulation – “Its particular grasp annihilates what is actually supposed to be accumulated,” as if Hulk tried to seize communal oil fields – and a finer one that “resembles a legislative machine,” self-regulating and perpetuating, not destroying what it snatches (p. 85). But they also mean “grasp” in a more literal sense:
“Of all the characteristics responsible for unifying the muscles and nerves, the brain, as well as the skin associatively with one another – in other words, for the human body’s feedback systems, its so-called rear view [Rücksicht] – the ability to distinguish between when to use power grips and precision grips is the most significant evolutionary achievement. It is the foundation of our ability to maneuver ourselves, an ability that is most easily disrupted by external forces. These forces are also capable of disturbing our self-regulation. Self-regulation is the outcome of a dialectic between power grips and precision grips.” (p. 89)
With touchscreens, we grapple with a new language of gestures imposed wholly by external forces, gestures that signify only insofar as they are registered in code. But perhaps the strangest of these isn’t one bound up with the software, neither the erotic indifference of the left swipe nor the fact that, in a rather generous act of anti-planning, the swipe-to-text function of my current phone obstinately avoids predictively spelling TOMORROW, offering instead TIMEPIECES, TINTYPE, TIPTOES, and TIMELESSNESS. No, the gesture that seems so alien to me, alien in just how natural it has become, is the maneuver of the sliding grip, a delicate oscillation between power and precision. Because despite being held by hands, iPhones and their competitors were never designed to be held. They are designed to be naked screens. Screens without hardware, ghost screens, guillotine-blade screens. They are only allowed to be held grudgingly, as a last resort, and we slowly fool ourselves into imagining that these sharp panes of ice feel good in the hand, let alone stable. But how do we hold something that isn’t supposed to be held, whose entire front is intended to be as smooth as possible? What would it be to grip a screen that’s only as thick as its image?
And so aside from learning how to type and swipe, we learn also that particular move of gripping glass while letting it slide. In bed, trying not to wake someone sleeping on our chest, we grope far out in the dark until we just barely feel that familiar smooth chunk, marked by feeling like nothing much at all. We slide it toward us with a minor flick of the middle finger, then pry it up between the thumb and forefinger, drawing the hand toward us with a gentle wrist’s whip, so that the phone is both held and turns. It rotates over a breakable fall, pinioned without screws by finger’s oils, and comes to rest in the palm. Then we look and see that it is still night and that a bot named “silkfeather_92” liked a photo we took and that in the dialectic between power grips and precision grips, there are no winners.
That grip, a variety of which is also used when we pinch-draw the phone from a pocket and throw-slide it into the hand, is not just a grip. It is also a gesture, which to me means that it doesn’t signify or supplement anything. Rather, it stands uneasily between language and action, speaking of the limits of the former to make itself heard and the refusal of the latter to stop trying. It is an intermediary that makes it possible to see mediation, the machines at work behind the experience of communication, the fingerprints marring the surface.
If you asked me to say to you a list of all the words I know, I’d probably have the experience of knowing that there are words I’ve read, written, and said but that I simply cannot call to mind for the list, that I don’t remember that I know. In order to explain this to you, though, I would have to use words, and, in so doing, likely stumble onto some of the words I’d forgotten.
Perhaps that’s how it is with our hands of glass. To draw a balance sheet of what we’ve lost and gained, what forms of touch have been displaced or enhanced or scrapped because we can’t keep our hands off ice screens, we could only do so through moving our hands, passing them over things, tilting slabs, seeing what happens, what registers, what remembers. But being gestures, they can’t say or do anything other than depict the contours of the systems within which they might communicate, amongst ourselves and between our devices. From what I’ve read, it seems that those who design and manufacture these things, those who mine rare earths and seep thorium into water supplies, those who drive workers to suicide – that is to say, we, we who are inseparable from these things – keep yearning for tougher and tougher glass, be it Gorilla or sapphire. The endpoint of that dream can only be a glass so hard that it could only be marked as ever having been touched by more of that glass itself. If that’s the case, at least I’ll finally know how to write TOMORROW on my phone.
[Note: I've been away from writing online for a while doing research for a couple projects. One is an experimental documentary film, out this fall – more on that to come. Another is connected to a book coming out this winter from Repeater Books called Shard Cinema, an archaeology of contemporary moving images.
While writing that, I’ve been struck by something that critics, film theorists, and people talking in the hallways of the Regal Crossgates Stadium 18 after Jupiter Ascending tend to claim (and bemoan). Namely, that the increasing prevalence of digitally composited, animated, and/or hybrid images means a flight away from the “real world” into the immaterial. Even when they like the results – shirtless Channing Tatum gravity boot-rollerblading across the polished surface of whatever exists, who wouldn’t? – and even when they get that “immaterial labor” still means real living people sitting for days on end texture mapping the buildings for Big Hero 6, the doxa goes that the results, and especially the spaces we see, have lost their ability to capture, index, portray, or think through what really exists. Because, simply enough, they aren’t filmed but composited. Because there aren’t real waves, just Boris FX Native Filter Suite crunching the numbers. Because the wind that shakes the barley comes from an algorithm.
I’d argue the exact opposite of this. Coming to grips with recent years through a history and practice of animation – one centered on the construction of moving images, rather than the recording of the physical, fleshly, and gusty – shows that recent movies, games, artist’s video works, shows, and everything in between are uniquely thick with the social history of capitalism, especially the persistent legacies and cartography of colonialism. Moreover, they have started to reflect on and shape themselves around that fact, bringing the means of their making and all its echoes into plain view, provided we know how to get a good line of sight through all the lens flare and softly falling particles.
As noted, the key concept I’ve found to help wind through all this is animation. So what I’ll share here over the next weeks will keep winding around that, beginning, below, with a specific instance in early film history that I’ll eventually circle back around to. One note/warning: tracking out this research won’t take the shape of cogent essays that tie themselves into neat bows by the end of each segment. More of a continually unraveling fabric divided into more readable chunks.]
A kinetoscope is an innocent looking piece of machinery
- “Reformed by a Picture,” from the inmate-written magazine of Sing Sing prison, 1901
As far as historians of early film can agree, cinema’s first special effect is used to depict its first execution – and its first death. It comes in an Edison production, The Execution of Mary, Queen of Scots, from 1895, early years for what wasn’t yet a coherent industry, just a nascent array of competing technologies and their accompanying entrepreneurs.
Both the film and its special effect are simple enough, as each functions to show just what the title promises. We will see an execution, a public decapitation specifically. That hadn’t actually happened to a flesh-and-blood British aristo since Simon Fraser (the 11th Lord Lovat, for those counting) in 1747, or on US soil since 1811, when more than 21 slaves were beheaded following a show trial (and more than 100 killed in the massacre beforehand), and their heads stuck on pikes as a warning to future insurgents, for their role in the German Coast Uprising in what later became Louisiana. However, not just capital punishment but beheading in particular remained widespread in attempts to quell revolt, in part because of its deep symbolic weight, transposing decapitation from arguably the major signifier of 17th and 18th-century regicide – the head of the head of state… – to the arsenal of colonial retribution. This continued straight through the formation of Atlantic power, even if, in the judgement of some of the colonial planters and bureaucrats themselves who ordered the acts, “such exercises in frightfulness proved of doubtful value.”
What that history poses starkly is something we know all too well: that any attempt to draw a separation between this violence and how it is represented forgets that it already is both image and act, drawing a portrait of how the world is to the powers marshaling it. To look back across centuries of efforts to police dissent before, during, and after it makes itself visible reveals not just these acts of killing, but also their display, serving as posthumous shaming, hypothetical warning, and, crucially, diversion for those who felt unimplicated. It treats the murder of slaves, rebels, and the poor as something for public consumption and as what would become, in the language of early cinema, an attraction, a self-same spectacle.
Coming to grips with the uncanny resonance between early cinema and the screen culture we live in now has to include not just the kitten video – see 1901’s Sick Kitten – but also media spectacles of execution, above all the “lynching film,” already present in Edison’s 1895 Frontier Scene and only amplifying from there well into the 1910s. The medium of film found an extant history of displaying the dead to remind the living what they should expect, and the coalescing industry of cinema wasted no time continuing this practice.
So in a film from the last years of the nineteenth century, a head will be severed from its neck before our eyes, although it will be as sanitized and distanced as could be: a white head, and a royal one no less, from another country, from another century. Like so many of the shorts from the first years of recording moving images, the film will do little beyond support this central attraction. It is a gag’s small vessel, a container shaped around the thing it aims to do. Which, in this case, means a cut, both technical and represented, centered smack-dab in the middle of its 13-second running time, bisecting the film’s duration as plainly as it splits the body that kneels in the frame’s center. The film hinges on this, its atrocious binary switch: head on, head off.
At first, though, Mary is not kneeling. She is standing when the film starts, facing to our left. To her right are two lines of men, arrayed to face her and, therefore, us. Except it is perhaps wrong to say us, because the film was meant for a Kinetoscope, which initially allowed only a single viewer at a time, one who must bend her head down to peep in, baring the back of the neck in echo of what is watched.
(This direct and directed physical parallel of watcher and watched persists.
Recent large-scale cinema has become obsessed with providing us images of astonished viewers as mirror portraits, as though to slowly goad us into matching their gape. As if well-aware that what we’re shown isn’t genuinely sublime, at least not in the way that its characters seem to be feeling it, just unprocessable in the sheer screen data and the scope of the labor behind it.)
The men stand up straight, though. With the exception of the executioner himself, they seem continually unsure what to do, what they are there for anyway. (To watch, evidently, to remind us that this is worth watching.) They spend most of the film raising and lowering weapons in a gestural sympathy with the axe, in contradistinction to how the Kinetoscope viewer must adopt the victim’s curved spine. None of these men seems more superfluous than the second executioner, who is, we suppose, a back-up or reserve in case of emergency, a pinch killer. He does even less than the others and never raises a hand or a weapon. Once the deed is done, he bends down and stares at the severed neck like it had something else to say or, like a Kinetoscope, had a tiny picture galloping along inside it, an instant replay.
The whole layout is geometric, less theater than layers of planes, and so Mary, who is played by a man named Robert Thomae, is also a shape in a row with others: a stump and a woman behind her who knots the blindfold. On my third viewing, I realize that the blindfold is already on, even if not tied, when the film starts. Cinema’s first execution is not seen by the executed. That’s what we are there for.
Mary kneels. The axe rises all the way up and behind the back of the executioner, before it falls, slowly, almost drowsily, to cut the head from the body in one fell swoop. The head rolls away, rolls back a bit. No matter how many times I’ve seen this, I still shudder a little each time, no matter that the effect is “crude” – or, more likely, exactly because of that.
Méliès the GIF
As for the effect itself, that stop trick/splice substitution – in this case, stopping the camera and swapping out a beheadable dummy for the live actor – comes to form the star technique of more famous films from Georges Méliès, Cecil Hepworth, and others in the years immediately following. The operator stops the camera mid-action, anyone in the shot freezes, and some element is changed, moved, or removed before the camera starts again, picking up from the next frame. When watched at speed, an object will disappear, transform, or appear without a transition, which is evidently what we often mean by “like magic,” as if gradualism promises realism.
Only a few decades later, though, special effects will start to get criticized when their transitions are too sharp, when technique can’t smooth its rough patches and the gaps between difference are too wide.
No wonder that Terminator 2’s T-1000 had to move like wet chrome, because liquid makes a promise of change without frames or leaps. That liquid is both the ultimate slapstick material, already a gag – Arnold punching the head, unaware that it can just morph into a hand – and never very funny, without enough stubbornness to build a good joke around. Just an endlessly self-repairing and humorless cop. Except, of course, for when it gets frozen and, in trying to walk, breaks itself again and again, like a glass horse that doesn’t know its own strength.
The Haunted Castle
Méliès himself claims that he discovered the technique accidentally. Filming a city street, his camera jammed a moment before starting again. When he prints and projects the film, “having joined the break, I suddenly saw an omnibus changed into a hearse and men into women.” He first uses it intentionally in The Vanishing Lady (1896), whose title says it all, and by the winter of that same year, in The Haunted Castle. There, it turns a dangling bat into a battish man, complete with a small puff of smoke, thereby inaugurating a cinematic line that found itself, exactly one century later, watching Quentin Tarantino arrange to get himself bled to death by Salma Hayek as a vampire stripper.
The decapitation and the vampiric metamorphosis are enabled by the same technique, the stop trick, but they work to opposite ends. In Execution, it shows two distinct bodies – one human, one inhuman; one alive, one neither alive nor dead – as identical, joining them together within a feigned unity of time through a split as invisible as possible. The effect aims to shroud itself in order to direct attention to bigger tricks: to make a third body, that of the dead, or, in films to come, to advance a story by piecing together camera positions and locations. In The Haunted Castle, though, it replaces one body (a bat) with what is blatantly a different one (a man). It doesn’t hide its cut but brandishes it, under the sign of the supernatural and the joy of the trick itself.
What joins them is a sense of time both seen in the films and seething behind them. That sense derives from one of the most basic properties of film, and later of analog and digital video: the time of recording does not have to be identical to the time of watching. There can be a gap, a crack or yawning.
The most extreme example is stop-motion animation, including hand-drawn cartoons, where there is no “natural” motion whatsoever. Any movement we see has to be built out of a set of stills, not in the way that all film/mechanical cameras divide motion into discrete photos but in the more extreme sense of laboriously constructing, over hours, days, and weeks, a series of minute differences that aim to vanish into a unified sweep of seconds. In a sense, this split belongs to an extremely old division, one drawn by Aristotle in Physics: between “violent motion” (things “whose motion is violent and unnatural are moved by something, and something other than themselves”) and “natural motion” (those that “derive their motion from themselves”). Animation, in this schema, would be the pinnacle of violent motion, even more than films in general, which require the projector – “something other than themselves” – to allow us to process an illusion of motion. With animation, every single gesture is pre-loaded with other gestures, a nest of movements and exertion crystallized into it. A week of work by many gets sunk into the Coyote falling to his non-death again, hole-punching a silhouette through each smog cloud as he goes. No wonder cartoons get criticized as violent.
The same is true of the effect at work in Execution. To make the fall of an axe appear seamless in time, there had to be an entire other set of movements and efforts. Thomae stands up and hustles out of frame, pulling the blindfold off first. Everyone else holds still as possible, the axe wavering a bit, and uncredited persons drag a dummy into frame. I can only imagine that people laughed, like whenever we run to set ourselves up as a tableau in front of a self-timer, and they were surely told to be serious, this being an execution and all, and they laughed harder.
The Fiat Ducato being assembled at the SevelSud factory in Val Di Sangro
At the broadest, we could say that every cut, in camera or in post-production, describes a version of this same split in time: between two scenes/locations within the film, sure, but also between a time of mechanical advance and a time of everything else that makes that advance go and provides its materials. It’s like what Romano Alquati heard in his interviews with machinists at the Fiat factories. Automation for them hardly meant deskilling or relaxation but rather a constant and anxious effort to tweak, repair, supplement, and route around automation in order for the allegedly “automatic” work to look like it actually worked – that is, to become an image of what was supposed to need none of that.
It Came From Beneath the Sea, 1955
This technical capacity (to stop the camera, and so to have the time of recording and time of watching differ) is an extremely obvious one. So much so that it wouldn’t be worth dwelling on, were it not for the fact that so many of the following century’s moving images, and the efforts to talk and write about them, come to be structured by the idea that, “naturally” (albeit via mechanical processes), those two times – recording and viewing – should be identical. That an attempt to capture a “living” temporal succession is the special province, if not moral duty, of mechanically-recorded motion, and that any deviations from this are therefore special effects. In short, that despite a few allowable creative dilations (for things like dream sequences, or vampires, or earthquakes, or, as I prefer, some combination of the three), things should come before our eyes as they came before a camera, with a minimum of distractions or Vaseline between each.
But to call something a special effect is no more a neutral designation than to call it natural. It draws a line in the sand, traces it over and over again until it becomes a trench, a fact of the historical landscape that our thinking comes to shape around. We can see this with stereoscopic (3D) film, which gets recurrently posed as something extra, a supplement or cheap trick that demands a literally intervening layer of red/blue or polarized plastic. But the stereoscopic is not a technical exception or latecomer.
It was being experimented with and developed from the 1890s on, from before the Lumière workers left the factory (and kept sneaking glances up at the camera) and the train pulled into La Ciotat. It is a historical exception, one that appears out of joint only insofar as it measures the inseparability of moving images from the movement of capital. Because of that tight bond, 3D has only functioned as an occasional spur to help goad viewers back to media whose centrality is uncertain and whose profitability is hemorrhaging.
Similarly, to call the beheading in Execution a “special effect” is only to make it retroactively so, to align it with a certain tendency in screen culture that came to dominate economically. That tendency, beginning in the first decades of the twentieth century, is for commercially-produced moving images to become increasingly, if not exclusively, committed to showing bodies do things with and to each other at velocities we learn to think of as human. Yet what falls under that baggy category gets equally restricted, especially in terms of a dual space.
First, as screen space that’s overwhelmingly treated as a coherent volume, where dissolves, overlays, and anything else that might remind us how screens are not pools but textures will be restricted to title sequences, witches’ spells, scenes of nervous excitation, or films talked about as art.
Second, as a space and time of viewing that encourages paying attention to the film as if it were a text, specifically, a plotted work in line with the bourgeois novel. Fitting, given the increasing organization and marketing of films, from the early 1910s on, around the principle of a story, a plot by which an individual’s progress can be charted rather than a field of collisions and affects. That effort, in line with the attempt to create tiered class systems of viewing capable of securing spaces where middle class viewers didn’t have to mix and mingle with the nickelodeon mobs, required real concrete shifts, like raising ticket prices, stopping the booze, and eventually assigning specific start times for a film (breaking the film off from the flow of images to declare it a unitary, complete thing). Thankfully, none of this ever worked like it was supposed to.
But one of the side effects was to declare any disruption of that naturalized rate of motion, in which what you see feels like what the camera saw, to be a “special effect”: as something that’s endlessly present and always at work behind the screen, but that becomes visible only as an exception. What conveniently got hidden along with this sense of time was the sense not just of the human labor embedded in it but of the inextricability of that labor from the mechanisms and systems it used: that is, of the violence of its motion. It’s been cyborg cinema from the start, from long before green screens and motion capture and fluid effects, before a robotic cop gets frozen, shot, and shattered into a million little pieces without anyone having to yell cut.
In the midst of the nearly universal denunciation of the Charlie Hebdo killing, an ethics and worldview so generic as to approach being simply “The West”* sought to bolster its already secure position. The basic mechanism was plain as can be: appeal to long and cherry-picked historical precedent (Voltaire, the Rights of Man, constitutions French, American, and otherwise, etc), complete with the usual talk of pens and swords, albeit weirder than usual, with the former made comically huge and carried aloft as though literally the latter.
(Note: this, and what follows, isn’t to speculate on the reasons that 4 million people came out onto the streets, in ways that obviously can’t be reduced to them being “duped” or “manipulated.” That’s a question I won’t even begin to approach, being neither there on those days nor living in France generally. This long text on libcom, just translated, tackles that at length, especially in terms of the notion of the citizen. My concern, instead, is how a certain world view, one pushed both by states and in the media, has attempted to frame and give image to a situation that far exceeds it and, by so doing, reinforce its position.)
It’s unsurprising, for instance, that Albert Uderzo, creator of Asterix, came out of retirement to draw his support for Charlie: what emblem more fitting for the entire discourse than a 20th-century cartoon of an ancient plucky Gallic warrior fighting off the Romans, i.e. a martial empire that, in Asterix, doesn’t know how to take or recognize a joke. Yet in Uderzo’s new drawing, the shoes from which the enemy has been punched free are not Roman sandals but babouches – heel-less slippers unmistakably coded in France as North African. The symbolism is so overt that it barely counts as symbolic, just the direct expression of a sneering imperial whimsy. Asterix, defender of old Gaul, comes back from the mythical past to expel the clear and present threat to French liberté, égalité, and fraternité: the descendants of those whose enslavement and colonization so profited France, from before it declared the rights of man in 1789 to when it ratified the Constitution of the Fifth Republic in 1958, that it literally forms the basis of its wealth and geopolitical clout today.
France, the plucky historical underdog…
… Romans as the faceless, humorless enemy, with shields as veils
Even aside from giant foam pencils getting waved about like an NHL game costumed by Max Ernst, the entire affair has bordered on the surreal, with all the fitting marks of a late-stage and freaked-out colonial power. For behind all that talk of unity lies a manic and fractious instability, as the state and its mouthpieces, official and otherwise, swerve between pulling punches one moment (“it was all just satire, we swear, they mock all religions equally, they only drew Taubira as a monkey because that’s what Front National says”) and lashing out the next (up to 100 arrests and counting, for violations of free speech by those who sympathized with the attack, or with the anger behind it). All, of course, underwritten by continual recourse to levels of self-caricature (about that transhistorical French spirit) that one expects more from Fox, and by a seriously surging and actually frightening brand of white ethnonationalism that would make Fox proud.
But just prior to this, the words free speech were getting thrown around with the same frequency and conveniently sloppy ease for another situation dominating the American news cycle, that of The Interview. The film itself is, of course, just the n-th iteration of Apatow-Dugan-Stoller bromantic banality. It’s replete with spite for any humans other than its chosen few, which primarily means straight white guys so consumed with a deep and unabiding horror of any real existing queerness that they set up a blind of fake transgression – oh God, what would it mean for two men to care deeply about each other?, oh God, we do care, we do! as if that wasn’t the plot-line of a good half of all American movie and TV production – while quite literally cramming missiles up their asses. And so it spits in the cake of others and eats it too. The only salient difference from the previous offerings of this ilk is that this film is actually, rather than just implicitly, supported by the State Department.
What happened needs no rehashing, other than to note just how sacrosanct and widespread was the idea that the film’s embattled release had everything to do with freedom of speech, that this freedom is a right at the heart of American experience, and that it must be protected – which essentially means helping cover the losses of an enormous multinational by paying to see the film. An editorial from The Washington Post sums up the basic position: “Freedom of speech and freedom of expression are hallmarks of American life, and we must jealously guard these values from both internal and external threats.” But it wasn’t limited to conservative rags, as even The Onion’s AV Club, normally at least semi-intelligent, hailed it as a “triumph of free speech” – even if it noted that this doesn’t make its satire particularly good.
What went missing through all of this, though, was any recognition of a full-scale category error at work, itself a product of a slow historical erasure that renders the phrase free speech at best null and void, at worst a tool of those from whom speech is supposed to be protected. This error can be seen in a simple fact. Thanks in part to the Sony hacks, we know just how much the entire apparatus of The Interview cost. A $44 million production budget (including $8.4 million for Rogen and $6.5 million for Franco), plus a $35 million domestic marketing budget, plus a $12 million foreign marketing budget. In short, this alleged triumph of subversive expression, the plucky underdog uncowed by pressure foreign and domestic, cost $91 million to produce, with the hope, of course, that it would earn global returns far beyond this. (Indeed, its stunted release is by no means a threat to this: it’s already hit $40 million in online sales and streaming, making it Sony’s biggest online release by more than 400%, as the #2, Snowpiercer, came in at $8.2 million. Besides, given how shitty the film is, the scandal was the best thing that could have happened to it, expanding its nervous titter into a public cause.)
To get a sense of this scale, $91 million dollars also happens to be the price of:
One Lockheed Martin F-35 Lightning II fighter jet (with a million or three to spare), provided that production hits expected capacity in 2018.
The Chelsea Football Club, at least when Roman Abramovich bought the team in 2003.
One of Lakshmi Mittal’s mansions on Billionaires’ Row in London, where Abramovich also lives.
The amount that Suntech Power Chairman/CEO Dr. Zhengrong Shi made from the $290 million market value leap in Suntech Power stocks after the company announced its plan to invest $10 million to build its first solar factory in the US.
Secil (Companhia de Cimentos do Lobito) Cement company’s investment in a cement and clinker factory constructed in Angola’s Benguela province, with a production capacity of 600,000 tons of cement per year.
The seven-year contract that kept Mike Piazza at the Mets.
The amount that Chicago’s Northern Builders Inc. made by selling six suburban industrial buildings comprising a 1.4 million-square-foot portfolio to Hillwood Development Co.
The 2009 auction cost of Titian’s Diana and Actaeon.
The sale to Kimco Realty Corp of the Crossroads Plaza shopping center in Cary, NC, which is 670,000-square-feet and includes more than 60 restaurants and stores, including Best Buy, Dick’s Sporting Goods, Toys ‘R’ Us, Old Navy, and Marshalls.
The acquisition cost of Enfield-based New England Bancshares Inc. by United Bank a year ago.
$24.9 million more than the 2015 operating budget ($66.1 million) of the city where I live along with 50,000 other people, a food desert whose planners dream of gentrification and where stretched-thin resources mean it takes more than an hour to commute by bus to a city only 14 minutes away by car.
The point is that things that cost $91 million are not speech acts, free or otherwise. They are mergers and contracts and weapons, the rarest of old commodities and the new fortresses which hold them. They are facts of industry and territory, the Mets and the metropolis. They can only be, because that level of coordination and extraction makes them the property and extension of states, corporations, or individuals with so much amassed wealth that they may as well be states or corporations, even if they prefer to call themselves collectors or philanthropists or James Franco.
$91 million things are ventures and investments, acts of war against what could never be worth that much. They do not need to be protected. They are what we need to protect ourselves against, future-shaping dreadnoughts of force that structurally cannot opine or express themselves, and especially not against injustice. They just do. And no matter what they say, or who does what to whom with what kind of missile in what setting, the only opinion they actually hold and send out “through any media and regardless of frontiers” is in the name of the disastrous continuity of what already is and the further extension of that across and into whatever small corners of existence remain at odds with it.
To imagine that The Interview has anything to do with free speech wrongly imagines that it has anything to add to the world other than a recitation of the current state of American empire. It’s the Pledge of Allegiance wreathed in cut-rate dick and weed jokes. Only those who consider a factory or a football club or a shopping mall or a fighter jet to be an unalienable right – in short, only those who see property as an expression of individuals, rather than the historically-produced category that underwrites such a notion of individuals – can hold the position that it is free speech. It is, not coincidentally, the default position of a long moment in which the very conditions of global wealth that support it render it ever more unstable and volatile, just as the idea of national unity channels a nostalgic call for a socially-secure and homogeneous national composition that simply doesn’t exist.
In this regard, the link between Charlie Hebdo and The Interview has less to do than it seems with satire, the limits of humor, and whether one should be killed for a cartoon. (No, but there are so many other things for which people should not be killed but are, like being black in America, that it adds little specificity. Moreover, the category of offense/being offended misses the point so often, given that “equal opportunity offenders” doesn’t mean as much when some of those targets of satire – say, North Africans living in France – are recurrently targeted by police in much more literal ways.) Instead, they’re linked by something else that goes missing amongst all the talk of free speech, a simpler double question: protection from whom or what? protection by whom or what? There is a tremendous difference between, for instance, demanding the state protect your right to vilify those whom that state actually kills overseas and targets at home, and demanding that the state not beat, kill, or incarcerate those who express opinions hostile to the predominant situation. The former is not actually a demand, just an exhortation for things to remain constant, if not to return to “how they were” for a certain segment of the population. And the latter is not a demand that can be answered by the state, because it is a challenge to it, one that people work tirelessly to raise and raise again.
Nearly four decades ago, Serge Daney wrote that “All films are political films,” by which he meant that there’s just no such thing as neutrality. Everything that is made, whether on the cheap with stolen equipment and images or for literally hundreds of millions of dollars, is partisan. The question is partisan for what, in defense of which grasp of the world, on the shoulders or at the throat of whom. It seems to me that any real engagement with cultural production of any variety, an engagement able to be both rigorous and reckless, subtle and furious, deserves to start with that question, and with taking jokes too seriously and stone-faced solemnity as the farce that it is. Where it goes from there strays into questions of method, which open conversations worth having again and again far outside academic journals, far beyond paywalls and gallery walls, and far outside some delimited terrain of “culture” picked over by ideology critique. It likely means working to see something like the cinema, for instance, not as a collection of films but as an enormous and contentious circuit in which the films, and especially their plots and how they “code politically,” are only one tiny moment.
But those are longer and thornier questions, ones that should be conversations rather than a monologue. What seems clear enough, looking back over the last months, is something that has been said again and again over the last century: that to fully recognize the contours and history and projects of things that cost $91 million will require leaving behind, together, a form of criticism where we comment on these juggernauts as if they were speech and where we act as if they were the expressions of individuals who might listen when we tell them that Avatar is imperialist bullshit. Or, equally, where we think-piece, where we pretend that they smoothly and conveniently translate a hidden logic of the period into narratively-coherent form.
In place of all that, we need forms of critique that are actually inseparable from our efforts to develop communities of care and struggle, where the point of criticism isn’t the thing to be decoded (or to be rewarded for doing so) but the development of actually popular cultures that will never be worth $91 million and the amplification of what otherwise goes unheard. I think we already have these, if not as forms then as instances and moments, but they often go unrecognized because they just don’t align with what we have been taught – and perhaps what we teach, when we’re lazy – that criticism means. These kinds of critique alone might be able to help withstand the present, which means, in no small part, destroying the notion that money is an opinion to be expressed freely.