
Beyond MLK


“Basically your ministers are not people who go in for decisions on the part of people, I don’t know whether you realize it or not…they had been looked upon as saviors.” – Ella Baker

“King was assigned to us by the white power structure, and we took him.” – John Alfred Williams

The legend of Martin Luther King Jr. looms larger than usual this winter, even though it’s every January that we celebrate his birthday. One reason, obviously, is that there’s a new Hollywood film out about him, which, while snubbed by the Oscars, has been embraced at the White House. The other reason is that the wave of black resistance sweeping the country today is often characterized as “a new civil rights movement,” and King—we are told—was the supreme leader of the civil rights movement.

However unfair the Oscar snub (whatever its faults, the film is a hell of a lot better, both historically and cinematically, than American Sniper), the most interesting argument so far about Ava DuVernay’s Selma remains the controversy over the relationship between King and President Lyndon Baines Johnson. Former LBJ advisor Joseph Califano has publicly argued that King and Johnson were not at odds during the Selma campaign as the movie depicts, but that the African-American leader followed Johnson’s encouragement to nonviolently dramatize the obstacles to voting that blacks faced in the South. The filmmaker shot back that this was “offensive to SNCC, SCLC and black citizens who made it so.” (The acronyms refer to the civil rights organizations the Student Nonviolent Coordinating Committee and the Southern Christian Leadership Conference, respectively.) But Califano’s assertion has gained traction because there’s more than a grain of truth in it.

 

Otherwise Movements


Calls for “strategy” in protest are used to marshal in the dominant framework when we seek otherwise.

There was a time when, as a child, I thought the magic of color television was more than just technology. During those confusing young years when I carried around a blanket while sucking my right forefinger—the light blue one; my nickname was Linus to some people—I thought the world at one point was grayscale like television itself, that color needed invention not just for television but for the world. I thought it was only recently that the world had become “live and in Technicolor.” It made me incredibly sad to think of my parents having to exist in a world of grayscale. This thought about grayscale and color was made possible, I now recall, because The Wizard of Oz always confused me as a child. And it confused me because of its teetering between grayscale and color, its refusal to value either as more urgent, more desirable.

Over the rainbow was where Dorothy encountered poppies that made her sleepy, ruby red shoes that she only used to go back to Kansas, and witches both good and wicked. But it was in grayscale that she found family, and it was to grayscale that she returned, to family and friends loved and cherished. Brilliance or monochromatism, neither was more celebrated than the other. The Wizard of Oz never decided for viewers which world was real, which fake, which preferred. And it is in the space of such a relay between color and grayscale that a critique of the normative world is made possible. That is, it is in the space between grayscale and color that we come to recognize as mere assumption the idea that progress has been made across historical time. The space between grayscale and color offers us a way to reconsider where we are in terms of the fights for justice, the fights for equity.

 

No Spin Zone



Telescopes are now discovering Earth-size planets throughout the galaxy, but an astronomer at Berkeley explains why that doesn’t mean they are habitable.

There’s never been a more exciting time to be an astronomer. Our knowledge of planets around other stars is growing at a breathtaking pace. Just 20 years ago, we knew of only the nine familiar planets in our own solar system. We lacked a single example of a planet orbiting a star other than the sun. Since that time, we have discovered nearly 5,000 extrasolar planets.

Most of these planets were revealed by NASA’s Kepler Space Telescope in the past five years. Kepler is the most prolific planet-finding machine ever built. It can detect small, periodic dips in stellar brightness when a planet transits the face of its host star. We learn the planet’s size from the amount of blocked starlight, and we can convert the time between the dimmings into an orbital distance. (Year-long orbits correspond to Earth-like orbital distances.) We’ve found that most stars have at least one planet—planets are the rule, not the exception. Most planets are small, Earth-size or slightly larger. One in five sun-like stars has an Earth-size planet that enjoys a similar amount of starlight as the Earth, amounting to some 40 billion such planets in our own galaxy alone.
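
The two relations just described are compact enough to sketch in code. Here is a rough Python illustration (it assumes a sun-like host star; the constants and the sample transit are mine, not Kepler pipeline code):

```python
import math

R_SUN_KM = 696_340.0  # radius of the sun, in kilometers

def planet_radius_km(transit_depth, stellar_radius_km=R_SUN_KM):
    """Fractional dip in brightness -> planet radius.
    The blocked light scales with the ratio of the disk areas,
    so depth = (R_planet / R_star) ** 2."""
    return stellar_radius_km * math.sqrt(transit_depth)

def orbital_distance_au(period_years):
    """Kepler's third law for a one-solar-mass star:
    a**3 = P**2, with a in AU and P in years."""
    return period_years ** (2.0 / 3.0)

# An Earth-like transit: a dip of roughly 0.008%, repeating once a year.
depth = (6_371.0 / R_SUN_KM) ** 2
print(f"Planet radius: {planet_radius_km(depth):,.0f} km")     # ~6,371 km
print(f"Orbital distance: {orbital_distance_au(1.0):.2f} AU")  # 1.00 AU
```

A dip of less than one part in ten thousand, recurring once a year, is the entire signal an Earth analog leaves, which gives some sense of the precision at which Kepler works.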

A few of my favorites are Kepler 62e and Kepler 186f (names perhaps only an astronomer could love). They are Earth-size planets orbiting their suns at roughly the same distance as the Earth. Their discoveries answer ancient questions. For thousands of years, humans have looked up at the night sky and wondered if, out there, there could be other planets like Earth. The notion that there might be a “plurality of worlds” was considered and debated by some of the giants of ancient Greek philosophy: Anaximander, Democritus, Plato, and Aristotle. As an astronomer, it’s a privilege to work on a question that Aristotle would have appreciated. Now, for the first time in human history, we know for certain that there are other Earth-size planets on Earth-like orbits.

 

Past Perfecting


Retouching is not merely a servant to photography but an artistic medium in its own right

Something remarkable has happened to our vision over the past 150 years, yet we can barely see it: Photo manipulation (more commonly known as retouching) has emerged as a form of visual communication in itself. Generally thought to be the technological servant of photography, retouching, in the broadest sense, also calls into question the veracity of photography, extending the realm of representation and threatening in some cases to supersede photography altogether, as an art form all its own.

“The painter constructs, the photographer discloses,” Susan Sontag wrote in On Photography, but perhaps the two mediums need not be pitted against each other in all cases. More than ever, photographers can, like painters, choose to reject representation, and in clever ways with the aid of new tools. Historically, 19th-century photo processes were part of an evolution born of the painter’s hand — only now the hand was painting with light. In fact, Henry Peach Robinson, well known for his photo manipulations, began as a painter. His unique combination printing, which blended multiple negatives together, created a new way of altering the image and expanding photography’s creative potential.

The culture of retouching emerges from the contemporary camera’s unprecedented ability to capture minutiae. A portrait will inevitably reveal all the little hairs that grow on an upper lip, each individual eyelash, and a chipped tooth in the lower corner of a smile. In real life, these characteristics may not be perceivable to the naked eye, because the dynamic focus of an experience overshadows the importance of any of those details, which we now have the capacity to notice in a still image.

The brain’s short-term memory is extremely efficient, using its minimal bandwidth to absorb the information our visual cortex has learned is important. Viewing a static high-resolution digital photograph is considerably different: the brain is not constrained by an ever-changing environment and is freed to take in and process visual content with a different type of criticality. As visual technology surpasses vision, it reveals more clearly how our sight only approximates reality, and how the high-resolution image may offer a “truer” version of reality — a hyperreality.

The media’s interest in photo manipulation has focused on how it fails, fools, and, of course, eludes. There is no lack of controversy surrounding, for example, the degree to which the human figure, especially the female form, is altered and how unspoken constructs are thereby reinforced in our daily visual diets. Photography, having flourished under capitalism, has become constrained by market demands, and as a result, the slippery, somatic landscape is far from resolved.

Photo manipulation is far from new, as Faking It: Manipulated Photography Before Photoshop, the aptly titled 2012 exhibition at The Metropolitan Museum of Art curated by Mia Fineman, demonstrated. The show shed light on a wide range of darkroom trickery used before newer digital methods, showing that the ingenuity to falsify imagery did not stem merely from the convenience of digital fakery. Rather, photography and manipulation have developed hand in hand. The exhibition suggests an inherent desire to manipulate images, which served as a guidepost for the development of visual technology to come.

In response, Ken Johnson’s New York Times review of Faking It asked, “If photography cannot capture truth, what is it good for?” The question reflects the prevailing angst about photo manipulation in contemporary culture, hewing to a traditional view that the perceived value of a photograph lies in its integrity as evidence. As the politicized unmasking of manipulated photos in recent decades has rendered tacit belief in the truth of images problematic, the collective “we” becomes simultaneously skeptical and complacent about how it views photography. A postmodern outlook encourages us to question the imagery we see on a daily basis, yet does little to breed the capability for truly understanding it.

But augmenting reality, the show implies, is as valid and universal an artistic aim as capturing it. The “hidden” processes of manipulation don’t merely serve photography or help it distort truth, but build on photography as a pretext for independent aims. The show suggests how a methodology can blossom from practical implementation to creative expression, and that the dialogue surrounding this type of imagery should adjust in tandem. Given that technology expands the nuance of our visual language faster than audiences and even artists can keep up with — what Mark Hofer and Kathleen Owings Swan call “cultural lag” — this gap is precisely where the conversation about what sort of practices can constitute visual art should begin. Some artists seize upon manipulation to measure this gap, to subvert photographic expectations, if not exceed them.

To this end, Pieter Hugo makes direct connections between tool and subject. In There’s a Place in Hell for Me and My Friends, he adjusts the individual color channels of each image to emphasize the complexions of his subjects. As a result, they appear heavily marred by sun and scars, becoming the antithesis of more stereotypical, “airbrushed” magazine images. His images become signifiers for a canon of beauty based on what is absent, not merely what is captured and enhanced.
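
For a sense of what a per-channel adjustment involves, here is a minimal Python sketch (the gain values and file names are placeholders, and this is not a reconstruction of Hugo’s actual process):

```python
from PIL import Image

def shift_channels(path, r_gain=1.25, g_gain=0.95, b_gain=0.85):
    """Scale each RGB channel independently; boosting red relative
    to blue pushes skin tones and blemishes to the foreground."""
    img = Image.open(path).convert("RGB")
    r, g, b = img.split()
    r = r.point(lambda v: min(255, int(v * r_gain)))
    g = g.point(lambda v: min(255, int(v * g_gain)))
    b = b.point(lambda v: min(255, int(v * b_gain)))
    return Image.merge("RGB", (r, g, b))

shift_channels("portrait.jpg").save("portrait_shifted.jpg")
```

The operation itself is trivial; what matters, as with Hugo, is the direction in which it is pushed.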

Similarly, Asger Carlsen, in his series Hester, begins with simply lit portraits and, with the aid of computer programs, sculpts new figures by stretching, distorting, and multiplying patches of skin. Their seemingly evidential frankness captures a subculture of distorted humanoids who have little need for ordinary anatomy. Limbs sprout and bulge impolitely into existence in the artist’s studio. For the Libertin DUNE No. 7 project, models in various degrees of undress balance between the tension of merging phalanges and the allure of multiple pairs of painted lips, slyly selling viewers something sexy and sordid. His work highlights an underlying fantasy of perfection that society is not quite ready to part with. Our expectations of perfection—the perfect photo, the perfect body, the encouraged fantasy—are confronted by a demand to judge the methods of bringing that perfection about.

There is a logic to a photograph’s burden to be evidential, which carries over to constrain the more fantastical photography of, say, Annie Leibovitz, whose highly constructed tableaus still adhere to some loose definition of physics. She creates with a grandeur in mind, implying that what she depicts could happen in the real world, even if we recognize that it didn’t. This work isn’t truly provocative, in terms of transcending representational expectations. It serves merely as a testament to a team of skillful technicians who are prepared to give us exactly what we want.

If there is to be a revival of the photograph as an artistic object, and not just as an evidential or commercial object, it will come through work that pushes the form and questions the elemental yet integral components of image making. We still want to believe photographs show us something about the world, the flattening of an environment into a two-dimensional plane that conveys to our eyes a scene that somewhere must have existed. But photographic techniques have always rendered photos irreducibly subjective. Their art lies elsewhere.

Acknowledging manipulation as art emphasizes how we collectively collude in the verisimilitude of images. What makes an image true is not the fidelity of its reference to a verifiable outside reality but its reference to collective ideas of the real. This truth is no less objective for existing only in what we share, in the images we work together, as a society, to sustain.

 

The Data Sublime


The sublime unknowability of Big Data lets us fall in love with our own domination.

I have a memory from childhood, a happy memory — one of complete trust and comfort. It’s dark, and I’m kneeling in the tiny floor area of the back seat of a car, resting my head on the seat. I’m perhaps six years old. I look upward to the window, through which I can see streetlights and buildings rushing by in a foreign town whose name and location I’m completely unaware of. In the front seats sit my parents, and in front of them, the warm yellow and red glow of the dashboard, with my dad at the steering wheel.

Contrary to the sentiment of so many ads and products, this memory reminds me that dependence can be a source of deep, almost visceral pleasure: to know nothing of where one is going, to have no responsibility for how one gets there or the risks involved. I must have knelt on the floor of the car backward to further increase that feeling of powerlessness as I stared up at the passing lights.

The same feeling came back to me when my Apple laptop casually reported that it had some updating to do. If I accepted the lengthy terms and conditions, it would take care of everything else. To my surprise, this produced a familiar, almost visceral pleasure. The imbalance of responsibility, the comfortable presumption of trust, took me back to those late-night family-car journeys. I was traveling blind, but someone qualified was at the controls: This was a few years ago, and Apple’s then-chummy, soft-focus anti-Microsoft brand was enough to trigger that infantile trust. (In the age of iCloud, however, it would probably be closer to the sensation of hitching a lift with a driver who’s suddenly slurring his words.)

That innocent experience of the software upgrade — the relinquishing of control to something one does not understand or want to understand, consenting to a back-seat ride on sheer faith — is now a normal part of being a smartphone user, so normal that we scarcely notice it any longer. It is the sort of asymmetrical expertise one typically associates with a visit to the doctor — and it’s no surprise that apps hope to mediate that relationship as well.

How did we come to believe the phone knows best? When cultural and economic historians look back on the early 21st century, they will be faced with the riddle of how, in little more than a decade, vast populations came to accept so much quantification and surveillance with so little overt coercion or economic reward. The consequences of this, from the Edward Snowden revelations to the transformation of urban governance, are plain, yet the cultural and psychic preconditions remain something of a mystery. What is going on when people hand over their thoughts, selves, sentiments, and bodies to a data grid that is incomprehensible to them?

 

***

 

The liberal philosophical tradition explains this sort of surrender in terms of conscious and deliberate trade-offs. Our autonomy is a piece of personal property that we can exchange for various guarantees. We accept personal “costs” for certain political or economic “benefits.” For Thomas Hobbes, relinquishing the personal use of force and granting the state a monopoly on violence is a prerequisite to any legal rights at all: “Freedom” is traded for “security.” In more utilitarian traditions, autonomy is traded for some form of economic benefit, be it pleasure, money, or satisfaction. What both accounts share is the presumption that no set of power relations could persist if individuals could not reasonably consent to them.

Does that fit with the quantified, mass-surveilled society? It works fine as a post-hoc justification: “Yes,” the liberal will argue, “people sacrifice some autonomy, some privacy — but they only do so because they value convenience, efficiency, pleasure, or security even more highly.” This suggests, as per rational-choice theory, that social media and smart technologies, like the Google Now “dashboard” that constantly feeds the user the fastest travel routes and relevant weather in real time, are simply driving cost savings into everyday life, cutting out time-consuming processes and delivering outcomes more efficiently, much as e-government contractors once promised to do for the state. Dating apps, such as Tinder, pride themselves on allowing people to connect to those who are nearest and most desirable and to block out everyone else.

Leaving aside the unattractiveness of this as a vision of friendship, romance, or society, there are several other problems with it. First, it’s not clear that a utilitarian explanation works even on its own limited terms to justify our surrender to technology. It does not help people do what they want: Today, people hunt desperately for ways of escaping the grid of interactivity, precisely so as to get stuff done. Apps such as Freedom (which blocks all internet connectivity from a laptop) and Anti-Social (which blocks social media specifically) are sold as productivity-enhancing. The rise of “mindfulness,” “digital detox,” and sleep gurus in the contemporary business world testifies to this. Preserving human capital in full working order is something that now involves carefully managed forms of rest and meditation, away from the flickering of data.

Second, the assumption that, if individuals do something uncoerced, it must have been worth doing rests on a tightly circular argument: it assumes that the autonomous, calculating self precedes and transcends whatever social situation it finds itself in. Such a strong theory of the self is scarcely tenable in the context for which it was invented, namely, the market. The mere existence of advertising demonstrates that few businesses are prepared to rely on mathematical forces of supply and demand to determine how many of their goods are consumed. Outside the market realm, its descriptive power falls to pieces entirely, especially given “smart” environments designed to pre-empt decision-making.

The theory of the rational-calculating self has been under quiet but persistent attack within the field of economics since the 1970s, resulting in the development of behavioral economics and neuroeconomics. Rather than postulate that humans never make mistakes about what is in their best interest, these new fields use laboratory experiments, field experiments, and brain scanners to investigate exactly how good humans are at pursuing their self-interest (as economists define it, anyway). They have become a small industry for producing explanations of why we really behave as we do and what our brains are really doing behind our backs.

From a cultural perspective, behavioral economics and neuroeconomics are less interesting for their truth value (which, after all, would have surprised few behavioral psychologists of the past century) than for their public and political reception. The fields have been met with predictable gripes from libertarians, who argue that the critique of individual rationality is an implicit sanction for the nanny state to act on our behalf. Nonetheless, celebrity behaviorists such as Robert Cialdini and Richard Thaler have found an enthusiastic audience, not only among marketers, managers, and policymakers who are professionally tasked with altering behavior, but also among the nonfiction-reading public, tapping into a far more pervasive fascination with biological selfhood and a hunger for social explanations that relieve individuals of responsibility for their actions.

The Behavioural Insights Team established within the British government in 2010 (and since privatized) is a case in point of this surprising new appetite for nonliberal or postliberal theories of individual decision making. Set against the prosaic nature of the team’s actual achievements, which have mainly involved slightly faster processing of tax and paperwork, the level of intrigue that surrounds it, and the political uses of behaviorism in general, seems disproportionate. The unit attracted some state-phobic critiques, but these have been far outnumbered by a half-mystical, half-technocratic media fascination with the idea of policymakers manipulating individual decisions. This raises the question of whether behavior change from above is attractive not in spite of its alleged paternalism but because of it.

Likewise, the notorious Facebook experiment on “emotional contagion” was understandably controversial. But would it be implausible to suggest that people were also enchanted by it? Was there not also a mystical seduction at work, precisely because it suggested some higher power, invisible to the naked eye? We assume, rationally, that the presence of such a power is dangerous. But it is no contradiction to suggest that it might also be comforting or mesmerizing. To feel part of some grand technocratic plan, even one that is never made public, has the allure of immersing the self in a collective, in a manner that had seemed to have been left on the political scrapheap of the 20th century.

 

***

 

Contrary to the liberal assumptions of rational-choice theory, the place of digital media in our society seems less about enhancing freedom than helping us — in the words of the Frankfurt School psychoanalyst Erich Fromm — escape from freedom. Fromm worried that individuals would flee liberalism for authoritarianism. The warm feeling I received from being driven through the dark as a child would have looked to Fromm like a primary ingredient of possible fascism, should a leader manage to rekindle that same emotion in me. Today, however, it is less charismatic autocrats that threaten to evoke this feeling than a web of largely incomprehensible technological infrastructure. As sociologist Mark Andrejevic has argued, environments become “smart” so that we no longer have to be.

Common to both the charismatic leader and smart technology is their ability to evoke what Immanuel Kant described as the “sublime,” which, he argued, arises as a result of human cognition being utterly overwhelmed by an aesthetic experience. First we cower in terror, but then we quickly realize that everything is still okay. The discovery that the individual can survive, in spite of being overpowered, brings intense pleasure.

So in some ways, Big Data is an inappropriately cutesy and diminutive term for the system it describes. If it were merely big, like an elephant or the Empire State Building is big, it would not inspire the terror that induces us to relinquish our freedom to it. We should perhaps talk instead of a Data Sublime.

The notion of a Data Sublime has been suggested by art historian Julian Stallabrass in “What’s in a Face? Blankness and Significance in Contemporary Art Photography,” a 2007 article on photographic portraiture. Noting a trend towards blank, expressionless but technically awe-inspiring photographs of human subjects, manifest in the work of Rineke Dijkstra among others, Stallabrass argues that:

 subjective, creative choice has been subsumed in favour of greater resolution and bit depth, a measurable increase in the quantity of data. The manifest display of very large amounts of data in such images may be related to a broader trend in contemporary art to exploit the effect of the ‘data sublime’. In providing the viewer with the impression and spectacle of a chaotically complex and immensely large configuration of data, these photographs act much as renditions of mountain scenes and stormy seas did on nineteenth-century urban viewers.    

To this we might also add the recent popularity of Richard Linklater’s film Boyhood and novelist Karl Ove Knausgaard’s My Struggle, which — as I’ve suggested before — take the content of the social media age but subject it to an epically modernist reformulation. The sheer granularity of representation achieves an impact all of its own.

Fascism can be understood as a form of political sublime, combining overwhelming displays of physical force with false memories and histories. What we see in the current culture of quantification and self-surveillance also involves displays and rumors of almost unimaginable physical capability. How big is Big Data? Loaded onto CDs stacked one atop another, it would reach all the way to the moon and back 10,000 times. This is an aesthetic claim, not a scientific one; it functions beyond empiricism to awe us.
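
A back-of-envelope check in Python makes the scale of the boast concrete (the disc capacity, disc thickness, and lunar distance below are my assumed figures, not part of the original claim):

```python
CD_BYTES = 700e6             # capacity of one CD-ROM, in bytes
CD_THICKNESS_M = 1.2e-3      # thickness of one disc, in meters
MOON_DISTANCE_M = 384_400e3  # mean Earth-moon distance, in meters

pile_height_m = 2 * MOON_DISTANCE_M * 10_000  # to the moon and back, 10,000 times
num_cds = pile_height_m / CD_THICKNESS_M
total_bytes = num_cds * CD_BYTES

print(f"Discs in the pile: {num_cds:.1e}")           # ~6.4e15 discs
print(f"Implied data: {total_bytes / 1e24:.1f} YB")  # ~4.5 yottabytes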

But awe is not the Data Sublime’s only approach. It also insinuates itself with subtle cultural appropriations from history, redeeming bureaucracy — whose cold, quantitative, deindividualizing rationality has been under sustained rhetorical attack since the 1960s, first by the New Left and later by neoliberals parroting Marcusean rhetoric — as a kind of nostalgia. The imposing force of a faceless, bureaucratic hierarchy has developed its own aesthetic and psychoanalytic appeal, in an age when individuals are ostensibly responsible for whatever befalls them. The phenomenon of “Ostalgie,” the aestheticization of old East German brands and lifestyles, offers a glimpse of this, as does the enthusiasm for communist iconography that has crept into some reaches of the intellectual left over recent years. As we reach the 25th anniversary of the demise of state socialism, the idea of a uniformed officer demanding to see one’s papers can feel curiously seductive. How else to understand our desire to “share” so much information that we have no need or incentive to provide?  In terms of information architecture, the procedures of an early 20th century bureaucracy and the data analytics of, say, Facebook have little in common. But each offers individuals a taste of quantitative rationalism as a means to give themselves away.

The early quantified-self movement performed important work in helping aestheticize the habits and techniques of digital surveillance. Techniques for digitally auditing one’s body, moods, sleep, or behavior were developed and shared by artists and geeks in a spirit of playfulness and experimentation. The suggestion was that this was a new form of self-government or autonomy. Processes of data collection and analysis were presented as fun and countercultural. Far from the fears of the “one-dimensional man” Marcuse warned of, the mechanical optimization of the self and body was framed as subversively creative.

Yet by stressing the self-authored, artistic nature of the venture, the movement misrepresented what was to follow. Now that business models are emerging around tracking devices and self-surveillance — such as the integration of the Apple Watch with the health insurance industry — the affective appeal of quantification is to suspend the neoliberal injunction to self-create, or at least to share that responsibility with a data bank whose scale one cannot comprehend.

In this way, the Data Sublime confronts the individual with an almost irresistible paradox. Under neoliberal conditions, which stress individual authenticity above all else, it is an aesthetic that promises a higher-order form of autonomy than that which is available through liberal appeals to consumer rationality. The appearance of “predictive shopping,” in which goods are selectively mailed to consumers on the basis of past behavior rather than expressed preference or choice (a case of what Rob Horning terms “pre-emptive personalization”), exemplifies the Data Sublime. First appalled by the loss of control, the consumer swiftly discovers that she is nevertheless receiving excellent customer service, and an even more intense pleasure resumes.

 

***

 

To whom or what are we relinquishing ourselves? And what do they want? The liberal fear is that we are subordinating ourselves to some master plan over which we have no democratic power. Political autonomy has been centralized. But for Fromm, things are more unnerving than that. According to his theory of authoritarianism, the “leader” is secretly as vulnerable as the “follower.” Unable to find any ethical purpose of its own, each party seeks it in the other.

This is the possibility that lurks within the Data Sublime. Sheer quantitative magnitude is as disturbing as it is exciting, no matter from which angle one perceives it. The engineers of the smart city or the sharing economy undoubtedly want to be rich. But the capacity for social control has now outgrown any currently available political project. Its sole purpose is to sate the more dispersed desire to be controlled.

In a November newspaper interview, Google CEO Larry Page confessed that he was no longer sure what his company was for. He admitted that, as the corporation moves into pharmaceutical research and bodily monitoring, it had outgrown its original mission statement to “organize the world’s information.” “We’re in a bit of uncharted territory,” he said. “We’re trying to figure it out. How do we use all these resources?”

We donate our identities to a sublime grid of quantification, ignorant of the ultimate ends to which this is put. What if there are no ends? Big Data’s proclaimed slaying of “theory” eventually spells existential crisis for all. The absence of any ideology behind the Data Sublime renders it a pure procedure, much as Kafka anticipated with respect to bureaucracy. The child enjoys not knowing where the car is heading. It doesn’t occur to him that the parent doesn’t know either.