The sublime unknowability of Big Data lets us fall in love with our own domination.
I have a memory from childhood, a happy memory — one of complete trust and comfort. It's dark, and I'm kneeling in the tiny floor area of the back seat of a car, resting my head on the seat. I'm perhaps six years old. I look upward to the window, through which I can see streetlights and buildings rushing by in a foreign town whose name and location I'm completely unaware of. In the front seats sit my parents, and in front of them, the warm yellow and red glow of the dashboard, with my dad at the steering wheel.
Contrary to the sentiment of so many ads and products, this memory reminds me that dependence can be a source of deep, almost visceral pleasure: to know nothing of where one is going, to have no responsibility for how one gets there or the risks involved. I must have knelt facing backward on the floor of the car to further increase that feeling of powerlessness as I stared up at the passing lights.
The same feeling came back to me when my Apple laptop casually reported that it had some updating to do. If I accepted the lengthy terms and conditions, it would take care of everything else. To my surprise, this produced a familiar, almost visceral pleasure. The imbalance of responsibility, the comfortable presumption of trust, took me back to those late-night family-car journeys. I was traveling blind, but someone qualified was at the controls: This was a few years ago, and Apple's then-chummy, soft-focus anti-Microsoft brand was enough to trigger that infantile trust. (In the age of iCloud, however, it would probably be closer to the sensation of hitching a lift with a driver who’s suddenly slurring his words.)
That innocent experience of the software upgrade — the relinquishing of control to something one does not understand or want to understand, consenting to a back-seat ride on sheer faith — is now a normal part of being a smartphone user, so normal that we scarcely notice it any longer. It is the sort of asymmetrical expertise one typically associates with a visit to the doctor — and it’s no surprise that apps hope to mediate that relationship as well.
How did we come to believe the phone knows best? When cultural and economic historians look back on the early 21st century, they will be faced with the riddle of how, in little more than a decade, vast populations came to accept so much quantification and surveillance with so little overt coercion or economic reward. The consequences of this, from the Edward Snowden revelations to the transformation of urban governance, are plain, yet the cultural and psychic preconditions remain something of a mystery. What is going on when people hand over their thoughts, selves, sentiments, and bodies to a data grid that is incomprehensible to them?
***
The liberal philosophical tradition explains this sort of surrender in terms of conscious and deliberate trade-offs. Our autonomy is a piece of personal property that we can exchange for various guarantees. We accept various personal “costs” for certain political or economic “benefits.” For Thomas Hobbes, relinquishing the personal use of force and granting the state a monopoly on violence is a prerequisite to any legal rights at all: “Freedom” is traded for “security.” In more utilitarian traditions, autonomy is traded for some form of economic benefit, be it pleasure, money, or satisfaction. What both accounts share is the presumption that no set of power relations could persist if individuals could not reasonably consent to it.
Does that fit with the quantified, mass-surveilled society? It works fine as a post-hoc justification: "Yes," the liberal will argue, "people sacrifice some autonomy, some privacy — but they only do so because they value convenience, efficiency, pleasure, or security even more highly." This suggests, as per rational-choice theory, that social media and smart technologies, like the Google Now "dashboard" that constantly feeds the user the fastest travel routes and relevant weather in real time, are simply driving cost savings into everyday life, cutting out time-consuming processes and delivering outcomes more efficiently, much as e-government contractors once promised to do for the state. Dating apps, such as Tinder, pride themselves on allowing people to connect to those who are nearest and most desirable and to block out everyone else.
Leaving aside the unattractiveness of this as a vision of friendship, romance, or society, there are several other problems with it. First, it's not clear that a utilitarian explanation works even on its own limited terms to justify our surrender to technology. The technology does not straightforwardly help people do what they want: Today, people hunt desperately for ways of escaping the grid of interactivity, precisely so as to get stuff done. Apps such as Freedom (which blocks all internet connectivity from a laptop) and Anti-Social (which blocks social media specifically) are sold as productivity-enhancing. The rise of "mindfulness," "digital detox," and sleep gurus in the contemporary business world testifies to this. Preserving human capital in full working order now involves carefully managed forms of rest and meditation, away from the flickering of data.
Second, the assumption that whatever individuals do uncoerced must have been worth doing rests on a tightly circular argument, one that presumes an autonomous, calculating self that precedes and transcends whatever social situation it finds itself in. Such a strong theory of the self is scarcely tenable in the context for which it was invented, namely, the market. The mere existence of advertising demonstrates that few businesses are prepared to rely on mathematical forces of supply and demand to determine how many of their goods are consumed. Outside the market realm, its descriptive power falls to pieces entirely, especially given “smart” environments designed to pre-empt decision-making.
The theory of the rational-calculating self has been under quiet but persistent attack within the field of economics since the 1970s, resulting in the development of behavioral economics and neuroeconomics. Rather than postulate that humans never make mistakes about what is in their best interest, these new fields use laboratory experiments, field experiments, and brain scanners to investigate exactly how good humans are at pursuing their self-interest (as economists define it, anyway). They have become a small industry for producing explanations of why we really behave as we do and what our brains are really doing behind our backs.
From a cultural perspective, behavioral economics and neuroeconomics are less interesting for their truth value (which, after all, would have surprised few behavioral psychologists of the past century) than for their public and political reception. The fields have been met with predictable gripes from libertarians, who argue that the critique of individual rationality is an implicit sanction for the nanny state to act on our behalf. Nonetheless, celebrity behaviorists such as Robert Cialdini and Richard Thaler have found an enthusiastic audience, not only among marketers, managers, and policymakers who are professionally tasked with altering behavior, but also among the nonfiction-reading public, tapping into a far more pervasive fascination with biological selfhood and a hunger for social explanations that relieve individuals of responsibility for their actions.
The Behavioural Insights Team, established within the British government in 2010 and since privatized, exemplifies this surprising new appetite for nonliberal or postliberal theories of individual decision making. Set against the prosaic nature of the team's actual achievements, which have mainly involved slightly faster processing of tax and paperwork, the level of intrigue that surrounds it, and the political uses of behaviorism in general, seems disproportionate. The unit attracted some state-phobic critiques, but these have been far outnumbered by a half-mystical, half-technocratic media fascination with the idea of policymakers manipulating individual decisions. This raises the question of whether behavior change from above is attractive not in spite of its alleged paternalism but because of it.
Likewise, the notorious Facebook experiment on “emotional contagion” was understandably controversial. But would it be implausible to suggest that people were also enchanted by it? Was there not also a mystical seduction at work, precisely because it suggested some higher power, invisible to the naked eye? We assume, rationally, that the presence of such a power is dangerous. But it is no contradiction to suggest that it might also be comforting or mesmerizing. To feel part of some grand technocratic plan, even one that is never made public, has the allure of immersing the self in a collective, in a manner that seemed to have been consigned to the political scrapheap of the 20th century.
***
Contrary to the liberal assumptions of rational-choice theory, the place of digital media in our society seems less about enhancing freedom than helping us — in the words of the Frankfurt School psychoanalyst Erich Fromm — escape freedom. Fromm worried that individuals would flee liberalism for authoritarianism. The warm feeling I received from being driven through the dark as a child would have looked to Fromm like a primary ingredient of possible fascism, should a leader manage to rekindle that same emotion in me. Today, however, it is less charismatic autocrats that threaten to evoke this feeling than a web of largely incomprehensible technological infrastructure. As sociologist Mark Andrejevic has argued, environments become “smart” so that we no longer have to be.
Common to both the charismatic leader and smart technology is their ability to evoke what Immanuel Kant described as the “sublime,” which, he argued, arises as a result of human cognition being utterly overwhelmed by an aesthetic experience. First we cower in terror, but then we quickly realize that everything is still okay. The discovery that the individual can survive, in spite of being overpowered, brings intense pleasure.
So in some ways, Big Data is an inappropriately cutesy and diminutive term for the system it describes. If it were merely big, like an elephant or the Empire State Building is big, it would not inspire the terror that induces us to relinquish our freedom to it. We should perhaps talk instead of a Data Sublime.
The notion of a Data Sublime has been suggested by art historian Julian Stallabrass in "What’s in a Face? Blankness and Significance in Contemporary Art Photography," a 2007 article on photographic portraiture. Noting a trend towards blank, expressionless but technically awe-inspiring photographs of human subjects, manifest in the work of Rineke Dijkstra among others, Stallabrass argues:
subjective, creative choice has been subsumed in favour of greater resolution and bit depth, a measurable increase in the quantity of data. The manifest display of very large amounts of data in such images may be related to a broader trend in contemporary art to exploit the effect of the ‘data sublime’. In providing the viewer with the impression and spectacle of a chaotically complex and immensely large configuration of data, these photographs act much as renditions of mountain scenes and stormy seas did on nineteenth-century urban viewers.
To this we might also add the recent popularity of Richard Linklater's film Boyhood and novelist Karl Ove Knausgaard's My Struggle, which — as I've suggested before — take the content of the social media age but subject it to an epically modernist reformulation. The sheer granularity of representation achieves an impact all of its own.
Fascism can be understood as a form of political sublime, combining overwhelming displays of physical force with false memories and histories. What we see in the current culture of quantification and self-surveillance also involves displays and rumors of almost unimaginable physical capability. How big is Big Data? Loaded onto CDs stacked one on top of another, it would reportedly reach all the way to the moon and back 10,000 times. This is an aesthetic claim, not a scientific one; it functions beyond empiricism to awe us.
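To see why the figure reads as aesthetic rather than empirical, a minimal back-of-envelope sketch helps, assuming roughly 700 MB of capacity and 1.2 mm of thickness per disc (my assumptions, not figures from any cited source):

```python
# Back-of-envelope sketch of the CD-stack claim.
# The per-disc figures below are assumptions, not numbers from the essay.
CD_CAPACITY_BYTES = 700 * 10**6    # ~700 MB per CD
CD_THICKNESS_M = 1.2e-3            # ~1.2 mm per CD
EARTH_MOON_M = 384_400_000         # mean Earth-moon distance, ~384,400 km

# "To the moon and back 10,000 times"
stack_height_m = 2 * EARTH_MOON_M * 10_000
n_cds = stack_height_m / CD_THICKNESS_M
total_bytes = n_cds * CD_CAPACITY_BYTES

print(f"Discs in the stack: {n_cds:.1e}")                      # ~6.4e15 discs
print(f"Implied data: {total_bytes / 1e21:,.0f} zettabytes")   # ~4,485 ZB
```

Whatever the exact inputs, the result lands in the zettabyte range, orders of magnitude beyond anything a reader can picture or verify, which is precisely what allows the claim to awe rather than inform.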
But awe is not the Data Sublime's only approach. It also insinuates itself with subtle cultural appropriations from history, redeeming bureaucracy — whose cold, quantitative, deindividualizing rationality has been under sustained rhetorical attack since the 1960s, first by the New Left and later by neoliberals parroting Marcusean rhetoric — as an object of nostalgia. The imposing force of a faceless, bureaucratic hierarchy has developed its own aesthetic and psychoanalytic appeal, in an age when individuals are ostensibly responsible for whatever befalls them. The phenomenon of “Ostalgie,” the aestheticization of old East German brands and lifestyles, offers a glimpse of this, as does the enthusiasm for communist iconography that has crept into some reaches of the intellectual left over recent years. As we reach the 25th anniversary of the demise of state socialism, the idea of a uniformed officer demanding to see one's papers can feel curiously seductive. How else to understand our desire to “share” so much information that we have no need or incentive to provide? In terms of information architecture, the procedures of an early-20th-century bureaucracy and the data analytics of, say, Facebook have little in common. But each offers individuals a taste of quantitative rationalism as a means to give themselves away.
The early quantified-self movement performed important work in helping aestheticize the habits and techniques of digital surveillance. Techniques for digitally auditing one's body, moods, sleep, or behavior were developed and shared by artists and geeks in a spirit of playfulness and experimentation. The suggestion was that this was a new form of self-government or autonomy. Processes of data collection and analysis were presented as fun and countercultural. Far from confirming Marcuse's fears of "one-dimensional man," the mechanical optimization of the self and body was framed as subversively creative.
Yet in stressing the self-authored, artistic nature of this venture, the movement misrepresented what was to follow. Now that business models are emerging around tracking devices and self-surveillance — such as the integration of the Apple Watch with the health insurance industry — the affective appeal of quantification is to suspend the neoliberal injunction to self-create, or at least to share that responsibility with a data bank whose scale one cannot comprehend.
In this way, the Data Sublime confronts the individual with an almost irresistible paradox. Under neoliberal conditions, which stress individual authenticity above all else, this aesthetic promises a higher-order form of autonomy than that available through liberal appeals to consumer rationality. The appearance of "predictive shopping," in which goods are selectively mailed to consumers on the basis of past behavior rather than expressed preference or choice (a case of what Rob Horning terms "pre-emptive personalization"), exemplifies the Data Sublime. First appalled by the loss of control, the consumer swiftly discovers that she is nevertheless receiving excellent customer service, and an even more intense pleasure ensues.
***
To whom or what are we relinquishing ourselves? And what do they want? The liberal fear is that we are subordinating ourselves to some master plan over which we have no democratic power. Political autonomy has been centralized. But for Fromm, things are more unnerving than that. According to his theory of authoritarianism, the “leader” is secretly as vulnerable as the “follower.” Unable to find any ethical purpose of its own, each party seeks it in the other.
This is the possibility that lurks within the Data Sublime. Sheer quantitative magnitude is as disturbing as it is exciting, no matter from which angle one perceives it. The engineers of the smart city or the sharing economy undoubtedly want to be rich. But the capacity for social control has now outgrown any currently available political project. Its sole purpose is to sate the more dispersed desire to be controlled.
In a November newspaper interview, Google CEO Larry Page confessed that he was no longer sure what his company was for. He admitted that, as the corporation moved into pharmaceutical research and bodily monitoring, it had outgrown its original mission statement to “organize the world's information.” “We're in a bit of uncharted territory,” he said. “We're trying to figure it out. How do we use all these resources?”
We donate our identities to a sublime grid of quantification, ignorant of the ultimate ends to which this is put. What if there are no ends? Big Data's proclaimed slaying of “theory” eventually spells existential crisis for all. The absence of any ideology behind the Data Sublime renders it a pure procedure, much as Kafka anticipated with respect to bureaucracy. The child enjoys not knowing where the car is heading. It doesn’t occur to him that the parent doesn’t know either.