Marginal Utility
By Rob Horning
A blog about consumerism, technology and ideology.

Permanent Recorder

It used to be easy to mock reality TV for having nothing to do with actual reality — the scenarios were contrived and premeditated, the performances were semi-scripted, the performers were hyper-self-conscious. These shows were more a negation of reality than a representation of it; part of their appeal seemed to be in how they helped clarify for viewers the genuine “reality” of their own behavior, in contrast with the freak shows they were seeing on the screen. To be real with people, these shows seemed to suggest, just don’t act like you are on television.

But now we are all on television all the time. The once inverted anti-reality of reality TV has turned out to be prefigurative. In a recent essay for the New York Times, Colson Whitehead seizes on the reality TV conceit of a “loser edit” — how a show’s editors pare down and frame the footage of certain participants to make their incipient failure seem deserved — and expands it into a metaphor for our lives under ubiquitous surveillance.

The footage of your loser edit is out there as well, waiting … From all the cameras on all the street corners, entryways and strangers’ cellphones, building the digital dossier of your days. Maybe we can’t clearly make out your face in every shot, but everyone knows it’s you. We know you like to slump. Our entire lives as B-roll, shot and stored away to be recut and reviewed at a moment’s notice when the plot changes: the divorce, the layoff, the lawsuit. Any time the producers decide to raise the stakes.

Whitehead concludes that the important thing is that everyone gets an edit inside their own head, which suggests that the imposition of a reality TV frame on our lives has been clarifying. “If we’re going down, let us at least be a protagonist, have a story line, not be just one of those miserable players in the background. A cameo’s stand-in. The loser edit, with all its savage cuts, is confirmation that you exist.” Reality TV models for us what it is like to be a character in our own life story, and it gives us a new metaphor for how to accomplish this — we don’t need to be a bildungsroman author but instead a savvy cutting-room editor. Accept that your life is footage, and you might even get good at making a winner’s edit for yourself.

You could draw a similar conclusion from Facebook’s Timeline, and the year-in-review videos the company has taken to making of one’s raw profile data. These aren’t intrusive re-scriptings of our experience but instructional videos on how to be a coherent person for algorithms — which, since these algorithms increasingly dictate what others see of you, is more or less how you “really” are in your social networks. Facebook makes the winner’s edit of everybody, because everyone supposedly wins by being on Facebook. Everyone gets to be connected and the center of the universe simultaneously. So why not bequeath to it final-cut rights for your life’s edit?

Tech consultant Alistair Croll, in a post at O’Reilly Radar, is somewhat less complacent about our surrendering our editing rights. He makes the case that since everyone henceforth will be born into consolidated blanket surveillance, they will be nurtured by a symbiotic relationship with their own data timeline. “An agent with true AI will become a sort of alter ego; something that grows and evolves with you … When the machines get intelligent, some of us may not even notice, because they’ll be us and we’ll be them.”

In other words, our cyborg existence will entail our fusion not with some Borg-like hive mind that submerges us into a collective, but with a machine powered by our own personal data that represents itself as already part of ourselves. The algorithms will be learning how to edit our lives for us from the very start, and we may not recognize this editing as stemming from an outside entity. The alien algorithms ease themselves into control over us by working with our uniquely personal data, which will feel inalienable because it is so specifically about us, though the very fact of its collection indicates that it belongs to someone else. Our memories will be recorded by outside entities so thoroughly that we will intuitively accept those entities as a part of us, as an extension of the inside of our heads. Believing that something that is not us could have such a complete archive of our experiences may prove to be too unacceptable, too dissonant, too terrifying.

Croll argues that this kind of data-driven social control, with algorithms dictating the shape and scope of our lives for us, will be “the moral issue of the next decade: nobody should know more about you than you do.” That sounds plausible enough, if you take it to mean (as Croll clearly does) that no one should use against you data that you don’t know has been collected about you. (Molly Knefel discusses a similar concern here, in an essay about how kids will be confronted by their permanent records, which reminds me of the “right to be forgotten” campaign.) But it runs counter to the cyborg idea — it assumes we will be able to draw a clear line between ourselves and the algorithms. If we can’t distinguish between these, it will be nonsensical to worry about which has access to more data about ourselves. It will be impossible to say whether you or the algorithms “knew” some piece of information about you first, particularly when the algorithms will be synthesizing data about us and then teaching it to us.

In that light, the standard that “no one should know more about you than you do” starts to seem clearly absurd. Outside entities are producing knowledge about us all the time in ways we can’t control. Other people are always producing knowledge about me, from their perspective and for their own purposes, that I can never access. They will always know “more about me” than I do by virtue of their having a point of view on the world that I can’t calculate and replicate.

Because we find it hard to assign a point of view to machines, we perhaps think they can’t know more about us or have a perspective that isn’t fully controllable by someone, if not us. Croll is essentially arguing that we should have control over what knowledge a company’s machines produce about us. That assumes that their programmers can fully control their algorithms, which seems to be less the case the more sophisticated they become — the fact that the algorithms turn out results that no one can explain may be the defining point at which data becomes Big Data, as Mike Pepi explains here. And if the machines are just proxies for the people who program them, Croll’s “moral issue” still boils down to a fantasy of extreme atomization — the demand that my identity be entirely independent of other people, with no contingencies whatsoever.

The ability to impose your own self-concept on others is a matter of power; you can demand it, say, as a matter of customer service. This doesn’t change what those serving you know and think about you, but it allows you to suspend disbelief about it. Algorithms that serve us don’t allow for such suspension of disbelief, because they anticipate what service we might expect and put what they know about us into direct action. Algorithms can’t have opinions about us that they keep to themselves. They can’t help but reveal at all times that they “know more about us” — that is, they know us differently from how we know ourselves.

Rather than worry about controlling who can produce information about us, it may be more important to worry about the conflation of data with self-knowledge. The radical empiricism epitomized by the Quantified Self movement is becoming more and more mainstream as tracking devices that attempt to codify us as data become more prevalent — and threaten to become mandatory for various social benefits like health insurance. Self-tracking suggests that consciousness is a useless guide to knowing the self, generating meaningless opinions about what is happening to the self while interfering with the body’s proper responses to its biofeedback. It’s only so much subjectivity. Consciousness should subordinate itself to the data, be guided more automatically by it. And you need control of this data to control what you will think of yourself in response to it, and to control the “truth” about yourself.

Reducing self-knowledge to matters of data possession and retention like that seems to be the natural bias of a property-oriented society: since consciousness can’t be represented as a substance that someone can have more or less of, it doesn’t count. But self-knowledge may not be a matter of having the most thorough archive of your deeds and the intentions behind them. It is not a quantity of memories, an amount of data. The self is not a terrain to which you are entitled to own the most detailed map. Self-knowledge is not a matter of reading your own permanent record. It is not an edit of our life’s footage.

A quantified basis for “self-knowledge” is bound up with the incentives for using social media and submitting to increased surveillance of various forms. If we accept that self-knowledge is akin to a permanent record, we will tolerate or even embrace Facebook’s keeping that record for us. Maybe we won’t even mind that we can’t actually delete anything from their servers.

As our would-be permanent recorders, social media sites are central to both data collection (they incite us to supply data as well as help organize what is collected across platforms into a single profile) and the use of data to implement social control (they serve algorithmically derived content and marketing while slotting us into ad hoc niches, and they encircle us in a panoptic space that conditions our behavior with the threat of observation). But for them to maintain their central place, we may have to be convinced to accept the algorithmic control they implement as a deeper form of self-knowledge.

But what if we use social media not for self-knowledge but for self-destruction? What if we use social media to complicate the idea that we could ever “know ourselves”? What if we use social media to make ourselves into something unknowable? Maybe we record the footage of our lives to define therein what the essence of our self isn’t. To the degree that identity is a prison, self-knowledge makes the cell’s walls. But self-knowledge could instead be an awareness of how to move beyond those walls.

Not everyone has the opportunity to cast identity aside any more than they have the ability to unilaterally assert self-knowledge as a form of control. We fall into the trap of trying to assert some sort of objectively “better” or more “accurate” identity that reflects our “true self,” which is only so much more data that can be used to control us and remold the identity that is assigned to us socially. The most luxurious and privileged condition may be one in which you get to experience yourself as endlessly surprising — a condition in which you hardly know yourself at all but have complete confidence that others know and respect you as they should.
