The cluster of ideas, meanings, and implications associated with Web 2.0 has been amalgamating for the better part of a decade, steadily consolidating to the point where few would deny its cultural significance. The development of more sophisticated search engines and the promulgation of social media have combined to turn casual computer users into simultaneous producer-consumers with an ever-intensifying incentive to weave digital interfaces into all facets of their everyday lives. The ubiquity of broadband access and the onslaught of gadgetry have allowed the internet to take on the characteristics of what autonomist Marxists like Paolo Virno and Toni Negri call the social factory, in which the effort we put into our social lives becomes a kind of covert work that can be co-opted by the tech companies that help us “share” and “connect.”
Those nice-sounding words mask the potentially exploitative aspects of the process. In “Free Labor: Producing Culture for the Digital Economy,” Tiziana Terranova argues that “the internet is about the extraction of value out of continuous, updateable work, and it is extremely labor-intensive.” Nicholas Carr has described Web 2.0 as “digital sharecropping,” a way of putting “the means of production into the hands of the masses but withholding from those same masses any ownership over the product of their work.” The internet thereby becomes “an incredibly efficient mechanism to harvest the economic value of the free labor provided by the very many and concentrate it into the hands of the very few.”
But if it is so exploitative, why do we bother with all the “sharing”? It may be because we don’t experience this effort as work but instead as simply being ourselves, which Web 2.0 seeks to make synonymous with digital participation. Services like Facebook succeed by making the process of ordering our social lives much more convenient — an apparently irresistible lure, as the site has recently passed the 500-million mark in users. Its ubiquity makes it hard to refuse to use it, as such a refusal becomes tantamount to rejecting sociality itself. But the service also has the effect of getting us to restructure our social life and our identity in its image, making us acutely self-conscious of identity as a strategic construct even as it grants us the opportunity to actively manage it more efficiently.
By supplying a forum in which we can curate our profiles and archive the ephemera that we think best represent us, social media essentially allows us to interact with one another as brands. Thus our social life as carried out online begins to more closely resemble our experience of the contemporary public sphere, itself entirely colonized by commercial messages and decodable contiguities of brands. On Facebook, though, that daunting surfeit of competing content can seem tractable and coherent, seemingly subject to our control. Social media allows us to assimilate ourselves and our identities to the codes of prestige and meaning circulating around us, the supple allure of the trends and fashions we are always apprising ourselves of at one level or another. As sociologist Eva Illouz points out in Cold Intimacies, our attempts to make it easier for others to engage with us — our emotional competencies — have become indistinguishable from our efforts to commodify the self and make it more rationally manageable.
Our perpetually updated personal content enriches the social experience of the Internet and intensifies its rewards, making it ever more enticing to spend more of our time there: sharing more, integrating it more thoroughly with our everyday activities, checking up on what’s happening there more and more frequently throughout the day, carrying it with us everywhere by means of portable devices. By developing our identity online, we seek social recognition from peers in the place where it now can best be found. But in the process we make the intimate workings of our personality into shared digital code to be repurposed for a host of commercial applications. Our efforts in friending one another and connecting create a social map whose byways can later be retraced by outside parties. The more effort we put into crafting identity online, the more material we supply to Facebook and search engines to associate with contextual ads and other marketing initiatives. For this organizational work we are compensated not with wages but with a stronger sense of self, measurable in hard, quantifiable terms. How many friends do you have? How often do they update? How many photos have you shared? How many times have they been looked at? And so on.
All of this is to say that as Web 2.0 has infiltrated our everyday life, it has transformed our habitus — sociologist Pierre Bourdieu’s term for our manifest and class-bound way of being in the social world — into an explicit productive force without our conscious consent. By continually enticing us to produce more and enrich our self-concept, it presents a clear danger to our ability to maintain a coherent sense of ourselves — to sustain a feeling of ontological security, as Anthony Giddens puts it. Inundated with digital information from all sides — from friends, marketers, and the fruits of our own unbounded curiosity — we can fritter away our time shuffling and reshuffling the little bits of novelty without performing a synthesis. We generate an endless stream of meanings useful for consumerism’s need to regenerate itself, but none of these meanings can stick for us as part of a stable worldview. Our efforts to cope with the resulting data deluge are themselves all productive and only increase the surfeit, exacerbating the problem; our efforts to organize the flow intensify it. The pressure for more makes us continually revise our influences, our opinions, our enthusiasms. As much as these revisions add to the universe of signifiers, they are structurally inhibited from making our own lives meaningful.
Meanwhile, though, for techno-utopian optimists in the Clay Shirky mold, Web 2.0 has remained a malleable, positive term, uttered to variously invoke the magic of the can’t-miss entrepreneurial opportunity, the beneficent discrimination of predictive recommendation services and targeted marketing, the dynamic genius of the hive mind, the utopian potential of frictionless collaboration and innovation in the networked society, the reinvention of the public sphere and personal intimacy at a new level of connectedness, the end of passive consumption, and the birth of the “digital surplus.” Web 2.0 allows us all to have our identities recognized as productive and socially necessary. Never before in human history has it been so important to society that we all become unique individuals who express our insuppressible creativity.
While Web 2.0’s elastic promise has inspired a new generation of dreamers to embrace the fantasy of fully realized human potential in a spontaneously harmonious society, it has also prompted a necessary backlash. Conservative complaints about the corrosive effects of media technology are at least as old as the printing press: Emerging media are always rending the fabric of community, allowing dangerous and radical ideas to spread unchecked, and foisting a diminished life of vicarious experience and self-blinkered narcissism on those seduced into new modes of literacy. Such longstanding critiques of the democratic potential of innovation have been given fresh urgency by the development of the internet; they have been re-versioned into a conservatism 2.0, occasionally voiced by unlikely ideologues who depict themselves as though they are only now discovering the necessity of limits.
We can’t buy into the productivity of limitless narcissism, the myth of our infinite freedom of autonomous identity making. The question is how to impose limits that don’t feel like circumscriptions — that is, how do we demystify the market’s promises of infinite exchangeability and couple them with limits on the self without those limits feeling arbitrary?
How these limits are framed is integral to the future of conservatism. If the limits are structured around an allegedly enlightened elitism, suddenly cherished as humane tradition and respect for the ineffable mysteries of life, conservatism remains open to the accusation that it seeks merely to impose hierarchies to justify the existing distribution of power. In the face of the excitement over digitized democratization, such an argument can no longer achieve one of the primary goals of any conservative ideology: masking the arbitrariness of the limits it seeks to impose.
Andrew Keen’s 2007 polemic The Cult of the Amateur is representative in this regard. A self-described Silicon Valley insider and pioneer of the “original internet dream,” Keen wakes up to find that technology has fostered a “dictatorship of idiots” by dismantling the mainstream media. Once regarded in tech-investor circles as an obsolete encumbrance to be disintermediated by entrepreneurial innovation, the traditional media are recast by Keen as a noble institution that “nourishes talent” and preserves “standards of truth, decency, and creativity.” If media professionals don’t filter online content for the public, Keen argues, the masses are liable to be further vulgarized by the overwhelming surfeit of their own voluntary contributions, which are inherently without value (otherwise they wouldn’t have been offered freely). Without cultural elites empowered to control public discourse and deify their chosen superstars, “the monkeys are running the show,” Keen declares.
That assessment bluntly conveys Keen’s contemptuous vision of our common humanity, echoing the traditional conservative lament about our irredeemably fallen nature and need for discipline imposed from above. Elsewhere Keen argues that we are “easily seduced, corrupted, and led astray” and thus “need rules and regulations to help control our behavior online.” But as tireless as Keen is in his denunciation of amateur content, he has a boundless faith in the arbiters of taste that once reigned in the culture industries. Like many conservative jeremiahs, Keen doesn’t recognize much of a need to justify the power of established elites; he tends to take for granted that readers share his belief that their cultural authority is always merited by supposed expertise or some form of implied divine right. That inherited advantages might be undeserved, that the judgment of professional tastemakers might be tainted by commercial interests and personal ambition, that elite expertise might be phony and credible qualifications obtainable outside of an institutionalized credentialing system never seems to occur to him. The legitimating force of tradition apparently makes any given power structure unimpeachable and any efforts to expand the ranks of those exercising power inherently wrongheaded.
The problems with Keen’s critique are manifest. His elitism is too naked to be persuasive to anyone who doesn’t already suspect they are safely among the elite themselves. He presupposes what he sets out to prove, that amateur production is garbage because it is amateurish, and refuses to engage the implications of the vast mobilization of knowledge work he laments, crudely oversimplifying the ideology of Web 2.0’s champions in order to ignore it. He is content to blame information overload on fallen human nature and the high proportion of morons in the general public, and he offers repressive state intervention into the internet as the solution, justified by played-up fears of moral degeneration and sexual predators and by appeals to intellectual-property protection.
Jaron Lanier, another tech-industry veteran, presents a slightly more compelling case in You Are Not a Gadget, shifting the analysis of information overload away from the idiots creating content out of turn to the technology that permits such creation. The limits he seeks to impose on individual creativity are presented as a desperate attempt to protect the very possibility of creativity, which he regards as threatened by the way digitization transforms information into a commons susceptible to overexploitation, like open seas fished to extinction.
Lanier begins with a bracing description of how his book will itself be disseminated and parsed digitally and hence “mostly will be read by nonpersons.” It’s intended as a veiled challenge: Are you still a person? Would you even know? Are you reading, or are you just processing for the benefit of the “lords of the cloud,” as he calls the ultimate beneficiaries of all our online labor? The gambit suggests that Lanier, a dreadlocked computer scientist credited with helping pioneer virtual-reality technology, is prepared to launch into a Marcusean critique of one-dimensionality, revealing how apparent freedoms of expression online are actually the cloaked chains of servitude. In the book’s first section, Lanier sounds downright dialectical, asserting that technology produces us as much as we produce it. “People degrade themselves to make machines seem smart all the time,” he claims, and he cites a “lock-in effect” by which technological approximations of reality subsequently begin to determine that reality — the limitations of MIDI yield a music that ignores what MIDI can’t capture, the social tools of Facebook come to constitute the boundaries of friendship, and so on. In each case, technology offers an expediency that seems liberating at first, only to later reveal itself as a constraint that occludes alternatives. “These designs came together very recently,” he warns, “and there’s a haphazard, accidental quality to them. Resist the easy grooves they guide you into. If you love a medium made of software, there’s a danger that you will become entrapped in someone else’s recent careless thoughts.”
Marcuse, too, was extremely wary of those easy cultural grooves we are brought to settle into by technology. Epitomizing the position of many Frankfurt School thinkers and others on the postwar Left, he argued in One-Dimensional Man that technological change brought the “flattening out of the antagonism between culture and social reality through the obliteration of the oppositional, alien, and transcendent elements in the higher culture.” Media technologies stimulate new needs, the satisfaction of which keeps us happily distracted. “The power over man which this society has acquired is daily absolved by its efficacy and productiveness.” The result is a generation incapable of thinking of alternatives. The administered society’s “supreme promise is an ever-more-comfortable life for an ever-growing number of people who, in a strict sense, cannot imagine a qualitatively different universe of discourse and action, for the capacity to contain and manipulate subversive imagination and effort is an integral part of the given society.”
Though Lanier’s critique shares some Marcusean assumptions about the flattening of cultural possibilities (“The deep meaning of personhood is being reduced by the illusion of bits,” he writes), he never goes so far as to blame the existing power structure. Far from attributing these ills to the imperatives of capitalism, he’s more inclined to blame individuals for failing to uphold a quasi-Randian resistance to sharing. Guided by an apparent distrust of all online collective enterprises, he argues, in high Cold War dudgeon, that they are a form of “digital Maoism” that nullifies the individual spirit while endeavoring to turn all the idealistic dupes who participate in them into robot slaves to the evil, mediocre hive mind.
If not as nakedly reactionary as Keen, Lanier still has a proclivity for cranky hyperbole, for example, placing remix culture on the crackpot continuum next to yearnings for the posthuman future:
Humans are free. We can commit suicide for the benefit of a Singularity. We can engineer our genes to better support an imaginary hive mind. We can make culture and journalism into second-rate activities and spend centuries remixing the detritus of the 1960s and other eras from before individual creativity went out of fashion.
Creativity, however, has never been more in fashion than it is today; Web 2.0 applications never seem to tire of exhorting us to show just how creative we are. What Lanier is talking about when he mentions “individual creativity” is intellectual property, which has indeed gone “out of fashion” with a substantial proportion of internet users, who have embraced what for capitalism has proven to be the dark side of the networked economy: piracy and peer-to-peer file-sharing.
Lanier purports to be concerned with humanity, but his true concern is property. Despite its old-school humanist trappings and the left-leaning logic it occasionally invokes, You Are Not a Gadget is no less a pro-business book than Keen’s; its core idea is a full-throated defense of intellectual property against any notion of digital abundance, and its rhetoric is often fueled by the same conservative longing for culture made and administered by an elite. But that view is overwritten (and, to a degree, cloaked) by a critique that emphasizes the idea that the integrity of the human spirit is somehow bound up with the integrity of intellectual property, and that any impulse to remove the profit motive from socially necessary labor is a perverse aberration, an assault on the very concept of personhood.
The underlying idea is that people are motivated to genuinely create only by rewards rather than by the pleasure of creation or participation itself. Web 2.0 voluntarism, Lanier argues, is an inauthentic form of expression, for in order to be authentic, it must have an unambiguous value assigned to it by the market, the proxy for real social recognition under capitalism. “I believe most people would embrace a social contract in which bits have value instead of being free,” he explains. “Everyone would have easy access to everyone else’s creative bits at reasonable prices — and everyone would get paid for their bits. The arrangement would celebrate personhood in full, because personal expression would be valued.”
Here Lanier moves a long way from Keen, who calls the idea of paying internet users for the content they create “crazy.” But like Keen, Lanier nonetheless regards genuine artists as entrepreneurs first and foremost; those who are not motivated by profit are dilettantes whose work is inherently bad. He argues that artistic production for markets is less constrained than production for a patron, as if there were no constraints in having to appeal to the lowest common denominator and as if no one ever successfully created art as a sideline, without patrons at all, in the manner, say, of insurance-company lawyer Wallace Stevens, or of Anthony Trollope, who managed to grind out triple-decker novels for years while working full-time for the post office, not to mention folk artists like James Hampton. Such people, in Lanier’s eyes, pursue art as a “vanity career,” since it is not the source of their livelihood.
Instead, Lanier champions closed models of product development embedded in an explicitly corporate hierarchy directed by those he regards as our real contemporary geniuses: “The iPhone, the Pixar movies, and all the other beloved successes of digital culture that involve innovation in the result as opposed to the ideology of creation. In each case, these are personal expressions. True, they often involve large groups of collaborators, but there is always a central personal vision — a Will Wright, a Steve Jobs, or a Brad Bird conceiving the vision and directing a team of people earning salaries.” Creativity and personal expression, then, are best limited to those towering figures with the capital amassed to recruit an army of wage slaves to implement their vision.
Essentially, the limits on the information deluge, in Lanier’s view, should be set not by regulation and elite gatekeepers so much as by the unerring judgment of the market and its allocation of capital. Human freedom and autonomy are not the ability to construct and exhibit the self and express it through a medium with nearly unlimited reach — the utopian Web 2.0 vision — but instead the ability of entrepreneurs to seize appropriate (that is, profitable) opportunities. The limits that must be imposed on the elaboration of plebeian selves online are the limits on profit, reconceived as the limits of any idea’s inherent usefulness. The danger Lanier warns of is that useless information will nullify the value of the information that capital can actually make use of. This echoes Hayek’s central argument in “The Use of Knowledge in Society”: Free labor muddies the price signals that indicate what labor society actually needs.
With that in mind, Lanier crusades for “bringing expression back under the tent of capitalism” and proffers elaborate ruses to institute “artificial scarcities” to accomplish it — a backward-looking effort to crush the digital surplus. But that surplus is becoming increasingly profitable to new-media companies, and it offers capital an opportunity to recast the terms of domination: Profit is not expropriated from workers so much as it is gleaned unobtrusively, as a by-product of everyone’s personal process of self-discovery. The “hive mind” — the collective wellspring of knowledge that exceeds the limits of property — need not yield only mediocrity, as Lanier assumes. Instead, it has already begun to redraw the boundaries of the market, colonizing new spaces. The manifest reality of the social factory in the form of the internet has mobilized more creative energy than was ever before possible, even if the creators can benefit from it only at the social level rather than through increased private wages. Lanier would like to restore the grip of capital’s dead hand tightly around our throats, stifling our attempts to sing the song of ourselves.
Market-oriented ideology is no longer sufficiently convincing as a Web 2.0 critique, if it ever was. The whole point of the Web 2.0 revolution is that it explodes the market for intellectual labor and allows it to be co-opted indirectly. But Lanier’s reinvigoration of old left ideas about technological determinism suggests another way out, one which Nicholas Carr’s The Shallows explores. With Carr’s emphasis on neurological studies and the interior cognitive effects of internet use, conservative tech critique veers away from elitism and free-market apologetics toward an emphasis on the joys of quiet contemplation and a quasi-empirical approach to human limitation. Carr resuscitates the broad idea of neuroplasticity — the brain’s propensity to change in accordance with stimuli — to raise doubts about digital consumption of culture and multitasking, including the possibility that Google is making us stupid, as the title of his widely heralded Atlantic article pondered. The problem with Web 2.0 remains the same — it vulgarizes society by coarsening our ability to respond to the aspects of culture that are truly worthy. Only the limits are no longer to be implemented by the state, a group of elites, or prescient entrepreneurs, but by our own biology.
We have an evolutionary mandate to consume novelty, the argument goes, and to assimilate as much information as possible in Whitmanesque gestures of capacious self-making (“I am large, I contain multitudes”), but this presumably burns us out eventually, leaving us in a state of perpetual distraction, denying us an authentic relation to the information we frantically try to assimilate. As Alan Jacobs, an English professor at Wheaton College, suggested in a July 2010 blog post, we can no longer distinguish between “consuming” and “listening” — our interaction with culture is always experienced as information processing rather than the warm bath of aesthetic appreciation. The myth of a limitless self-identity to match the infinite flow of data thus meets its rebuttal: resistance, limits, meaning found in boundaries — essentially the traditional conservative celebration of hierarchy and “natural” order.
This view opens the door for a subtler sort of conservative critique than ham-fisted denunciation of democratic participation: a return to the idea of organic limits on our information intake, to mirror the passion, mainly among the lifestyle left, for organic, local, and slow modes of material consumption. These impose natural-seeming limits and ersatz traditions that seem far less arbitrary than highbrow pronouncements about the sort of culture that is worthy, and far more enforceable than intellectual-property strictures made increasingly moribund by digitization. Instead, the left and right critiques converge on the exaltation of organic, real experience.
Tech conservatism becomes a matter of gestures of resistance to the infinite flow: refusing the deluge, rejecting plenitude by choice rather than by necessity — an impossible choice within the paradigm of neoclassical economic rationality. It must define the self in terms of what appears to be its willful ignorance, not its assimilation of endless iterations of novelty. It must instead privilege immediate and unmediated experience as integral to being, to self-recognition. From this perspective, buying vinyl records, for example, becomes a profoundly conservative act rather than the hip posturing of the would-be bohemian leisure class. It’s an act of resistance, even defiance, but essentially it remains a nostalgic yearning for the old order, for the limits and hierarchies of the culture industry at its zenith. If we can imagine nothing more than that, then these are the noble gestures we’re left with.