There’s been an explosion of writers employing strategies of copying and appropriation over the past few years, with the computer encouraging writers to mimic its workings. When cutting and pasting are integral to the writing process, it would be mad to imagine that writers wouldn’t exploit these functions in extreme ways never intended by their creators.

While home computers have been around for three decades and people have been cutting and pasting all that time, it’s the sheer penetration and saturation of broadband that makes the harvesting of masses of language easy and tempting. On dialup, it was possible to copy and paste words, but in the beginning (gopherspace) texts were doled out one screen at a time. And, even though it was only text, the load time was still considerable. With broadband, the spigot runs 24/7.

By comparison, there was nothing native to the system of typewriting that encouraged the replication of texts. It was incredibly slow and laborious to do so. Later, after you finished writing, then you could make all the copies you wanted on a Xerox machine. As a result, there was a tremendous amount of twentieth-century postwriting print-based detournement: William S. Burroughs’s cut-ups and fold-ins and Bob Cobbing’s distressed mimeographed poems are prominent examples. The previous forms of borrowing in literature, collage and pastiche—taking a word from here, a sentence from there—were partially developed based on the amount of labor involved. Having to manually retype or hand-copy an entire book on a typewriter is one thing; cutting and pasting an entire book with three keystrokes—select all / copy / paste—is another.

Clearly this is setting the stage for a literary revolution.

Or is it? From the looks of it, most writing proceeds as if the Internet had never happened. The literary world still gets regularly scandalized by age-old bouts of fraudulence, plagiarism, and hoaxes in ways that would make, say, the art, music, computing, or science worlds chuckle with disbelief. It’s hard to imagine the James Frey or J. T. Leroy scandals upsetting anybody familiar with the sophisticated, purposely fraudulent provocations of Jeff Koons or the rephotographing of advertisements by Richard Prince, who was awarded a Guggenheim retrospective for his plagiaristic tendencies. Koons and Prince began their careers by stating upfront that they were appropriating and intentionally “unoriginal,” whereas Frey and Leroy—even after they were caught—were still passing their works off as authentic, sincere, and personal statements to an audience clearly craving such qualities in literature. The ensuing dance is comical. In Frey’s case, Random House was sued and forced to pay millions of dollars to readers who felt deceived. Subsequent printings of the book now include a disclaimer informing readers that what they are about to read is, in fact, a work of fiction.

Imagine all the pains that could have been avoided had Frey or Leroy taken a Koonsian tack from the outset and admitted their strategy was one of embellishment, with dashes of inauthenticity, falseness, and unoriginality thrown in. But no. Nearly a century ago, the art world put to rest conventional notions of originality and replication with the gestures of Marcel Duchamp’s readymades, Francis Picabia’s mechanical drawings, and Walter Benjamin’s oft-quoted essay “The Work of Art in the Age of Mechanical Reproduction.” Since then, a parade of blue-chip artists from Andy Warhol to Matthew Barney has taken these ideas to new levels, resulting in terribly complex ideas about identity, media, and culture. These, of course, have become part and parcel of mainstream art world discourse, to the point where counterreactions based on sincerity and representation have emerged. Similarly, in music, sampling—entire tracks constructed from other tracks—has become commonplace. From Napster to gaming, from karaoke to torrent files, the culture appears to be embracing the digital and all the complexity it entails—with the exception of writing, which is still mostly wedded to promoting an authentic and stable identity at all costs.

I’m not saying that such writing should be discarded: Who hasn’t been moved by a great memoir? But I’m sensing that literature—infinite in its potential range of expressions—is in a rut, tending to hit the same note again and again, confining itself to the narrowest of spectrums, resulting in a practice that has fallen out of step and is unable to take part in arguably the most vital and exciting cultural discourses of our time. I find this to be a profoundly sad moment—and a great lost opportunity for literary creativity to revitalize itself in ways it hasn’t imagined.

Perhaps one reason writing is stuck might be the way creative writing is taught. In regard to the many sophisticated ideas concerning media, identity, and sampling developed over the past century, books about how to be a creative writer have completely missed the boat, relying on clichéd notions of what it means to be “creative.” These books are peppered with advice like “A creative writer is an explorer, a ground-breaker. Creative writing allows you to chart your own course and boldly go where no one has gone before.” Or, ignoring giants like de Certeau, Cage, and Warhol, they suggest that “creative writing is liberation from the constraints of everyday life.” In the early part of the twentieth century, Duchamp and composer Erik Satie both professed the desire to live without memory. For them, it was a way of being present to the wonders of the everyday. Yet it seems every book on creative writing insists that “memory is often the primary source of imaginative experience.” The how-to sections of these books strike me as terribly unsophisticated, generally coercing us to prioritize the theatrical over the mundane as the basis for our writing: “Using the first-person point of view, explain how a 55-year-old man feels on his wedding day. It is his first marriage.” I prefer the ideas of Gertrude Stein who, writing in the third person, tells of her dissatisfaction with such techniques: “She experimented with everything in trying to describe. She tried a bit inventing words but she soon gave that up. The english language was her medium and with the english language the task was to be achieved, the problem solved. The use of fabricated words offended her, it was an escape into imitative emotionalism.”

For the past several years, I’ve taught a class at the University of Pennsylvania called “Uncreative Writing.” In it, students are penalized for showing any shred of originality and creativity. Instead, they are rewarded for plagiarism, identity theft, repurposing papers, patch-writing, sampling, plundering, and stealing. Not surprisingly, they thrive. Suddenly, what they’ve surreptitiously become expert at is brought out into the open and explored in a safe environment, reframed in terms of responsibility instead of recklessness.

We retype documents and transcribe audio clips. We make small changes to Wikipedia pages (changing an a to an an or inserting an extra space between words). We hold classes in chat rooms, and entire semesters are spent exclusively in Second Life. Each semester, for their final paper, I have them purchase a term paper from an online paper mill and sign their name to it, surely the most forbidden act in all of academia. Each student then must get up and present the paper to the class as if they wrote it themselves, defending it from attacks by the other students. What paper did they choose? Is it possible to defend something you didn’t write? Something, perhaps, you don’t agree with? Convince us. All this, of course, is technology-driven. When the students arrive in class, they are told that they must have their laptops open and connected. And so we have a glimpse into the future. And after seeing the spectacular results of this—how completely engaged and democratic the classroom is—I am convinced that I can never go back to a traditional classroom pedagogy. I learn more from them than they can ever learn from me. The role of the professor now is part party host, part traffic cop, full-time enabler.

The secret: the suppression of self-expression is impossible. Even when we do something as seemingly “uncreative” as retyping a few pages, we express ourselves in a variety of ways. The act of choosing and reframing tells us as much about ourselves as our story about our mother’s cancer operation. It’s just that we’ve never been taught to value such choices. After a semester of having her “creativity” forcibly suppressed—of being made to plagiarize and transcribe—a student will approach me with a sad face at the end of the term, telling me how disappointed she was because, in fact, what we had accomplished was not uncreative at all; by not being “creative,” she produced the most creative body of writing in her life. By taking an opposite approach to creativity—the most trite, overused, and ill-defined concept in a writer’s training—she had emerged renewed and rejuvenated, on fire and in love again with writing.

Having worked in advertising for many years as a “creative director,” I can tell you that, despite what cultural pundits might say, creativity—as it’s been defined by our culture, with its endless parade of formulaic novels, memoirs, and films—is the thing to flee from, not only as a member of the “creative class” but also as a member of the “artistic class.” Living when technology is changing the rules of the game in every aspect of our lives, it’s time to question and tear down such clichés and lay them out on the floor in front of us, then reconstruct these smoldering embers into something new, something contemporary, something—finally—relevant.

Clearly, not everyone agrees. Recently, after I finished giving a lecture at an Ivy League university, an elderly, well-known poet, steeped in the modernist tradition, stood up in the back of the auditorium and, wagging his finger at me, accused me of nihilism and of robbing poetry of its joy. He upbraided me for knocking the foundation out from under the most hallowed of grounds, then tore into me with a line of questioning I’ve heard many times before: If everything can be transcribed and then presented as literature, then what makes one work better than another? If it’s a matter of simply cutting and pasting the entire Internet into a Microsoft Word document, where does it end? Once we begin to accept all language as poetry by mere reframing, don’t we risk throwing any semblance of judgment and quality out the window? What happens to notions of authorship? How are careers and canons established, and, subsequently, how are they to be evaluated? Are we simply reenacting the death of the author, a figure such theories failed to kill the first time around? Will all texts in the future be authorless and nameless, written by machines for machines? Is the future of literature reducible to mere code?

Valid concerns, I think, for a man who emerged from the battles of the twentieth century victorious. The challenges to his generation were just as formidable. How did they convince traditionalists that disjunctive uses of language, conveyed by exploded syntax and compound words, could be as expressive of human emotion as time-tested methods? Or that a story need not be told as strict narrative in order to convey its own logic and sense? And yet, against all odds, they persevered.

The twenty-first century, with its queries so different from those of the last, finds me responding from another angle. If it’s a matter of simply cutting and pasting the entire Internet into a Microsoft Word document, then what becomes important is what you—the author—decide to choose. Success lies in knowing what to include and—more important—what to leave out. If all language can be transformed into poetry by merely reframing—an exciting possibility—then she who reframes words in the most charged and convincing way will be judged the best. I agree that the moment we throw judgment and quality out the window we’re in trouble. Democracy is fine for YouTube, but it’s generally a recipe for disaster when it comes to art. While all words may be created equal—and thus treated—the way in which they’re assembled isn’t; it’s impossible to suspend judgment, and folly to dismiss quality. Mimesis and replication don’t eradicate authorship; rather, they simply place new demands on authors, who must take these new conditions into account as part and parcel of the landscape when conceiving of a work of art: if you don’t want it copied, don’t put it online.

Careers and canons won’t be established in traditional ways. I’m not so sure that we’ll still have careers in the same way we used to. Literary works might function the same way that memes do today on the Web, spreading like wildfire for a short period, often unsigned and unauthored, only to be supplanted by the next ripple. While the author won’t die, we might begin to view authorship in a more conceptual way: perhaps the best authors of the future will be ones who can write the best programs with which to manipulate, parse, and distribute language-based practices. Even if, as Bök claims, poetry in the future will be written by machines for other machines to read, there will be, for the foreseeable future, someone behind the curtain inventing those drones; so even if literature is reducible to mere code—an intriguing idea—the smartest minds behind those machines will be considered our greatest authors.

In 1959 the poet and artist Brion Gysin claimed that writing was fifty years behind painting. And he might still be right: in the art world, since impressionism, the avant-garde has been the mainstream. Innovation and risk taking have been consistently rewarded. But, in spite of the successes of modernism, literature has remained on two parallel tracks, the mainstream and the avant-garde, with the two rarely intersecting. Yet the conditions of digital culture have unexpectedly forced a collision, scrambling the once-sure footing of both camps. Suddenly, we all find ourselves in the same boat grappling with new questions concerning authorship, originality, and the way meaning is forged.