The Search for Posthumanism

The idea that we can run out of time is peculiar. It’s a product of how we organize our memories.

Human consciousness is a kind of romance with the idea that time is finite and consumable. This assumption of finitude means that time can also become a digested and metabolized urge, energizing the desire to imagine what is coming next. Because we can organize the past into a semicoherent system, we extrapolate forward and read ourselves into a specific future. We make predictions: Moore’s Law tells us the number of transistors we can fit on a chip doubles roughly every two years, even as the cost of computation falls. Polling reminds us that the United States prefers to re-elect its presidents during wartime. The Super Bowl favorite wins three out of four times. It has been written, and so it shall come to pass.

In the opening keynote of the Singularity Summit, Ray Kurzweil, inventor, writer, and immortalist, spoke about the looming end of prognostication. By his best estimate, the Singularity — the moment when our predictive mechanisms are overwhelmed by superintelligent computers that surpass the understanding of any one person — will happen in 2029. This will wipe clean all the fantasies and modeled futures we made for ourselves. Our ability to predict our personal destiny will vanish; in its place we will have the strange sensation of falling through the floor of our own life.

The Singularity tells us that the future is not a truth we can discover, but merely a theater for our private melodramas. Mom and dad are going to die. I’m never going to be an astronaut. Oh my god.

* * *

After his presentation, Stephen Wolfram, the theoretical physicist and creator of Wolfram|Alpha — an internet-based project to translate all knowledge into computations — accepted questions from the audience. One came from a man who identified himself as belonging to an organization called Saving Humanity from Homo Sapiens. “I almost can’t phrase my question,” the man said, stammering into the microphone.

This is the definitive statement of posthumanism: I no longer know what the question is.

* * *

Humans are the only species that can transcend the limits of their own biology, Kurzweil pointed out. But will this tinkerer’s transcendence change us in essence? Are we on the verge of a self-engineered evolution?

In How We Became Posthuman: Virtual Bodies in Cybernetics, Literature, and Informatics, N. Katherine Hayles defines posthumanism through reflexivity: “the movement whereby that which has been used to generate a system is made, through a changed perspective, to become part of the system it generates.” Homo sapiens are subsumed by their own complex systems, written to run on increasingly powerful computers. In accordance with Moore’s Law, as these computers shrink in size, their ability to communicate with one another independently of us grows. They will both move within us and manipulate the conditions outside us.

* * *

John Searle famously criticized the idea that a computer running the right program could ever truly understand anything with his Chinese Room argument.

Imagine a native English speaker who knows no Chinese locked in a room full of boxes of Chinese symbols (a data base) together with a book of instructions for manipulating the symbols (the program). Imagine that people outside the room send in other Chinese symbols which, unknown to the person in the room, are questions in Chinese (the input). And imagine that by following the instructions in the program the man in the room is able to pass out Chinese symbols which are correct answers to the questions (the output). The program enables the person in the room to pass the Turing Test for understanding Chinese but he does not understand a word of Chinese.

In Searle’s view, computers rely on syntax. They can order and arrange symbols, but only humans truly understand what those symbols mean. Human minds operate in semantics: we can both order symbols and interpret their meaning, connecting them to a billowing curtain of sensorial, emotional, and academic associations.

By swapping “neurotransmitter” for “computer” in the Chinese Room argument, Kurzweil produced what he considers a powerful argument for why the human brain can’t understand anything either. “These are all assumptions,” he said. “They’re leaps of faith.” One could argue that Searle’s concept of semantics is only syntax operating on a macro scale: interpreting a symbol is a matter of checking it against all the possible meanings it might have and then assuming the most probable one, given the circumstances, is true. Our body, then, is not a machine but a vast network of machines: skin, eyes, eardrums, noses, muscles, stomachs, lungs, and more.
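If that is right, the interpretive move can be sketched in a few lines. What follows is only a rough illustration, with an invented sense table and made-up probabilities rather than any real disambiguation system; it simply picks a symbol’s most probable meaning given its context:

```python
# A toy illustration of "semantics as syntax at scale": interpret a symbol by
# checking it against its possible meanings and taking the most probable one
# given the surrounding context. The senses and probabilities are invented.

SENSES = {
    "bank": {
        "river": {"edge of a river": 0.9, "financial institution": 0.1},
        "money": {"edge of a river": 0.05, "financial institution": 0.95},
    }
}

def interpret(symbol, context_word):
    """Return the most probable sense of `symbol` given one context word."""
    candidates = SENSES.get(symbol, {}).get(context_word)
    if not candidates:
        return symbol  # nothing in the table: fall back to the bare symbol
    return max(candidates, key=candidates.get)

print(interpret("bank", "river"))   # edge of a river
print(interpret("bank", "money"))   # financial institution
```

Whether a procedure like that counts as understanding is, of course, exactly the question Searle and Kurzweil are contesting.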

The brain must understand all these systems. Its work trickles upward into our conscious lives as strangely divergent phenomena: memory, posture, sneezing, hormones, hunger, dreams, breath.

The brain is also easy to trick. Surround one color with certain other colors, and we’ll see it as brown. Remove the surrounding colors and it becomes red. Should we commemorate this semantic vulnerability in our computers, reminding them of how easily and often we get things wrong?

* * *

The advances of the three decades since Searle first made his argument have brought us closer to a point where computer intelligence can mirror human intelligence. This year, IBM’s Watson — a computer behemoth consisting of more than 2,800 processor cores capable of processing 500 gigabytes of data per second — won a game of Jeopardy! by a factor of three against two of the quiz show’s long-running champions. Watson’s most remarkable quality was its ability to combine syntactic and semantic intelligence. “That’s the exciting thing about Watson,” Kurzweil said. “It got the information in the same way a human would.”

Some conspiracy theorists believe that Watson just queried the internet for answers, which isn’t a terrible analogue for how humans retrieve dormant memories from their brains, following the neural pathways that open quickest and most responsively when asked who was president during the War of 1812 or what the symbol for strontium is. At the Singularity Summit, David Ferrucci, the lead manager of the Watson project for IBM, described the supercomputer as a kind of reverse magnifying glass, capable of extrapolating a reasonable assumption of meaning from a large body of heterogeneous data. It had several different kinds of intelligence systems working in concert with one another to parse humor, wordplay, ambiguity, and more.

Ferrucci and his team didn’t begin building Watson by trying to model human intelligence in general; they only wanted to build a computer that could answer a Jeopardy! question. Yet strong parallels between the brain and computers emerged the further they progressed, blurring the distinction between systemic intelligence and consciousness. Watson’s achievements show that computers can think in ways that satisfy both the syntactic and semantic conditions of Searle’s Chinese Room argument. Taken as a starting point, the process that created Watson’s intelligence can be thought of as a fertilized human egg beginning a directed process of cell reproduction. The silicon zygote is not yet conscious, but it is replicating itself.

* * *

Treating the human body as a series of overlapping but discrete intelligences has dramatically extended human life expectancy. Sonia Arrison, a senior fellow at the Pacific Research Institute, argued that the first person who will live to be 150 years old has probably already been born. Today, many of our presumptions about age are being challenged, whether it’s the 70-year-old woman who gave birth or the 100-year-old Indian man who ran a marathon. The average life expectancy in Monaco, among the wealthiest countries in the world, is nearly 90.

In the past century, we started vaccinating infants, developed safe motherhood techniques, learned more about nutrition, and established hygiene standards to keep at bay pathogens that can overwhelm still-developing immune systems. As we did, life expectancy increased, revealing new categories in a person’s life span. We learned to luxuriate in the adolescent wonders of first kisses, notebook poetry, and junior-varsity sports dramas before choosing a careerist yoke to tie across our shoulders. Today we have discovered yet another middle ground between adolescence and career, a period of wandering and experimentation in one’s 20s, part social entrepreneur, part artist, and part servant of the absurd, a condition we allow ourselves to sneer at when formulated as the slur hipster.

* * *

One of the correlates of increased longevity is a growing preoccupation with religion. The longer we live, the more time we have to ask why. And the more we ask, the less we understand. So we create calming myths about the insistently unknowable material of our lives.

We misapprehend religion when we consider it an attempt to burrow into the history of the universe and learn the secrets of creation. It is a forward-facing romance of the emerging metasystems we are merging with, be they subatomic or cosmological.

* * *

There is a special tendency among humans to mistrust the systems they create, to view them as inferior because they lack consciousness. Humans are innately prejudiced to assume the things we observe in the universe are simple while we ourselves are irreducibly complex, Wolfram claimed. He argues that it is computation, not consciousness, that is irreducible. This can be seen in one of his pivotal discoveries, cryptically known as Rule 30. Rule 30, a one-dimensional cellular automaton in which each cell’s next state depends only on its own state and those of its two immediate neighbors, shows that unexpected sophistication very often comes from simple rules. Rule 30 has surprising analogues in nature: when implemented, it produces a pattern similar to the one found on the shell of the sea snail Conus textile.
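The rule itself fits in a few lines of code. The sketch below assumes the standard encoding of Rule 30, in which each cell’s next state is its left neighbor XOR (itself OR its right neighbor); the grid width, number of steps, and wrap-around edges are arbitrary simplifications. Started from a single live cell, it prints the familiar chaotic triangle:

```python
# A minimal sketch of Rule 30, the one-dimensional cellular automaton Wolfram
# describes. Each cell looks at itself and its two neighbors; the next state
# is left XOR (center OR right). Width, step count, and wrapped edges are
# arbitrary simplifications for illustration.

WIDTH, STEPS = 61, 30

row = [0] * WIDTH
row[WIDTH // 2] = 1  # begin with a single live cell in the middle

for _ in range(STEPS):
    print("".join("#" if cell else " " for cell in row))
    row = [
        row[(i - 1) % WIDTH] ^ (row[i] | row[(i + 1) % WIDTH])
        for i in range(WIDTH)
    ]
```

Nothing in those few lines anticipates the turbulence they produce, which is Wolfram’s point.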

Software bugs are another example, Wolfram says: simple lines of code with far-reaching and complex effects on systems that their creators cannot predict in advance. Bugs are not mistakes but forerunners from unconsidered systems that sometimes intrude into the theoretical vacuum.

The idea of bugs is often traced back to the Mark II, a computer built at Harvard in the 1940s under the direction of the U.S. Navy. During a malfunction, one of the researchers found a moth trapped in one of the computer’s relays. The story is told as an origin legend, but that part is apocryphal. The moth was worth commemorating not because it offered a new way of describing dysfunction, but because the word bug had already been in use for decades. Here, finally, was a literal example of it.

Thomas Edison was among the first to use the word bug to mean a spectral error, more than 60 years before the Mark II story. No one expected his glib euphemism to be literal, until it became literal. Given enough time, our metaphors transform into truth, miracles of random chance, which dim the history that precedes them.

* * *

It’s believed that as dogs split off from wolves and became domesticated, they learned to mirror human body language to communicate with us. We tilt our heads, they tilt their heads. We hold out our hand, they hold out their paw. When they get it right, they get a treat, and the cloud of interpretative anxiety departs.

Years from now, what we think of as computers will look on our efforts to work out logic problems with the same paternalistic appreciation we feel when dogs stop to inspect a promising pile of trash on the sidewalk, hoping to find in it something meaty. Dogs never learn the answers to their instinctive questions of hunger. They can’t solve their pack hierarchy, their mating compulsions. Yet they find a partner-owner who spares them from grappling with such questions. The dog is happy to have us, and our lives are happily enriched by the dog. We remind one another that, no matter how carried away by the mysteries of thought we become, we are enriched by the connection between two otherwise disparate systems of living.

* * *

We’ve already begun to taxi to the gate in a strange new afterworld. Surrounded by systems that work by seemingly magical properties, whose operations are known only to a small number of pedestaled polymaths — who knows how the thick miles of cable buried underground transmit the laughter of loved ones in a video-chat session? Or how satellites translate the movement of particles through space into television shows? What guides the antagonism of traffic lights? The alchemical conjuring of fossil fuels that give them life? Does anyone really grasp the concept of a million people? A statistical placeholder stitching together a question and answer across the deficiencies of our brains — a means by which we agree to simply not think about it anymore. Let us spend our time on something else.

* * *

When our lives are no longer overburdened with the need to find patterns in the past to build future expectations, and we have passed the existential baton to entities better equipped to deal with problems of four-dimensionality, cosmological order, and moral administration of societies and markets, will we not recede again into humanism?

And were our brains capable of understanding such issues, would we cease to be human? Is the “human” anything more substantial than a question mark set in motion across time and space? These questions have a way of running away with themselves. They have a mind of their own, it sometimes seems.

“And we should stop here,” Wolfram said, abruptly realizing he had spent all his allotted time, leaving behind what appeared to be a great, unordered mess, a mountain of starry cryptograms, its distant summit our future, which every additional question moves further and further away.