Trespassing Horizons

Reflections on cybernetics as an ideology of organized oblivion

So weary from the bars swept endlessly across it,
his gaze holds 
nothing more. For him ten thousand
bars exist and behind the bars, no world.
--Rainer Maria Rilke,
“The Panther” (translated by H. Bolin)


Our friend Sandra grew up near Frankfurt, Germany, and came of age in its antinuclear movements. She was involved in radical struggles from the time of her youth in the ’80s. A committed anti-fascist and anti-capitalist, she was never satisfied with the positions, habits, and limitations of the many single-issue struggles within the leftist scenes she encountered over her many years living in squatted houses in Berlin. She interrogated supposed certainties and broke open scene categories that were defined far too narrowly. As a social revolutionary acting internationally, she connected many left-wing struggles, never letting her efforts be reduced to her longstanding passion for the (militant) anti-nuclear movement. Her comprehensive criticism of technology and the idea of progress led to an investigation of cybernetics and to critical-philosophical reflections on the fundamental relationship between man and nature.

Sandra left Berlin late in her life for the plateaus of the French countryside. While she was writing the following text, her health worsened. As a result, the text remains incomplete, or interrupted; in other words, it was not possible to resolve all of its textual uncertainties. These departing words on the fate of the future, some dictated to a friend in her final moments of lucidity, fulfill her wish to share her thoughts with friends one final time. Unfortunately, she died of cancer much too early, in June 2019. The following text was translated from the original German into English.
— H. Bolin, translator

Why is it always so difficult to discuss the newest technological developments without listing the signs of the coming end of the world or seeing progress as fixed inevitably along a linear course? It is alarming that the question of what a better life would be has been forgotten, despite the fact that we live in an obviously destructive system, whose only future is no future. How can we shake off our deep-rooted fear of the horizon’s edge and strive to make the future turn out differently?

Like children, we are tossed back and forth between facts and fakes. We think of ourselves as geniuses in one instant and as fools in the next. We are stubborn in our defense of ourselves as autonomous individuals who use tools rationally to build our own little worlds, yet who are deeply concerned about the prospect of machines becoming smarter than their creators. We are faced with a new normality that overwhelms even the most enthusiastic smart-asses.

At some point in history, Western culture began to equate time with chronology. Time was conceived as a path that led in a straight line from the past up to the present and into the future. During the industrial revolution in the 19th century, a period characterized by enthusiasm for progress and the glorification of modernity, people had great hopes for the future. The future was interpreted as an improvement on the present. After the First World War put an end to belief in the final triumph of reason, and after the Second World War, with its thoroughly technologized disasters, made belief in progress impossible and the last of the old gods retreated, the technocrats were the only ones left to determine the order of things. Today we appear to have by and large inured ourselves to a series of their fundamental assumptions, which paralyze us. This leaves all of us — both critics and protagonists of the new, intelligent world — seemingly incapable of thinking about the world in anything other than a quantifying logic, incapable of not translating everything into numbers ourselves.

During the cold war, when all political debates in the West were drowned in the either/or of anti-communism, all technocrats were unanimous in recognizing that liberty was only to be found in proposals that distanced themselves from political positions and proclaimed themselves to be free from ideology. During this period, Norbert Wiener and his circle distilled the principles of cybernetics while working on predictive calculations for an anti-aircraft fire-control program. Cybernetics became a model for how to steer Western democratic societies and was integrated into the administration of the economy — the two increasingly becoming the same.

Cybernetics, having weathered an intense public debate between the 1950s and 1970s consisting of everything from celebrations of a calculable future to thoughtful criticisms that anticipated the coming technocratic regime, tightened its reins in the years that followed, but it did so quietly. In the discussions of automation in the 1980s, some of cybernetics’ foundational implications began to disappear from view. Awareness of these implications only reemerged once the Internet and its extensive virtual reality made them tangible again. Suddenly people noticed that something was missing in their lives. But what? If we want to avoid simply blaming that lack on the artificially created fetish for technology, or merely staring at the problem the way a rabbit stares at a snake, then we must look for an answer along the path traversed. We must investigate the chosen direction and the concrete steps; we must probe what has-become [das Gewordensein] for its ideas and its development. Since directed self-organization is the foundational principle of cybernetics, the first step is to differentiate between direction and leadership. This text is an attempt to demonstrate this difference by engaging in a debate that took place before World War II but implicitly anticipated its end.

Six years before the first Macy Conference (a series of conferences beginning in 1946 in New York City that aimed to build a common language among different scientific disciplines and to make the concept of cybernetics socially acceptable), an interdisciplinary symposium called the Conference on Science, Philosophy and Religion took place in New York. The debate I am interested in took place at this conference. It was organized by nine social scientists who were part of an entourage around the conservative thinker Louis Finkelstein. The group held its first symposium in 1940 as what they called “an effort to face the crisis in our culture by an experiment in corporate thinking.” Finkelstein published a piece called “The Aims of the Conference” in which he claimed the kernel of this crisis was to be found in an “intellectual confusion” that was just as pressing as, if not more pressing than, “the totalitarian way of life [that] is rapidly spreading through the world, to the imminent peril of civilization.” The conference’s stated goals can be understood as an early version of contemporary thinking about extremism, in which the threat to democracy is falsely attributed to its critics. Despite their vague formulation, it is clear that it was not the fascists whom they feared the most. Finkelstein continued:

For more than two decades public opinion has been subjected to the powerful propaganda of a few articulate opponents of our democratic institutions. Their influence . . . has been enhanced by the temper of the general literate public, and by the number of writers and teachers whose self-defeating skepticism has played directly into the hands of the totalitarians.

The idea of realizing “a unity in thought and effort . . . to build more secure foundations for democracy” amounts to an open debate in a closed circle, since the concept of democracy was equated with the American way of life. Even if its influence shouldn’t be overestimated and even if there are no direct references to economic aspects, the conference was a breeding ground for the invisible hand that in turn steers a self-regulating unit. The famous anthropologist Margaret Mead coined the term “second-order cybernetics” in 1968 to describe this process. The term was then taken up by her research companion Heinz von Foerster, who in his late creative phase, in the 1980s, used his cybernetic jargon to glorify the conduct of managers, as neoliberalism blossomed wildly on the breeding ground of technocratic principles of self-regulation.

Heinz von Foerster was an emblematic technocratic chameleon: during the Second World War he developed radar for the Nazis, only to change political flags later and sail to America. He carried out research assignments for the American army until the end of the 1960s, and subsequently supported critics of the military and counterculture figures in their demands for more self-regulation and autonomy. Last but not least, he advised management schools. While Heinz von Foerster was busy peddling the principles of cybernetics, Margaret Mead quietly devoted herself to injecting the anthropological knowledge she had acquired in her fieldwork into the CIA’s colonial logic.

Margaret Mead: Humility in the Face of the Cultural Forces

A follow-up to the aforementioned symposium was organized in 1941. This time Margaret Mead was invited to speak. In her contribution titled “The Comparative Study of Culture and the Purposive Cultivation of Democratic Values,” she explored how her anthropological position could “affirm a faith and attempt to lead a whole civilization in a given direction.” The faith to be affirmed here refers to cultural relativity. Influential in North America, this was the belief that “every item of cultural behavior [should] be seen as relative to the culture of which it is a part.” Her emphasis on the “systematic interrelationship of different cultural elements” can be understood as an attempt to deal with the social tensions at hand by shifting the focus from the incompatible goals of different components to their problematic relationship, so as to then weigh the costs and benefits of possible decisions. Margaret Mead’s example of choice — “the question of compulsory sterilization of the unfit, seen as a measure to save the community the expense and social waste of a large subnormal population” — prevents us from assuming that she had open-minded and peaceful views.

After examining the anthropologist’s role “of relating disparate items to whole systems, utilizing the comparison of one system with another . . . issuing warnings and pointing out the implications of various changes or trends . . . to lead and change [them] in the direction of greater democracy,” Mead focused on the main point of conflict:

If however we push the question one step further and say: “We have established the direction in which we want to move. Now you social scientists, specialists in culture, tell us how to get there. You implement our spiritual program for us!” have we then reached a point at which freedom of the individual will and scientific procedure clash? Does not the implementation of a defined direction call for control, and does not control by its very existence invalidate democracy?

Mead then pointed out that merely enforcing obedience is insufficient for generating individual moral responsibility among the population. In order to bypass this problem, scientists must first reflect on their own situation as planners and executors. They are part of the whole. And further:

The dilemma which must be squarely faced . . . is that implementation can never take the form of finished blueprints of the future, but must involve direction, an orientation of the culture in a direction in which new individuals, reared under the first impetus of this direction, can, and will, take us further.

Shifting the focus from static result to dynamic process corresponded to what Mead noticed in the seeming impossibility “of envisaging the end toward which the scientist is setting that process in motion.” However, this did not reduce the scientist’s determination to “control the processes of the peculiar nature of his own culture.” It was not on ethical grounds that Mead ruled out “a finished blueprint of an absolute desirable way of life” achieved by “the ruthless manipulation of human beings.” Given culture’s inevitable dependence on its transmission through generations, direct manipulation proves to be dysfunctional, since “the victims of such a process become progressively more apathetic, passive, and lacking in spontaneity. The leaders become progressively more paranoid.”

Today we recognize her thoughts in our reality. But, as always, things have developed in a slightly different way. What we are experiencing is a form of control whose direction is almost invisible, yet which is still carried out with the most outrageous methods of manipulation, producing precisely the narrowing of the horizon described above. Perhaps this has only become clear to us because these steering methods have been integrated into social engineering more generally.

However, it seems that the full consequences of Mead’s thoughts were not understood during her time. Even her husband — the same Gregory Bateson who later became known for using the cyclical models of cybernetics to understand the functioning of reason and the psyche, which in turn would form the basis of self-directed therapy à la R. D. Laing, the Scottish psychiatrist — interpreted the strategic conclusion she drew from multigenerationality as a weakness of her thought. Mead’s considerations can be summarized in a triad: First, feedback loops function over generations, making any planning in the present difficult. Second, this means that one individual can never fulfill the overall plan (represented by the finished blueprint), regardless of their determination and influence. And third, realizing this plan in practice then does not require manipulation of the individual but rather manipulation of the conditions that determine their potential for conforming to the desired model.

Gregory Bateson: Answer to an Unasked Question

In his commentary on Mead’s conference paper published in Science, Philosophy and Religion: Second Symposium, Gregory Bateson, an anthropologist himself, stated that the conflict at hand was one between democratic and instrumental motives, a “life-or-death struggle over the role which the social sciences shall play in the ordering of human relationships.” In this comment, Bateson anticipated the integration of science into politics that profoundly shaped the cold-war era. However, the way he redefines Mead’s argument so as to completely ignore the intergenerational aspects of time she developed almost seems cliché, and results in a completely different conclusion:

She states perfectly clearly that this new shift in the emphasis or Gestalt of our thinking will be a setting forth into uncharted waters. We cannot know what manner of human beings will result from such a course . . . . Dr. Mead can only tell us that if we proceed on the course which would seem most natural . . . we shall surely hit a rock. She has charted the rock for us, and advises that we embark on a course which does not lead to the rock; but in a new, still uncharted direction.

While he takes up the “different ways of perceiving sequences of behavior,” he reinserts them into the psychological framework of individual learning. The temporal horizon is reduced to one generation, to the question of how today’s children should behave as adults and what we can teach them today, in the early 1940s. It is striking that Bateson’s ideas echo the kind of thinking that emerged from the laboratories of psychologists who studied learning in dogs and pigeons, despite his declared distance from behaviorism. The individual is put at the center while the environment is ignored. By implicitly letting the audience forget the problem of cultural integration that underlay her proposal in the first place, he reduces the scope of Mead’s argument.

Bateson refers to Mead when he distinguishes between “social engineering,” manipulating people in order to enact the blueprint of a planned society, and the ideals of democracy, the “supreme worth and moral responsibility of the individual human person.” Not only does this sharp contrast give the impression that Mead’s contribution should be read in a predominantly moral manner, it changes the argument’s function. While she uses it to orient herself toward a process and not a goal, he uses her argument to link social engineering (a phrase that she had never even used) to direct manipulation. He immediately resolves the ethical difficulties. Manipulative techniques already exist, and the best way to prevent them from being abused is to reserve the rights to these techniques for people with noble intentions:

Are we to reserve the techniques and the right to manipulate people as the privilege of a few planning, goal-oriented, and power-hungry individuals, to whom the instrumentality of science makes a natural appeal?

Bateson uses the example of National Socialism to talk about manipulation, and his idea of charting “the mind’s habits” in order to protect people from abuse must be understood in this context. Apparently, it is better for scientists to evaluate and steer human actions based on their seemingly rational principles. Who would then second-guess the need “to get something better than a random list of habits”?

Dr. Mead tells us to sail into yet uncharted waters, adopting a new habit of thought; but if we knew how this habit is related to others, we might be able to judge the benefits and dangers, the possible pitfalls of such a course. Such a chart might provide us with the answers to some of the questions which Dr. Mead raises. . . . the cardinal points, if you like—upon which the final classification must be built.

Isn’t the proposal for a final classification exactly the opposite of what Mead considered possible? In my reading, Bateson tried to negate the actual foundation of her argument, where she writes: “the realization that were the world we dream of attained, members of that new world would be so different from ourselves that they would no longer value it in the same terms in which we now desire it.” In the end, his proposal was not merely to find out about habits, how they are learned, and in what kinds of cultures they are formed. He chose a more proactive approach instead:

Inversely, we may be able to get a more definite—more operational—definition of such habits as “free will” if we ask ourselves, “What sort of experimental learning context would we devise in order to inculcate this habit?” “How would we rig the maze or problem-box so that the anthropomorphic rat shall obtain a repeated and reinforced impression of his own free-will?”

What is announced here folds back upon itself: After Mead had raised the question of how to deal with the fine line between cultivating values and manipulating society, Bateson rejected her analysis, adjusted the focus to the process itself — in order to then draw up a map of the desired output with guidelines on how this process could be operatively controlled.

First Conclusions: The Machine that Inhabits the Machine as Its Spirit

From our situation today, we tend to overestimate the technical side of cybernetics by locating its foundations in the robot dreams that transformed into the web’s reality. In doing so, we lose sight of the fact that in the phase in which cybernetics took shape, aspects of the future society’s plannability were considered to be equally important. Rather than offering a clean academic analysis of the genealogy of cybernetics or dividing history into a pre- and post-, considering only the in-between as essential, I have chosen two texts to illustrate the methods of cybernetics.

One of cybernetics’ means is the attempt to orient current efforts in the direction of a future goal. By now, cybernetics has completely permeated our mental and material structures, and has done so not least under the guise of management consultancy. This method became known to us under the name of futurology, or futures research. It was developed as a means of arms control at a time when nuclear war was a real threat; shortly thereafter, it was refined into a research instrument of social engineering, used to avert capitalist crises. Some of the conclusions drawn from cold-war hysteria were put to this end: as early as 1961, Louis Armand and Michel Drancourt predicted that after the second phase of the industrial revolution, which promised an era of abundance, ideologies would become as obsolete as the economic and political structures of their time.

According to Herman Kahn, perhaps futurology’s most influential and repulsive figure, the aim of futurology was to produce a “projection free of contradiction.” This was not enough for the French novelist and politician Jacques de Bourbon-Busset, who wrote in an article called “Réflexions sur l’attitude prospective” that he did “not want to predict a probable future, but to prepare a desirable future, and maybe to go even further: to make efforts to render that desirable future probable.” This is the point at which Bateson’s proposed focus on practical application becomes a reality in the common understanding of futurologists. Their task is neither to explain social contradictions nor to suggest solutions, but only to show how these problems can be overtaken by events. It concerns how facts can be produced in the first place and then presented as facts.

Hasan Özbekhan, planning director of the System Development Corporation, had a vision of using databases to calculate future situations, and thus of “constructing anticipations by manipulating future situations backwards, in order to see . . . which changes must be undertaken to achieve the anticipation.” As he claims in The Idea of a “Look-Out” Institution, this would turn futurology into “a planning method which uses the future as an operational tool . . . to put into motion the conceptualized future.”

Futurologists were often people with good intentions who wanted to make wars less likely — or at least less brutal. They believed that capitalism would change its course as soon as its dynamics of plundering were understood. While these intentions were soon forgotten, the logic and technologies remained. Forecasts are still used as tools to shape the present today. Perhaps futurologists were incapable of foreseeing how comprehensive their transformation of values would become. As true futurologists, they were already living in the blind spot they helped create by not recognizing that the “end of ideologies” is an impossibility for humans. This goal cannot be achieved, only forgotten, since the absence of values is also a value. If the machine is not controlled by humans, the machine itself becomes the direction — and as a consequence the desire to survive takes a counterproductive turn. In my opinion, in order to find a more interesting way, we should allow ourselves something beyond an individual perspective — so as to not only stare at the black mirror but trespass it together.

Cybernetics not only reduces the world to its model but also shifts our perception so as to make any differentiation between world and world-model superfluous. Like panthers we pace back and forth behind bars, knowing that the world lies beyond them. In the fight for more autonomy, this differentiation between world and model is fundamental: without the thirst for going beyond, for the beauty of the universe — a thirst that arises from the sudden surprise of being alive amid a whole range of other living beings — we won’t be in a position to take the small step to the side that is necessary to get out and put our feet on solid ground. That is why I see cybernetics as an ideology of confusion, or worse, an ideology of organized oblivion. The basic problem is not steering, but our fear of trespassing the horizon and stepping into futures that are unknown.