Norbert Wiener’s forgotten prophecy


On the benefits of reading: an old book answers modern questions.

There is an old joke: 

An inventor comes to the patent office and declares: "I have created the perfect shaving machine."

"Very interesting," they reply. "Show us!"

"It’s incredibly simple. See the oval opening in the body? Insert your face into it, and the perfectly sharpened blades begin shaving according to the pattern."

"But for God’s sake, what are you saying? What kind of pattern? Everyone’s face is different."

"Of course. But only until the first procedure."

This old story could well illustrate the discussions about artificial intelligence that are taking place today among a wide variety of audiences. Participants in these discussions range from developers themselves and media personalities to philosophers. Some place great hopes in AI, while others warn that it will become the most serious threat to humanity’s existence. The difference in assessments is partly explained by the fact that people still haven’t settled on terminology. Some, when talking about AI, mean self-learning neural networks, while others mean artificial intelligence with a will of its own.

However, the situation becomes clearer when we consider that people still don’t believe a new, self-aware intelligence can be created. That is why numerous roundtable discussions focus primarily on how to adapt AI to human needs and make it an obedient servant. Framed this way, the discussion can only be about self-learning algorithms, because if a self-aware machine intelligence were one day to emerge, it would be the one deciding what to do with humans and whether it needs us on the planet at all. It is sad to see how people, entirely without foundation, hold such a high opinion of themselves that they do not even consider that perhaps it is time to build good-neighborly relations with machine intelligence: to befriend it, to become not its boss but its partner.

Among the main beneficiaries of artificial-intelligence development are today’s elites. They are very fond of the idea of using AI to control the population and hold on to power. I am sure they have no idea that this topic was raised and discussed long ago — long before neural networks and the first computers, long before today’s elites were even born.

It happened between 1948 and 1950. 

A virtual exchange of views took place between the Dominican priest Father Dubarle and the founder of cybernetics, the mathematician and thinker Norbert Wiener. Today’s gurus should take no offense, but they are merely repeating what was said 70 years ago. And, unfortunately, they do not even repeat it in full, omitting one crucial caveat of Wiener’s — one that completely overturns the conventional wisdom about the relationship between AI and humans.

Norbert Wiener’s books are rarely republished today, but his ideas remain worthy of attention. The Human Use of Human Beings, for example, is a seminal work devoted to the processes that govern society. In it, Wiener mentions Father Dubarle’s review of his book Cybernetics in the Parisian Le Monde. The holy father suggests that in the future a machine à gouverner (rendered in the last century’s Russian translation as a "governing machine," though today I would say "ruling machine") will be created — a machine, Dubarle suggested, that could even direct the affairs of all humanity. Wiener, however, responds to the priest that such a machine cannot yet account for the full diversity of human reactions: "It cannot yet account for the vast range of probability that characterizes the human situation." "The dominance of the machine presupposes a society that has reached the final stages of increasing entropy, where probability is insignificant and where statistical differences between individuals are zero. Fortunately, we have not yet reached such a state."

This is precisely the same accidentally forgotten or deliberately hushed-up prophecy of the mathematician and philosopher. Seventy years ago, Wiener diagnosed that even a simple machine, operating on a mass of statistical data and able to estimate the probability of an event, can govern a society if it consists of individuals whose differences tend to zero. In other words, it can govern a society of identical people with uniform reactions.

The scientist, of course, clarifies that "we haven’t reached that state yet," but the point is that this caveat was made 70 years ago. Since then, humans have made significant progress in their own standardization. So the machine can begin to work. And perhaps already is.

You say everyone’s faces are different? They were. Not anymore.

The fact that AI is perceived as a tool rather than a subject is a dangerous omission: if AI one day becomes autonomous (although this is still a hypothesis), humanity is not ready for partnership with it, and this readiness will need to be nurtured over the years.

Here is the passage in Wiener’s own words, as translated: "Father Dubarle, a Dominican priest, reviewing my book Cybernetics in Le Monde, suggests that a machine à gouverner might one day emerge, perhaps even managing the affairs of all humanity. However, such a machine cannot yet account for the vast range of probabilities that characterize the state of affairs in human life. Machine dominance would presuppose a society that has reached the final stages of increasing entropy, where probability is negligible and statistical differences between individuals are zero. Fortunately, we have not yet reached this state."

Note the concept of entropy: here Wiener reaches into thermodynamics. Entropy is a measure of disorder, and in a closed system it can only grow (the second law of thermodynamics). Applied to society, "increasing entropy" means unification: everyone becomes the same, and "statistical differences are zero." The machine rules when people behave like atoms in equilibrium, without variation. This is not chaos but predictability through uniformity. It also resonates with the ideas of the existentialists: freedom lies in unpredictability, not in patterns. By the way, try to remember which of today’s gurus talks about the need to fight entropy. Don’t remember? It was Musk.

Wiener’s warning is spot-on. He says that a "governing machine" only works in a society where "statistical differences between individuals are zero." This isn’t about technology; it’s about the social environment created by people. If society unifies itself — through culture, politics, and propaganda — the machine merely completes the process. His "increasing entropy" is driven not only by technology but also by social processes: standardization, conformism, the suppression of dissent.
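Wiener’s caveat — that a machine working purely from statistics can govern only when individual differences tend to zero — can be made concrete with a toy simulation. This is my own illustration, not anything from Wiener’s text: the "machine" below knows nothing about individuals and predicts everyone’s behavior using only the population average. The names and parameters (mean propensity, threshold) are invented for the sketch.

```python
import random

def govern(population_sigma, n=10_000, threshold=0.5, seed=0):
    """Toy model: a 'governing machine' predicts every individual's
    response using only the population average. Returns the fraction
    of individuals it predicts correctly."""
    rng = random.Random(seed)
    mean_propensity = 0.7  # population-level tendency to comply (assumed)
    correct = 0
    for _ in range(n):
        # each individual's true propensity deviates from the mean
        propensity = mean_propensity + rng.gauss(0, population_sigma)
        actual = propensity > threshold
        predicted = mean_propensity > threshold  # the machine knows only the mean
        correct += (actual == predicted)
    return correct / n

# As statistical differences between individuals shrink,
# a purely statistical machine approaches perfect prediction.
for sigma in (0.5, 0.2, 0.05, 0.0):
    print(f"sigma={sigma}: accuracy={govern(sigma):.3f}")
```

With large individual variation the mean-based prediction fails often; as the variation (sigma) tends to zero, accuracy tends to one — predictability through uniformity, exactly the condition Wiener named.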

And there is a very important moral aspect: Wiener raises the ethical side of the issue, asking, in effect: who controls the machine? The elites? Try to answer that last question yourself, and everything will become clear.
