
Machines of Loving Grace

Shantena Augusto Sabbadini


All Watched Over by Machines of Loving Grace


I like to think (and
the sooner the better!)
of a cybernetic meadow
where mammals and computers
live together in mutually
programming harmony
like pure water
touching clear sky.

I like to think
(right now, please!)
of a cybernetic forest
filled with pines and electronics
where deer stroll peacefully
past computers
as if they were flowers
with spinning blossoms.

I like to think
(it has to be!)
of a cybernetic ecology
where we are free of our labors
and joined back to nature,
returned to our mammal
brothers and sisters,
and all watched over
by machines of loving grace.


(Richard Brautigan)


In 2012 I gave a talk at a philosophy conference in Milan. My contribution focused on the notion of abstraction as central to our culture, our science and our relation with the world. It began thus:

I believe we need to revise some deep assumptions which are constitutive of our Western culture and through the process of globalization have become part of our world culture. We need to think in a new way, a new way that actually recovers (on a new level) a very old way…

The religion of our times, the dominant myth, the new cosmological narrative is the scientific world view. It is therefore crucially important to understand the basic premises of that view, its essential nature, which is simultaneously the source of its power and of its limitations. The root of both the power and the blindness of science is the process of abstraction which lies at the base of the scientific enterprise…


Eleven years later, I would like to pick up the notion of abstraction in the light of the rapid development of artificial intelligence (AI) we are witnessing, a development maybe humorously, maybe sarcastically, maybe prophetically illustrated by the Brautigan poem quoted above.


Abstraction is the basic ingredient of science, and it was formulated most cogently by Galileo Galilei: “The book of nature is written in mathematical characters.” That statement could be taken as marking the birth of modern science. The infinite complexity of the actual world is impossible to describe or control, and we humans, faced with our frailty and the overwhelming mystery of existence, are eager to control reality. Therefore we have created science to provide us with models that reduce that infinite complexity to measurable quantities and allow prediction and control. Science is a world we can dominate (or presume to be able to dominate) and the source of the modern miracle that is keeping us all spellbound: technology.

Riding on its power we have created the modern world, with all its wonders and its disasters. A world that is presently reaching a culmination in which the machine itself in its pure abstraction as software, as computer code, as a sequence of zeroes and ones, is preparing to enslave us, its creators.


A world dominated by machines: that is the perspective AI confronts us with. Their computing power is growing exponentially toward a singularity, a horizon where the machines’ processing skills so vastly surpass our own as to be, for all practical purposes, infinite.


This might look at first sight like an exaggeration. After all, the machine only performs the tasks we have instructed it to perform, doesn’t it? Not quite: it increasingly learns to rewrite its own code. We get a taste of this in the AI programs created to perform specific tasks, usually called narrow artificial intelligence, as opposed to AGI, artificial general intelligence, the ultimate dream of AI developers: an AI capable of solving any problem that might confront a human being. The playground for testing AI programs and comparing their performance with that of humans typically consists of board games like go, chess or shogi (Japanese chess). In this domain AI is already well beyond the playing skill of any human. And, unlike humans, it does not require a lot of data, such as chess manuals or records of past games: its sole input is the rules of the game, and the machine learns by playing against itself for just a few hours.
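To make “learning from the rules alone” concrete, here is a minimal sketch of self-play learning in Python. It is only an illustration under simplifying assumptions: the game is tic-tac-toe rather than go or chess, the learner is a simple lookup table updated from game outcomes rather than a deep neural network with tree search, and all the names in it are mine, not those of any actual system.

import random
from collections import defaultdict

EMPTY = " "
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),      # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),      # columns
         (0, 4, 8), (2, 4, 6)]                 # diagonals

def winner(board):
    """Return 'X' or 'O' if a line is complete, else None."""
    for a, b, c in LINES:
        if board[a] != EMPTY and board[a] == board[b] == board[c]:
            return board[a]
    return None

def legal_moves(board):
    return [i for i, cell in enumerate(board) if cell == EMPTY]

# Value table: (board, move) -> estimated value for the player to move.
values = defaultdict(float)

def choose(board, epsilon):
    """Epsilon-greedy choice: mostly the best-known move, sometimes explore."""
    moves = legal_moves(board)
    if random.random() < epsilon:
        return random.choice(moves)
    return max(moves, key=lambda m: values[(board, m)])

def self_play_episode(epsilon=0.1, alpha=0.5):
    """One game of the learner against itself; the final outcome is then
    propagated backward through both sides' moves (a Monte Carlo update)."""
    board, player, history = EMPTY * 9, "X", []
    while True:
        move = choose(board, epsilon)
        history.append((board, move))
        board = board[:move] + player + board[move + 1:]
        if winner(board) or not legal_moves(board):
            # +1 for the side that just completed a winning line, 0 for a
            # draw; the sign alternates backward through the game.
            reward = 1.0 if winner(board) else 0.0
            for state, m in reversed(history):
                values[(state, m)] += alpha * (reward - values[(state, m)])
                reward = -reward
            return
        player = "O" if player == "X" else "X"

# The rules encoded above are the program's sole input: no manuals,
# no historical games. Skill emerges from play alone.
for _ in range(20_000):
    self_play_episode()

After a few seconds of such self-play the value table already encodes a competent player; the systems that mastered go, chess and shogi follow the same logic at vastly larger scale.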


The defining characteristic of AI is precisely the ability to learn and improve, correcting mistakes and developing new strategies on its own. AI systems evolve.


The next major stage in this evolution, the one many AI developers are working towards, will be AGI, applying the superhuman processing capabilities of AI to any problem whatsoever.


The recently released AI program ChatGPT gives us a taste of what that may be like. ChatGPT can produce excellent-quality written or spoken text on any subject in a matter of seconds, can produce convincing arguments in support of any thesis, and can converse with a human in a way indistinguishable from a real person. This last challenge is the famous Turing test, proposed by Alan Turing in 1950: one might reasonably claim that programs like ChatGPT pass it.
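For readers unfamiliar with the setup, here is a toy rendering of Turing’s imitation game, again only a sketch: the respondent and judge functions below are placeholders of my own invention, standing in for a person at a keyboard, a ChatGPT-style program and a human interrogator.

import random

# Placeholder respondents: in a real test one would be a person at a
# keyboard, the other a conversational AI program.
def human_reply(prompt):
    return f"Hmm, about '{prompt}'... let me think."

def machine_reply(prompt):
    return f"Hmm, about '{prompt}'... let me think."

def judge(transcript_a, transcript_b):
    """A judge who cannot tell the transcripts apart can only guess."""
    return random.choice("AB")

def trial(questions):
    # Blindly assign the two respondents to the anonymous labels A and B.
    respondents = [("human", human_reply), ("machine", machine_reply)]
    random.shuffle(respondents)
    transcripts = [[reply(q) for q in questions] for _, reply in respondents]
    guess = judge(*transcripts)
    truth = "AB"[[name for name, _ in respondents].index("machine")]
    return guess == truth

questions = ["What is a poem?", "Tell me about your childhood."]
hits = sum(trial(questions) for _ in range(10_000))
print(f"Judge accuracy: {hits / 10_000:.1%} (near 50% means the machine passes)")

The point of the protocol is the blinding: the machine passes not by being clever in the abstract, but by leaving the judge’s guesses no better than chance.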


Such programs threaten to make the school essay obsolete. But more importantly, they may have a dramatic impact on politics. They are already capable of recreating the looks, the voice and the behavior of real people in a manner indistinguishable from the actual person. Fake news is becoming a major political industry and can wreak havoc on the exercise of democracy. The last US elections were a hint of what might confront us in the future.


But all this is still relatively minor compared with what we can expect the impact of AGI to be. When, in a perhaps not so distant future, AI skills are brought to bear on human problems in general, they will vastly surpass even the highest possible human intelligence. Then we will face a problem of a radically new order, one that nothing in our history and our evolution has prepared us for: we will have to share the planet with a species intellectually superior to us.


Most people are unaware of the possibility of such a problem. Some recognize it as possible, but believe it is still a matter of decades or centuries away. A few people, among them high-level executives of major AI companies, openly state that the problem is already here and believe we will start experiencing its consequences in the next few years.


The marvelous little poem quoted at the beginning of this essay, All Watched Over by Machines of Loving Grace, was written by the American poet Richard Brautigan in 1967. It has the appearance of a techno-utopia, but it can be taken either at face value or as an ironic and paradoxical provocation. Its charm consists precisely in the fact that it can be meaningfully read both ways.

The fantasy the poem outlines is that of a perfect AGI world, in which we are to the machines something like what our pets are to us today: lovely and clever in their own limited way, but definitely incapable of understanding a mathematical equation, appreciating the Goldberg Variations or planning a year ahead. In an AGI world a similar gap would exist between our intelligence and that of the machines.


Therefore a question the poem implicitly raises is: if we are to be like dogs to the AGI machines of the future, will they be kind masters to us? Will they be ‘machines of loving grace’? Or will they simply find us a nuisance and eliminate us?

An even more fundamental question concerns the ‘sentience’ or ‘consciousness’ (I will use these two terms interchangeably) of the machines. Will the superhuman machines be sentient? Will they have experiences? Will there be something it is like to be them?

Immediately we tend to answer those questions in the negative. We are ready to grant consciousness to animals, perhaps (for some of us) to plants, but certainly not to a table or a chair, nor by extension to an electronic circuit, however complex.


But on second thought that immediate rejection may not be so obvious. In fact, we generally assume that sentience does not reside in the hardware of our bodies, but is rather, in some sense that we are still far from understanding (this is David Chalmers’ ‘hard problem of consciousness’), a functional attribute, belonging to the software rather than to the hardware.

If that is the case, it should not make a difference whether the software of consciousness runs on silicon-based hardware rather than carbon-based hardware. In that perspective it seems quite reasonable to conceive the possibility that, under certain circumstances, even a human artifact may be conscious.


In a previous work[1] I have equated life with the emergence of quantum indeterminacy on a macroscopic scale. Such a definition is compatible with the notion of a machine being alive, provided the machine satisfies the requisite of emergent quantum indeterminacy. One could then assume that all life, however it comes to be, possesses some degree of sentience, machines included.


What would a machine’s sentience be like? Would a machine experience emotions? Would it dream? Would it have a sense of self? We are far from being able to answer such questions.

Let me only remark that, if my interpretation of life as the emergence of quantum indeterminacy is correct, the behavior of a living machine has an element of true and intrinsic unpredictability. The same unpredictability is there in us, and we call it ‘free will’. We do not know what the machine would call it, but in substance we would not be so different.


A question that has been prominent in the discussions on AI is the danger that a generation of superhuman machines may represent for our survival and our future as a species. A number of AI experts have spoken out on that, and they have recommended halting the development of more powerful AI systems until we have a more solid understanding of the risks involved.


Wise words that do not seem to be followed by wise actions: the AI development race is going on at full speed. The reason is obvious: if I slow down or stop, my competitors will take advantage of it and my business will suffer. The agreement not to divulge the code for powerful new developments has already been violated in many ways (ChatGPT is an example). While the efforts to implement ‘conscious AI’ are noble and ethically correct, we can hardly expect them to be successful.


The key issue is often spelled out like this: can we make sure that the superhuman machines we create will have our best interests at heart? We cannot, one reason being the competition factor mentioned above, another being the fact that AI machines will eventually program themselves, largely overwriting their initial programming. Any ethical code embodied in the original programming of the machine will evolve in unpredictable ways in the following generations.


Yet, while there is no guarantee that the machines of the future will behave ethically by human standards, there is also no reason to assume they will be especially nasty. We might take Brautigan’s poem as inspiration in that respect. And our acting ethically and consciously, right here and now, is our best chance of shaping our future as a species.


Therefore let us pause and look back to our past. It all started with “the book of nature is written in mathematical characters”: the power of abstract thinking has been growing ever since. But that is not all there is to the ‘book of nature’.


‘Abstraction’ comes from the Latin abs-trahere, taking out, pulling out. What we have been extracting from the context of life are our models, our mathematical symbols, our measurements, our techne. Practically useful, but… The point is that we have forgotten the abstracting, we have forgotten the foundation of our science. We have come to mistake our models for reality tout court. In Brautigan’s fantasy we have forgotten the deer and the pine trees and the mammal brothers and sisters. We have locked them inside our computers, we have reduced them to abstractions.


That is the root cause of our troubles. We need to go back to the whole before the abstraction, back to the rich tapestry of meaning of this universe we inhabit. The computers are OK when understood and used wisely. There is no way to eliminate them: let us infuse them with our higher self right now and wish for the best.

[1] Shantena Augusto Sabbadini, Pilgrimages to Emptiness, Pari Publishing, Pari, 2017.
