Chapter 16
Epilogue

There is wide consensus that trying to build an AI teaches us a great deal about human intelligence: an AI works as a model of the human mind. I think the same holds for suffering. To be sure, a model is not the same as the real thing; some things are always missing. You cannot actually drive to work with a computational model of a car; mathematical equations of physical forces and chemical reactions written on a piece of paper do not actually make your car accelerate. Yet it is such models that enable the construction of cars, and even of rockets that fly to the moon.

A good model can tell us a great deal about the real thing, and thus help science understand how a complex system works. A model can also enable us to predict what the system will do in the future, for example by providing a weather forecast. But from the viewpoint of this book, what really matters is whether the model is predictive in the following narrow sense: Does it enable us to predict what effects interventions will have on the system? That is, does it help us change the system in some way we find preferable?

This book proposes that computational models of human suffering can tell us what kinds of processes are necessary for suffering. The AI models in this book explicitly showed us some of the conditions, causes, and processes that have to be operating for suffering to arise. That means we can develop methods to reduce suffering: we simply need to remove the necessary conditions or, at least, weaken them. This is why I think the models in this book are useful, and the later chapters of the book were, in fact, all about methods to reduce suffering.

It is possible to argue that an AI or a robot cannot really suffer since it is not conscious. In other words, the computational processes considered in this book may not be sufficient for suffering if one insists that suffering must be conscious. However, that is beside the point if our main goal is to develop methods that reduce suffering. Indeed, some even claim that an AI is not really intelligent—according to certain stringent conditions for intelligence—yet AI not only performs some very useful practical tasks, but has also greatly advanced human neuroscience by giving insight into the computations performed by the brain.

The interventions I proposed largely coincide with those of existing philosophical systems, but I showed how to motivate them using current AI theories. The theory in this book will hopefully be complemented by further research; I think this is just the very beginning of a long-term scientific endeavour. I hope it will lead to ever more efficient interventions in the future, including completely new kinds of interventions.

I certainly do not claim that the theory in this book is either complete or perfect. In particular, there are quite probably mechanisms of suffering that do not fit into the framework of this book. That may be the case, for example, for suffering due to certain kinds of social emotions, or for existential suffering such as a perceived lack of meaning in life. The theory in this book also attempts to explain all kinds of suffering—including self-needs, uncertainty, uncontrollability, negative emotions (such as fear and disgust), and stress—by the single mechanism of frustration. Whether such a theory based on a single mechanism is satisfactory remains to be seen in future research. For example, some interpretations of Buddhist philosophy further maintain that desire and aversion are in themselves suffering, and it is not quite clear how that fits the framework of this book.1 As always in science, the theories could even be rejected, at least partly, as science progresses.

Summary: Limitations of the agent lead to errors and suffering

To recapitulate the book in a few paragraphs: we saw several ways in which the limitations of the agent and its intelligence lead to suffering. We can succinctly summarize the main problems as uncontrollability, uncertainty (or unpredictability, or impermanence), and unsatisfactoriness (including insatiability and evolutionary obsessions). The agent cannot control its environment as much as it would like; it is not able to perceive or predict the world with much certainty; and it strives endlessly toward questionable goals, which in humans are given by evolution. Such an agent can never find satisfaction.

We saw that the inability to control the environment is partly due to the limitations of the agent’s physical body and any other means it may use to manipulate the environment. Yet, to a large extent, it is also due to the agent’s limited computational capacities and limited data. If there has not been enough data to learn from, the agent cannot have a good model of the world; that is, it cannot understand the regularities of the world. Even with a huge amount of data to learn from, the agent cannot learn well if it lacks sufficient computational capacity. Furthermore, even with a good world model, the agent may not have enough computational capacity to use the model properly when choosing actions, especially when planning action sequences. Likewise, perception is limited by deficiencies in the sensory input and the ensuing inverse problem. These are some of the problems that an AI, a human, an animal, and in fact any sophisticated cognitive system will encounter.
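Why planning action sequences is so demanding can be seen from a simple count. The following sketch, with a purely hypothetical branching factor, shows how the number of action sequences a planner would have to evaluate explodes with the planning horizon:

```python
# Illustrative numbers only: assuming 10 possible actions per state,
# count the action sequences a brute-force planner would evaluate.
branching_factor = 10  # hypothetical number of actions per state

for horizon in (1, 2, 5, 10, 20):
    sequences = branching_factor ** horizon
    print(f"planning {horizon:2d} steps ahead: {sequences:.2e} sequences")
```

Already at twenty steps ahead, the count reaches 10^20 sequences, far beyond any brain or computer that tries to evaluate them one by one.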

The brain, having a particularly sophisticated cognitive architecture, uses some clever tricks to try to cope with some of these problems. Wandering thoughts speed up learning by running learning algorithms in the background; however, they make us experience simulated suffering in addition to real suffering. Emotional interrupts are useful when unexpected things happen and computational resources need to be redirected, but they can be mistuned and lead to unnecessary alerts and suffering. Highly intelligent agents may have to use parallel and distributed processing, where it is no longer clear whether anybody is in control; such uncontrollability of the mind itself is reflected precisely in the emotional interrupts and wandering thoughts.
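In AI, the closest counterpart of such background learning is experience replay. The following toy sketch (the names and structure are my own illustration, not a mechanism claimed for the brain) shows the basic idea and its downside:

```python
import random
from collections import deque

# "Wandering thoughts" as offline replay: stored experiences are
# re-run through the same learning update that real experience uses.
buffer = deque(maxlen=1000)  # remembered (state, action, reward) triples

def replay(learn, batch_size=32):
    """Apply the learning update to a random batch of old memories."""
    if not buffer:
        return
    batch = random.sample(list(buffer), min(batch_size, len(buffer)))
    for state, action, reward in batch:
        # A negative reward is re-processed exactly like a real one:
        # replay speeds up learning, but it re-runs the pain as well.
        learn(state, action, reward)
```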

It is due to these limitations—physical and cognitive—that the cognitive system will make errors in its predictions, its plans, and its actions. We saw that suffering is basically a function of the constant evaluation that an intelligent system performs on its actions, resulting in an error signal. Without such evaluation, performance cannot improve. In particular, error signalling is necessary for the system to learn and update its model of the world. Yet, the constant monitoring and signalling of errors creates constant suffering. In animals and humans, we also find processes related to self-preservation and self-evaluation, which create another layer of suffering. This is what leads to the simple maxim in the title: intelligence is painful.
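In its simplest computational form, this error signal is just the difference between what the agent predicted and what actually happened. A minimal sketch, assuming a simple delta rule on reward predictions (the details are illustrative, not a claim about the brain):

```python
# The agent's predicted reward for each state it has encountered.
values = {}

def update(state, reward, learning_rate=0.1):
    prediction = values.get(state, 0.0)
    error = reward - prediction                    # the error signal
    values[state] = prediction + learning_rate * error
    return error                                   # negative: "suffering"

# Without computing `error`, the update has nothing to apply:
# the predictions, i.e. the world model, could never improve.
```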

Does intelligence necessarily lead to suffering?

It could thus be argued that suffering is the price to pay for intelligence: without some kind of error monitoring, learning is not possible. It is common sense that errors due to past decisions have to be detected in order to make wiser decisions in the future. Error signals might not be needed if the agent were programmed to be sufficiently intelligent to begin with, so that it would not need any kind of learning, but current AI research suggests that intelligence without learning is very difficult to achieve.

Yet, one might ask if the price is too high: whether intelligence is worth the suffering.2 Would you prefer to be a bit dumber if that reduced your suffering? Suppose a drug were developed which abolishes all error-signalling in humans; perhaps that is possible by interfering with dopamine metabolism. Suppose that, as a logical side effect, it prevents you from learning new reward associations. Would it be worth taking? Actually, we don’t even need to consider such an extreme case where all error signals are removed. How about just taking a small dose of that drug, so that error-signalling is reduced to some extent? You would suffer less but perhaps learn new things a bit more slowly. What would be the right balance between maximizing performance and reducing suffering?
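The thought experiment can even be run as a toy simulation. In the sketch below, all numbers and the attenuation mechanism are my own assumptions: the "dose" simply scales down the error signal before it is felt and before it is used for learning.

```python
def learning_curve(attenuation, true_reward=1.0, steps=50, lr=0.2):
    """Learn to predict a reward while the error signal is dampened
    by a hypothetical 'drug dose' (attenuation in [0, 1])."""
    prediction, total_suffering = 0.0, 0.0
    for _ in range(steps):
        error = (true_reward - prediction) * attenuation  # dampened signal
        total_suffering += abs(error)                     # what is "felt"
        prediction += lr * error                          # what is learned
    return prediction, total_suffering

for dose in (1.0, 0.5, 0.1):  # 1.0 = no drug, 0.1 = heavy attenuation
    learned, suffered = learning_curve(attenuation=dose)
    print(f"attenuation {dose}: learned {learned:.2f}, suffered {suffered:.2f}")
```

In this toy run, heavier attenuation does reduce the suffering accumulated within the fixed time budget, but it leaves the learned prediction correspondingly farther from the truth: exactly the balance the question above asks about.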

I argued earlier that many desires are actually not good for us and should be seen as evolutionary obsessions. Some desires are insatiable, so trying to learn to satiate them is a fool’s errand. And, perhaps most importantly, a large proportion of desires are simply impossible to satisfy due to the uncontrollability of the world. Clearly, frustration in those cases should be avoided altogether; such cases present no real trade-off between suffering and intelligence. If you really want to be frustrated, better do it in cases where the desires actually serve a useful purpose and you learn to act more efficiently in a meaningful context.

On the other hand, even if we admit that a certain amount of suffering is a necessary trade-off for achieving intelligence, is it really necessary that such error-signalling be consciously experienced as suffering? Even the most rudimentary AI computes errors while hardly being conscious. We would not say that a thermostat, arguably the simplest possible system with some intelligence, is suffering or feels pain when it realizes the temperature of the room is not what it is supposed to be. This leads to another thought experiment: How about a drug that does not reduce error-signalling, but prevents it from reaching our conscious perception—would you not take it? In fact, this need not be just a thought experiment. Moving to the level of meta-awareness, as described in Chapter 15, seems to reduce the felt impact of all suffering, a bit like such a drug.
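The thermostat case is worth spelling out, since it is a complete error-computing "agent" in a few lines (a deliberately trivial sketch):

```python
def thermostat(setpoint, temperature):
    """Compute an error signal and act on it; nothing here could
    plausibly feel anything, yet the error drives the behaviour."""
    error = setpoint - temperature        # the error signal
    return "heat on" if error > 0 else "heat off"

print(thermostat(setpoint=21.0, temperature=18.5))  # -> heat on
```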

Consciousness is a great mystery. It cannot be entirely avoided in any discussion on intelligence or suffering, but unfortunately, there is very little we can say with any certainty. One thing which is clear, though, is that the way human consciousness usually operates is not very nice from the viewpoint of suffering. A large amount of suffering is even created out of nowhere by conscious simulation.

From intelligence to wisdom

Nevertheless, intelligence need not be only a bad thing from the viewpoint of suffering. Intelligence may lead to a reduction of suffering once it reaches a certain stage, provided it is embedded in a culture that actively questions where suffering comes from and what can be done about it. Buddhist philosophy, together with the Stoics and other related systems, proposes that we should adopt certain ways of thinking which counteract, and to some extent neutralize, the causes of suffering. For example, we should give up any attempts to control and accept that things are just happening; we should recognize that we actually don’t know much and are always making decisions under uncertainty; we should give up the meaningless and even destructive desires programmed by evolution. Ultimately, we should recognize the true nature of our consciousness: that we are operating in a kind of virtual reality, which bears only some indirect relation to actual reality.

Such proposals are quite radical, and have been recognized as such for centuries. This is not surprising since reducing desires and giving up control are strictly against our evolutionary programming. However, it may not be necessary to follow these ideas to any extreme extent: Buddhist philosophy in itself proposes the “middle way”, the idea being that going to any extremes is, in the end, counterproductive. Instead of giving up all control, for example, we might just give up some of the control, preferably on those things where claiming control is most clearly conducive to suffering.

Most philosophical systems that discourage acting out our desires do recognize that a human being needs to take some actions; they do not recommend complete inactivity, as some might assume. Both Stoicism and Taoism emphasize acting “naturally” (or according to one’s nature), which I would interpret in terms of habit-based, automated action selection: learned associations between the current state and actions may still remain even if no reward is expected or predicted anymore.3 In early Buddhist thought, motivation for mental development is often seen as a desire of a special kind that should not be eradicated, thus providing another motivation not based on reward maximization.4 Some parts of Hindu philosophy suggest performing one’s duty without any concern for reward,5 while later Buddhist philosophy6 emphasizes altruistic action as the ultimate motivation for fully enlightened beings.
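Computationally, such habit-based action is easy to picture: the policy is a cached lookup from states to actions, and reward predictions play no role at the moment of choice. A minimal sketch, with purely illustrative table contents of my own invention:

```python
# Habits as a learned state-to-action lookup table.
habits = {"hungry": "cook", "tired": "rest"}

def act(state, expected_reward=0.0):
    # The choice comes from the cached habit alone; the expected
    # reward is deliberately ignored, so action continues even
    # when no reward is predicted anymore.
    return habits.get(state, "do nothing")

print(act("hungry"))  # -> "cook", chosen without any reward prediction
```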

Indeed, this book has almost completely neglected the social aspects of being human—perhaps because AIs are not very social at the moment, and the relevant computational theory is scarce. The theory of this book is clearly applicable to the social domain in the sense that social interaction creates its own input data that the agent can learn from. The agent might then realize that other agents are often quite unpredictable and uncontrollable, and that this leads to a lot of frustration. Yet, social interaction creates completely new phenomena which are outside the theory of this book, but which should be considered to see the whole picture of human suffering.

It can be argued that social interaction is essential for understanding what it is like to be human.7 One aspect is that the human philosophical systems considered in this book are products of a long cultural evolution. It is difficult to see how any single AI could conclude, by itself, that desire produces suffering (or errors) and should be reduced. It is probably impossible even for any single human being to discover anything like those aforementioned philosophical systems. What is necessary is a cultural learning process based on sharing information between individuals, eventually leading to an accumulation of knowledge over many generations.8 Such a culturally produced, higher kind of intelligence, which can even take the very concepts of intelligence and suffering as the objects of its analysis, is close to what would better be called wisdom. It is something much deeper than intelligence, and presumably unique to humans.

From individual desires to altruism

Another essential aspect of social interaction is the human capacity for compassion, love, and similar social emotions. In classic Buddhist training, there is a group of practices based on the cultivation of positive social, interpersonal emotions such as compassion and “loving-kindness”.9 Interestingly, such emotions can even be directed towards oneself: self-compassion, for example, may strongly reduce negative self-evaluations, and thus self-related suffering.10 Another book could possibly be written where the reduction of suffering is approached from the viewpoint of such positive social emotions. Unfortunately, any related computational theory is rather lacking at the moment.

Historically, within Buddhism, a self-centered approach to reducing suffering was increasingly criticized in the centuries after the Buddha’s death. Consequently, the later Mahayana schools adopted unselfish behaviour as the ultimate ideal, instead of one’s individual nirvāṇa. They proposed that it is better to sacrifice one’s own bliss and meditation time, at least to some extent, in order to help others reduce their suffering. Slightly paradoxically, such a prosocial attitude is then seen as leading to an even higher form of happiness. I would assume that such enlightened altruistic action somehow avoids the frustration process, perhaps because there is no longer any consideration of rewards that the agent itself will get, so in a sense, the self-based desire is no longer operating. It also seems that altruistic action gives its own evolutionary rewards,11 and can even provide meaning to one’s existence.12 Thus, altruistic action, if performed with the proper attitude, may be the ultimate exercise to reduce suffering—even for the very person performing the action. To conclude, let me quote the Mahayana Buddhist philosopher Śāntideva, who recapitulates this brilliantly:13

All those who suffer in the world do so because of their desire for their own happiness.
All those happy in the world are so because of their desire for the happiness of others.