Chapter 8
Emotions and desires as interrupts

Part II of this book is about better understanding why there is suffering and what increases it. Since we saw earlier that suffering can fundamentally be seen as frustration, the question is what factors increase frustration. Part II analyses frustration in terms of uncontrollability and uncertainty (which is related to unpredictability). These properties make errors in action selection likely, and thus lead to frustration. Even the mind itself is seen as uncontrollable, since it has multiple processes operating at the same time, in particular emotions (Chapter 8) and wandering thoughts (Chapter 9). Further, perceptions are uncertain due to incomplete input data as well as a faulty prior model of the world (Chapter 10). The difficulties of communication between different brain areas or processors create a further loss of control (Chapter 11). Ultimately, we need to confront the problem of consciousness (Chapter 12), which creates a kind of virtual reality where painful events are simulated again and again. For a sneak preview of what the system will look like in the end, the reader can have a look at Figure 13.1.

*****

In this chapter, we start this investigation by looking at the concept of emotions. Anybody pressed to name sources of suffering would probably list phenomena such as fear, disgust, sadness, and perhaps anger. Those are actually some of the most typical emotions in the terminology of neuroscience and psychology. If we are to understand suffering, we have to understand how such emotions are related to it. In Chapter 6, we already saw how fear is an essential part of self-related suffering, arising from threats to the self. Yet, fear is more than that: When assailed by fear, you forget everything else you were doing, you focus your attention exclusively on whatever caused your fear, you try to figure out how to get rid of it, and, eventually, you run, fast. These are examples of the aspects of emotions we investigate in this chapter.

We discuss how emotions can be seen as information processing and signalling, focusing on fear as a prime example. The main focus here is how emotions capture attention and interrupt ongoing processing. Another aspect is that emotions trigger basic, pre-programmed behavioural sequences, such as running away. Importantly, emotions reduce whatever control we have over our minds and bodies, which is one leitmotiv in this part of the book. We also see that desires have similar interrupting qualities.

Computation is one aspect of emotions

Some readers may wonder what emotions have to do with artificial intelligence. Surely, we can program an AI or a robot to function using purely “rational” procedures: Maximize expected reward and act accordingly, within the limits of the information the AI has, and as far as it is computationally possible. Why would we need to introduce anything “emotional” in the system?

First, we need to understand what the word “emotion” means. Unfortunately, very different definitions are used, and there is no generally agreed definition even in the limited context of neuroscience and psychology.

Emotions have many components

The most comprehensive definitions define an emotion as a complex of several different components. For example, if you feel fear, you will have a particular facial expression, you may scream, and your body will undergo physiological responses such as increased heartbeat. Next, your cognitive (i.e. computational) apparatus will start planning how to escape from the situation, and indeed, pre-programmed behavioural routines such as fleeing may be activated. While all this is happening, you will also feel afraid, in the strict sense that you have the conscious experience of being afraid.

As with almost any phenomenon in neuroscience and psychology, some emphasize the behavioural aspect of emotions, while others concentrate on more internal phenomena, including information-processing—usually called cognition in this context. Emotions are further characterized by a feeling tone: often negative (as in fear) but sometimes positive (as in joy). The feeling tone, technically called “valence”, is seen as the core of emotions by some, providing motivation for action. Yet others think that what defines an emotion is the conscious, subjective experience, such as feeling afraid.

In this book, I take an approach where all the aforementioned components together constitute an emotion.1 Nevertheless, I focus on the computational, information-processing aspect of emotions, in line with the general approach of this book. Such information processing is often reflected in behaviour, and at least in humans, often leads to a subjective conscious experience, but I don’t explicitly consider behaviour or conscious experience in this chapter. The key question here is how information is processed in what we call emotions: what is special about that information-processing when we feel, for example, fear or disgust?

Emotions help when computation and information are limited

The starting point here is that emotions are needed because of the limited information available and the limited computational capacity. If an agent knew exactly everything that happens in the world and had unlimited computational power, perhaps it would not need emotions. A planning system would decide the best course of action—and it would really be the best course of action. However, in reality, things happen that we didn’t expect. This is because the agent does not know everything about the world (limited information), and the planning system cannot compute all the possible courses of action (limited computation). This is of course a narrative running through all AI and all neuroscience, but it is worth repeating.

This chapter will show various ways in which emotions help in information-processing under such limitations. One thing that these limitations imply is that some kind of monitoring for unexpected events is needed, as well as a system for changing plans when such events are detected. This is the role of interrupts, one of the themes of this chapter and one of the specific functions of emotions. Such interrupts often also trigger pre-programmed action sequences or plans that have been found useful by evolution, or by the programmer, which is another aspect of how emotions help in steering an agent’s behaviour.

Emotions interrupt ongoing processing

Suppose you (or a robot) are walking home on a street you know. While walking, you may be planning what you will be eating tonight (the robot might be just concentrating on the walking because that’s difficult enough for it). Now, a car suddenly appears and comes fast in your direction. To survive, you need to do two things. First, your perceptual system has to detect that something unexpected and potentially dangerous is happening. Second, the fact that something potentially dangerous is happening must be broadcast to the whole system; you have to stop thinking about what you will eat, and you have to stop following the route back home. Thus, you interrupt all ongoing activities, including your current train of thought. Instead, you have to use all your cognitive resources to figure out what to do, how to jump to safety and when.

The important new twist here is that once the sensory systems realize something suspicious is happening — even if they don’t exactly know what — they have to send a kind of alarm signal to other parts of the brain. In particular, the system responsible for executing action plans must be interrupted; in computer science, such a signal from one process to another is typically called an interrupt. These functionalities go well beyond the mere “cool” perception that a car is visible and coming in your direction.

A separate alerting mechanism with the capacity to stop ongoing activities and reorient computation is the core of the interrupt theory of emotions originally proposed by Herbert Simon in the 1960s.2 The key idea in this theory is that being an interrupt is what distinguishes an emotion from ordinary information-processing. The interrupt theory explains why emotions have particularly powerful attention-grabbing properties; that is the whole point of emotions according to this theory.3 Such an interrupt system is particularly important since earlier in Chapter 3 we argued, following the belief-desire-intention theory, that an agent needs to commit to a single plan instead of jumping from one plan to another. Commitment is useful, but it should not be blind: interrupting a plan must be possible.4
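To make the idea concrete, here is a minimal sketch of such an interrupt in an agent's control loop. It is written purely for illustration: all the names (Percept, monitor, replan_for) and the threshold value are invented for this example rather than taken from Simon's theory or any particular system.

```python
# A minimal sketch of an interrupt mechanism in an agent's control
# loop, in the spirit of Simon's interrupt theory. All names and the
# threshold are hypothetical illustrations.

from dataclasses import dataclass

@dataclass
class Percept:
    label: str
    threat_level: float  # crude output of a fast, always-on monitor

THREAT_THRESHOLD = 0.8  # illustrative sensitivity parameter

def monitor(percept: Percept) -> bool:
    """Cheap check that can fire before the percept is fully
    identified; it only needs to detect 'something suspicious'."""
    return percept.threat_level > THREAT_THRESHOLD

def replan_for(percept: Percept) -> list[str]:
    """Devote all resources to the interrupting stimulus."""
    return ["orient_to_" + percept.label, "escape"]

def agent_step(current_plan: list[str], percept: Percept) -> list[str]:
    if monitor(percept):
        # Interrupt: broadcast the alarm and abandon the committed plan.
        return replan_for(percept)
    # No interrupt: stay committed to the current plan (cf. Chapter 3).
    return current_plan

# Toy usage: an approaching car preempts the walk home.
plan = ["walk_home", "plan_dinner"]
print(agent_step(plan, Percept("car", 0.95)))  # ['orient_to_car', 'escape']
```

The essential design choice is that the monitor runs on every step and can override the plan unconditionally: commitment is the default, but it is never absolute.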

Pain, disgust, and fear

At the most elementary level of interrupts, we actually find simple physical pain. Although we don’t categorize it as an emotion, pain is clearly a signal or a process that has such an interrupting quality. It is broadcast to the whole information-processing system; all ongoing behaviours are typically suppressed, and the organism uses most of its resources to get rid of the cause of the pain. Pain is, in fact, the most fundamental, as well as the strongest kind of interrupt. It has to be, because it is the signal most relevant to the integrity, and even the very survival, of the organism. The alert is about a physical, chemical, or biological danger to the organism—tissue damage in the terminology of Chapter 2—that typically comes from outside.5 It requires urgent action, such as withdrawal away from the object that caused the pain. Reflexes like this are present even in very simple organisms, and should be programmed even in reasonably simple robots. You don’t want an expensive robot to break down the very first day because it doesn’t understand what kind of actions are dangerous to itself.

Disgust is conventionally classified as an emotion, although it is closely related to pain. Disgust is triggered by the perception of substances which are likely to be toxic or transmit diseases. Again, current processing is interrupted to direct attention to that substance and how to avoid it. Disgust is often a very primitive emotion: for example, disgust at the smell of rotten food is very close to physical pain. This is natural since disgust is about protecting the organism from something not very different from tissue damage. However, disgust also has more abstract forms, as in disgust at morally condemnable behaviours.6

More complex organisms are able to predict impending danger at a much greater distance and temporal delay, as discussed at the end of Chapter 6. While disgust, and even pain, already have such a predictive quality in primitive form, complex organisms can predict the risk of damage before the pain or disgust systems are activated. The signal related to such anticipated danger is fear, which interrupts ongoing activity and directs processing to avoidance of the dangerous object. This interrupting viewpoint on fear is different from our discussion of fear in Chapter 6, where we linked fear directly to suffering through the concept of self. Thus we see explicitly how emotions have many components or aspects, even regarding computation.

Desire as an emotion and interrupt

Interrupts can also be useful when there is no danger visible, but rather an opportunity to obtain some kind of reward. Casual observation tells us that something very similar to an interrupt happens when you see an object that you really like and want. You are assailed by an acute, “burning” form of desire. While in neuroscience and psychology desire is usually not considered an emotion, there has always been some doubt about whether such a distinction is justified. Acute, burning desire actually sits squarely in the domain of emotions as far as the interrupt theory is concerned.7

In Chapter 3, we first defined the desire system as something that suggests goals to the planning system. But we didn’t go into details on how the desire system actually works: How can it identify states which are easy to reach while having a high state-value? I think the whole point in the computations related to desire is that they happen as a dual process. When desires suggest goals for a planning system, they have to do it based on fast neural network computations in order to usefully complement planning. As we saw in Chapter 7, neural networks, such as those in AlphaGo, can be trained to output approximate solutions to the computations of state-values and similar quantities needed for planning. It is likely that the computations underlying desire are based on such neural networks, which suggest candidate states that are likely to be easily accessible while having a high state-value.
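As a purely illustrative sketch of this dual process, the following code uses a stand-in for a trained value network (in the spirit of AlphaGo's value network) to score candidate goal states; all names and numbers are invented for the example.

```python
# A sketch of the dual-process idea: a fast 'desire' network proposes
# goal states to a slower planner. The two scoring functions stand in
# for trained neural networks; everything here is illustrative.

def desire_suggestions(candidate_states, value_net, accessibility, k=3):
    """Rank states by approximate state-value discounted by how hard
    they are to reach, and return the top-k as candidate goals."""
    scored = [(value_net(s) * accessibility(s), s) for s in candidate_states]
    scored.sort(reverse=True)
    return [s for _, s in scored[:k]]

# Toy usage: 'chocolate' wins because it is both valued and nearby,
# while the more valuable 'holiday' is hard to reach right now.
value_net = {"chocolate": 0.9, "salad": 0.4, "holiday": 1.0}.get
accessibility = {"chocolate": 0.9, "salad": 0.8, "holiday": 0.1}.get
print(desire_suggestions(["chocolate", "salad", "holiday"],
                         value_net, accessibility))
# ['chocolate', 'salad', 'holiday']
```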

Elaborated-intrusion theory

A psychological theory that is very compatible with such goal-suggestions by neural networks, and combines them with an interrupting quality, is the elaborated intrusion theory of desires. As its name implies, it considers desire as a computational process that intrudes into your mind: it invades your information-processing system so that you lose control, at least initially. You become unable to think about anything else, and you keep planning courses of action regarding the object of your desire. Such ensuing compulsive planning is the elaboration part of desire.8

Everybody has experienced such intrusions. You see a sexually attractive person, and you cannot think about anything else for a while. Or, you see your very favourite brand of chocolate in a supermarket, and you can hardly resist taking it in your hand and putting it into your shopping basket. You may be devising all kinds of sophisticated plans to get the object of your desire, forgetting completely what you were actually supposed to be doing. Thus, at least in humans, the simple neural networks computing desire can be in conflict with deliberative planning processes. This emphasizes that desire can take control of the mind, inexorably turning our attention towards the object of the desire. With such really “hot” desire, which could be called “irrational” and strongly affective, there can be a conflict between “reason and passion”—which is perhaps a poetic expression for the dual-process character of the information-processing system.9

Valence

Such a dual-process approach above brings us close to another interesting concept: valence. In psychology, valence is a technical term describing the intrinsic positive-negative, pleasure-displeasure, or good-bad axis of states or objects. From the viewpoint of subjective human experience, valence means whether feelings are positive or negative: positive valence is associated with pleasure, negative valence with displeasure. Valence is closely related to liking: we could equate liking and valence, saying that we like things which have a positive valence and dislike things which have a negative valence. Alternatively, valence can be defined based on behaviour: humans as well as animals approach and try to obtain states which have positive valence, and avoid things and states with negative valence.10 Desire is thus usually directed towards states that have positive valence.11

In our framework, valence is closely related to the quick evaluation of any state or object by the neural network that computes approximations of state-values. (I shall not attempt to give an exact definition of valence or liking here since it is not particularly important in what follows.) When you see chocolate, its high valence is reflected in your neural networks predicting high state-value if you reach the state of eating it. Thus, valence computations are necessary for interrupts based on desire. In Chapter 13 we shall discuss how the sequence valence-desire-intention is important in Buddhist philosophy: just like in the present discussion, it is valence that leads to desire, and further to intentions and frustration. In that sense, the valence computations are the very root of suffering.
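Continuing the toy sketch above (the names and the threshold are again my illustrative assumptions, not an established model), valence can be read off the same kind of value network, and a sufficiently high valence estimate is what lets desire fire as an interrupt:

```python
# Continuation of the earlier toy sketch; names and the threshold are
# illustrative assumptions, not an established model.

def valence(state, value_net):
    # Quick, approximate evaluation on the good-bad axis.
    return value_net(state)

def desire_interrupt(state, value_net, threshold=0.8):
    # A high enough valence estimate interrupts ongoing processing.
    return valence(state, value_net) > threshold

value_net = {"chocolate": 0.9, "broccoli": 0.3}.get
print(desire_interrupt("chocolate", value_net))  # True
print(desire_interrupt("broccoli", value_net))   # False
```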

Emotions include hard-wired action sequences

One reason for having interrupts is that they often launch “hard-wired” programs, or sequences of actions for specific situations. Many emotions are characterized by their specific, relatively rigid programs.12 The action sequence is, in fact, the aspect that most visibly distinguishes which emotion is taking place. In the case of fear, the typical action is either to freeze or to flee. Disgust leads to immediate rejection and avoidance of the substance triggering the emotion. In animals, such programs are evolutionarily quite old: humans have largely the same action programs as dogs.13

The point is that some simple action sequences are particularly useful and universal, so it is a good idea to have them readily stored in the system so they can be executed quickly, without any need for elaboration. This is in stark contrast to the main processing that gets interrupted, which is often the result of long deliberation. In fact, plans may take quite a while to formulate, which makes planning less useful in an emergency situation. Furthermore, such emotion-specific action sequences may be very difficult to learn. For example, anything related to self-preservation is difficult to learn by reinforcement learning, since when the agent realizes that the current situation is lethal, it is too late. Therefore, it is important to have them readily programmed in the system—meaning genetically transmitted in humans.
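Purely as an illustration, such pre-programmed sequences could be stored as a simple lookup table that bypasses planning entirely; the entries below are invented examples, not claims about actual neural programs.

```python
# A sketch of 'hard-wired' action sequences keyed by the interrupt
# that fired. The table stands in for genetically transmitted (or
# programmer-supplied) programs; the entries are illustrative.

HARDWIRED_PROGRAMS = {
    "pain":    ["withdraw", "attend_to_injury"],
    "disgust": ["reject_substance", "move_away"],
    "fear":    ["freeze", "assess", "flee"],
    "anger":   ["signal_threat", "prepare_retaliation"],
}

def handle_interrupt(emotion: str) -> list[str]:
    """Execute a stored sequence immediately, with no planning step;
    unknown interrupts fall back to the (slow) planner."""
    return HARDWIRED_PROGRAMS.get(emotion, ["invoke_planner"])

print(handle_interrupt("fear"))  # ['freeze', 'assess', 'flee']
```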

Anger is another fundamental example of an emotion which clearly has its own hard-wired action sequence. It also has a particularly strong social quality: real anger in the sense of an interrupt is usually associated with other people. While you might say that you are angry about bad weather, that is not much more than ordinary frustration. We shall not consider anger in any detail here because such social aspects are completely beyond the scope of this book and would require more complicated theory, in particular game theory. Let me just mention the basic idea, which is that anger is a special hard-wired action sequence that protects the agent from attacks by creating a credible threat of robust retaliation that would inevitably be triggered in the case of an attack.14

It is now useful to contrast emotions to habits, in the wide sense used in Chapter 5. Habits are often triggered by some environmental stimuli—a bit like interrupts—and lead to a fixed kind of behaviour—a bit like the rigid action sequences we just mentioned. In these two ways, habits have some similarities to emotions. However, habits are not really interrupts. Perhaps when you walk on the street you have the habit of humming a tune to yourself. However, it rarely happens that you stop whatever you’re doing because you suddenly feel an irresistible urge to start humming. Habits don’t have the power to capture your attention and interrupt current plans.

How interrupts increase suffering

We have seen that a number of phenomena, which are often considered separate in psychology and neuroscience, share the important characteristic of being interrupts. Pain, emotions, and desire can all be seen in this computational framework.15 But many emotions include a lot of suffering. If such emotions are essentially just interrupts, why would there be so much suffering involved?

I propose there is a reason why interrupts create suffering directly, by themselves: the interrupting system uses the pain signalling system. In fact, most of the emotions discussed here are negative: they hurt, and this suggests they must use the pain system, like suffering (“mental pain”) in general.16 Making the body feel pain is an evolutionarily primitive way of grabbing the attention of the whole cognitive system, as was discussed in Chapter 2. Interrupts need, by their very definition, to achieve such an attention-grabbing effect, so using the pain system is even more natural than in the case of, say, frustration. In fact, the signal that triggers an interrupt can be interpreted as a special kind of error signal, and thus it fits in our general framework of suffering based on error signalling.17 (Positive emotions are a rather different story, and not considered here.18)

Another important problem with interrupts is that they reduce control, which is one of the main themes in the following chapters. A crucial part of the interrupt theory is the idea that the interrupts are automatic and largely irresistible. For example, many people would be so much happier if they could just consciously decide to switch off their fear system. But the point is that interrupts are outside of conscious control: They have to be so, because very often they need to interrupt conscious thinking and consciously controlled action. If you could somehow weaken interrupts so that they don’t disturb you, they would be useless: it would be like switching off a fire alarm system because it is too loud. In a scary situation, fear will appear together with its inherent suffering, no matter how much you try to control it. We saw already in Chapter 7 how the dual-system structure of the brain means that the fast, unconscious fear system usually prevails. (We will return to the question of conscious control, or lack thereof, in Chapters 9 and 11.) The same happens with desires: The fast computations of valence and values by neural networks will “intrude” and interrupt other processing, directing all the processing towards the object of the desire. Such interrupts are even more annoying if they interrupt activity that would have created pleasure, for example when you are in a “flow”, fully engaged in a rewarding and meaningful activity. In this sense, interrupts greatly increase suffering, by increasing desires, aversion, planning, and frustration.

Such reduction of control might not be a bad thing if the interrupts were somehow optimally tuned to reduce suffering. However, another problem with the interrupt system is that its design parameters are often questionable from the viewpoint of suffering. To begin with, the system that triggers interrupts does not care about our subjective suffering, only about our evolutionary fitness. Evolution makes us consider harmless things as dangerous, worth triggering an interrupt, if they are threats to our evolutionary success. Sexual jealousy, and the ensuing rage, is one example, where (from a male perspective) the evolutionary “danger” is that one might end up raising a child who is not one’s own and does not spread one’s genes. Yet, that is hardly a problem from a contemporary viewpoint: it is in fact very common in modern families.

What’s more, the system may not actually be very good at maximizing evolutionary fitness either. As we saw earlier, evolutionarily developed neural mechanisms may not be well adjusted to our current society, since they may come from the legendary “African savannah”. In the case of fear, for example, we tend to be afraid of snakes or spiders, but not so much of cars, although cars are much more dangerous at least in modern cities.

Another problem with the interrupt system is that if the interrupts are excessive and disrupt the normal functioning of the system too much, they may simply make things worse by making an effective response more difficult. Such problems are related to the fact that emotions and desires are short-sighted—as has been acknowledged by philosophers since antiquity—and may interrupt useful plans in a way that produces frustration because the interrupts fail to take into account the long-term utility of following the plan. For example, an important function of pain is to attract the attention of the agent to the source of the pain, but if the person can think of nothing but the pain, as often happens in the case of overwhelming fear or depression, they will not be able to find a solution to the situation. Or, if you are easily scared and are constantly interrupted by, say, harmless bugs, your performance in a meaningful pursuit may be hampered even though there was never any real danger to avoid.

Alarm systems cannot be universally optimal

These questions are related to the general theory of designing alarm systems, the subject of a mathematical framework called signal detection theory.19 It is based on maximizing the expected payoffs, where payoffs are similar to rewards, describing how good (positive) or bad (negative) the results of a given action are. For an alarm system such as interrupts, there are two possible actions: trigger an alarm, or do not. The theory is related to the AI theory outlined in previous chapters, but with a different emphasis. An important lesson of this theory is that there is no such thing as a universally optimal alarm system. That is because the payoffs are different for different people, and different in different contexts, and may change over time.

Consider designing an alarm system yourself, in the form of a burglar alarm. You might start by assigning a high payoff to detecting burglars—which sounds reasonable and innocuous. However, this means the system will not mind making false alarms, since you only give a strong payoff (reward) for detecting burglars, but you do not give any punishment for false alarms. To maximize reward, the system rationally decides to trigger an alarm if there is any hint of a burglar present. Eventually, your burglar alarm will constantly wake up everybody in the middle of the night. Realizing your mistake, you next give a really high reward for not giving false alarms. The result is that the system never gives an alarm because that’s the perfect way to avoid false alarms, which are now strongly punished. In this case, the alarm system ends up being completely useless since it does not do anything. It is very difficult to say what the right compromise is: the alarm system should be sensitive but not too sensitive, and the right parameters are quite subjective and depend on the context. Evolution has programmed certain sensitivity levels into our interrupt system, but in light of this signal detection theory, it is not actually clear how optimal they were even for all our ancestors on the African savannah, let alone for modern city-dwellers.20
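The following toy calculation makes the two failure modes concrete. All probabilities and payoffs are invented for the illustration; the only point is that the best operating point flips when the payoffs change.

```python
# A toy signal-detection calculation for the burglar-alarm example.
# Probabilities and payoffs are invented; the point is only that the
# optimal operating point depends entirely on the payoffs chosen.

def expected_payoff(hit_rate, false_alarm_rate, p_burglar,
                    payoff_hit, payoff_false_alarm,
                    payoff_miss=0.0, payoff_correct_reject=0.0):
    """Expected payoff per night for an alarm with the given hit and
    false-alarm rates (the two rates trade off via the threshold)."""
    return (p_burglar * (hit_rate * payoff_hit
                         + (1 - hit_rate) * payoff_miss)
            + (1 - p_burglar) * (false_alarm_rate * payoff_false_alarm
                                 + (1 - false_alarm_rate)
                                 * payoff_correct_reject))

# Three operating points: hair-trigger, conservative, and never-alarm.
settings = {"hair-trigger": (0.99, 0.50),
            "conservative": (0.50, 0.01),
            "never-alarm":  (0.00, 0.00)}

for name, (hit, fa) in settings.items():
    hits_rewarded = expected_payoff(hit, fa, 0.001, 100.0, 0.0)
    false_alarms_punished = expected_payoff(hit, fa, 0.001, 100.0, -10.0)
    print(f"{name:>12}: {hits_rewarded:7.3f} {false_alarms_punished:7.3f}")

# With hits rewarded and false alarms free, the hair trigger is best;
# once false alarms are punished, never alarming wins: exactly the two
# failure modes described in the text.
```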

Emotions are boundedly rational

Often emotions are contrasted with rationality and “cool-headed” decision-making; it is typically assumed that the best decisions are made when emotions are not at play. However, the viewpoints on emotions explained in this chapter show that emotions contribute to optimal decision-making and action selection. Emotions are useful from a rational viewpoint as soon as there are certain information-processing constraints; for example, if the planning system does not have time to consider all possible paths in the search tree. This is certainly true in any sufficiently complex AI system or animal. The viewpoint which considers emotions as necessarily irrational is in fact largely rejected in modern research.21

I have casually used the word “rational” here as well as in earlier chapters, but we need to think a bit more about what it actually means. Often, a decision is called rational if it is optimal in maximizing reward or a similar quantity given the information available to the agent. In other words, the decision of the agent (such as choosing an action) is the same as that made by an ideal, hypothetical agent with perfect information-processing capacities and the same information about the world as the agent in question. So, even a perfectly rational agent is not expected, in this definition, to make the very best possible decision, but the best possible given the limited information it has at its disposal. However, in reality, the information-processing power of the agent is limited as well, as we have indeed seen in many chapters of this book.22 The case where information-processing power is limited as well leads to the concept of bounded rationality, also called computational rationality. It refers to decisions which are optimal given limitations in both the information and the computation available to the agent.23
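As a rough formalization of this distinction (my own gloss, not notation from this book): a perfectly rational choice maximizes expected reward given only the agent's information, while a boundedly rational choice also restricts the decision procedure to those that fit a computational budget.

```latex
% Rational choice: best action given the information I available,
% assuming unlimited computation:
\[
  a^{*} \;=\; \arg\max_{a}\; \mathbb{E}\!\left[ R \mid a,\, I \right].
\]
% Bounded (computational) rationality: optimize only over decision
% procedures \pi whose computational cost fits a budget C:
\[
  \pi^{*} \;=\; \arg\max_{\pi \,:\, \mathrm{cost}(\pi) \,\le\, C}\;
  \mathbb{E}\!\left[ R \mid \pi(I),\, I \right],
  \qquad a^{\dagger} \;=\; \pi^{*}(I).
\]
```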

Emotions, seen as interrupts or as automated action sequences, can largely be considered to strive towards bounded rationality. In both cases, emotions are information-processing routines or shortcuts which help in achieving as good outcomes as possible, given the computational restrictions and the limited information available. It is in this precise sense that we can say that emotions help in rational decision-making, and it is not justified to oppose rationality and emotions.24

Yet, emotions also have qualities that are in contrast to our everyday notion of rationality. In particular, they are not under conscious control. In this sense, they are similar to the neural networks in dual-process theories, and indeed we saw that connection above in the case of desire. The question of control is actually crucial from the viewpoint of suffering, as we will see many times in the following chapters.