Chapter 12
Consciousness as the ultimate illusion

Why do we have conscious experiences? This is one of the deepest unanswered questions in modern science. It is not even quite clear what the whole question means and how it could be formulated in a rigorous, scientific manner. One thing that is clear, however, is that consciousness is somehow related to suffering. Some would even claim that in a strict sense, there can be no suffering without consciousness.

In this chapter, I try to shed some light on the nature and possible functions of consciousness. I consider two different aspects of consciousness: it can be seen as performing some particular forms of information-processing, or it can be seen as a subjective experience. I provide a critical review of the main theories concerning these two aspects. In particular, I explain how consciousness is related to mental simulation and the self, which, as we have seen, play an important role in suffering. This leads to some old, but still radical, philosophical ideas about the nature of our knowledge of the world and how it relates to our consciousness. Ultimately, I argue that changing your attitude to consciousness may actually have a strong influence on your suffering, a theme that will be further elaborated in later chapters.

Information processing vs. subjective experience

The main problem we immediately encounter in research on consciousness is the difficulty in defining the terms involved. “Consciousness” has different meanings to different people and in different contexts.1 For our purposes, we can divide the concept of consciousness into two aspects. First, there is the information processing performed by human consciousness. This is something we might understand based on AI, since information processing can usually be programmed in computers. One approach is to ask what the computational function, or utility, of consciousness might be in humans; these are relatively well-defined scientific concepts and questions. This approach is fine as long as we are content to consider consciousness as just another form of information-processing, or computation. The second, more difficult aspect of consciousness is the experience: the conscious “feeling” which is specific to myself, i.e., subjective. Its existence is so obvious that most people simply take it for granted.

When you look at the text in this book, several quite amazing things are happening; they can be roughly divided into information processing and experience. Those related to information-processing have been discussed earlier in this book. Light enters your eye, generates electrical signals on the retina, the signals travel into your brain, and some incredibly intricate information processing takes place, allowing you to recognize the letters and even transform the letters into words. However, all that is simply information processing, and it may soon be programmed in a computer; in some more rudimentary form, it is possible even now.

But in addition to such information-processing, there is something else: you have a conscious, subjective experience of the book, the letters, and the words. Somehow, almost magically, the book appears in some kind of a virtual reality created by your brain. We tend to think that this is normal since the book is there, and we simply “see the book”. But in fact, the conscious experience is not somehow in the book, and it does not somehow automatically come out of the book. The experience, the awareness of the book, is created by some further mechanisms which we simply don’t understand yet. This experiential aspect is called “phenomenal” consciousness. Philosophers use the word “qualia” in this context: the conscious “quality” of the book being seen, “what it is like” when the book is consciously experienced. Or, as more poetic narratives would have it, it is the “redness of a rose”. It is not information processing but something more mysterious.

It is this phenomenon of subjective experience, or qualia, which is the main topic of this chapter. It is also the main meaning in which I use the word “consciousness” in this chapter; “awareness” is used in exactly the same meaning, and so is “conscious experience”.

The computational function of human consciousness

Now, what is the connection between these two phenomena: information-processing and consciousness? Conscious experience is certainly not just one form of information-processing, but the connection is extremely difficult to understand. In fact, consciousness must have some connection with information-processing: the qualia of the rose must be based on processing of incoming sensory input, even if most of that sensory processing seems to be unconscious. Let us assume, in the following, that part of the information-processing in the human brain is conscious, in some sense to be elucidated. This is such a typical assumption that it is often not even made explicit.

Let us then try to understand what we can say about the function, or utility, of such conscious information-processing. Taking a more neuroscientific approach to the question, one can first ask: What are the evolutionary and computational reasons why certain animals, such as humans, have consciousness? We assume here that consciousness is a faculty that is a product of evolution—but strongly influenced by culture, of course. It is quite difficult here to ignore the experiential part of consciousness and consider information-processing only. If we say that an animal, or an AI, is conscious, it seems to almost necessarily mean a conscious experience: we wouldn’t even know what it means to say that an animal is conscious if it does not have conscious experience. So, in a sense, the question is necessarily about the computational function of human conscious experience, and whether it can be explained by evolutionary arguments.2 I will next review a number of proposals.

Investigating wandering thoughts actually leads us close to consciousness, because “thinking” is often considered the hallmark of consciousness. More precisely, the fact that we can reconstruct a vivid image in our minds of past or future events, while ignoring the present sensory input, is a remarkable property that seems to be closely related to consciousness. Some investigators actually propose that one of the main functions of consciousness is such simulation, which is also called virtual reality. That is, consciousness allows us to consider different scenarios of what might happen in the future, and what would be the right things to do in those circumstances. Planning crucially needs a capacity for simulating the results of future actions, and indeed, in the case of wandering thoughts, we already talked about simulation. Such simulation would obviously be useful for survival and reproduction, and thus favoured by evolution. A special case of such simulation is dreaming, which creates a virtual reality that is particularly far removed from current reality. Dreaming often includes simulation of threatening situations, i.e. situations in which it is important to know what to do to avoid harmful consequences.3
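To make this idea more concrete, here is a minimal sketch, in Python, of planning by simulation. It is not a description of how the brain, or any particular AI system, actually does this; the world model, the actions, and the goal are all hypothetical toys, chosen only to show how simulating possible futures can guide the choice of the next action.

```python
import random

# A toy, hypothetical world model: the state is a single number,
# actions shift it, and reward is higher the closer we get to a goal.
def world_model(state, action):
    next_state = state + action
    reward = -abs(next_state - 10)
    return next_state, reward

def plan_by_simulation(state, actions=(-1, 0, 1), horizon=5, n_rollouts=200):
    """Simulate random action sequences (imagined futures) with the
    world model, and return the first action of the best sequence."""
    best_return, best_first_action = float("-inf"), None
    for _ in range(n_rollouts):
        sequence = [random.choice(actions) for _ in range(horizon)]
        s, total = state, 0.0
        for a in sequence:
            s, r = world_model(s, a)
            total += r
        if total > best_return:
            best_return, best_first_action = total, sequence[0]
    return best_first_action

# The agent "imagines" many futures and picks the action that starts
# the best one; nothing in this computation suggests any experience.
print(plan_by_simulation(state=0))  # typically prints 1 (move toward the goal)
```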

However, I think we should not too easily conflate consciousness with thinking or simulation. What we see is a correlation between a certain brain function (namely, simulation) and consciousness, but it is difficult to say whether consciousness is really essential for such a function. As we saw above, modern AI uses planning, and even systems similar to wandering thoughts, simulating events from the past and episodes that might happen in the future. Yet, nobody seems to claim that replay or planning would make a computer conscious. Such a claim seems absurd to most experts because saying that a computer is conscious is usually interpreted as meaning that it has phenomenal conscious experiences, and those are very unlikely to be produced by such simple computations as replay and planning.

Another possible function of consciousness is choosing actions. We typically have the feeling that we consciously decide what we are going to do, an experience of free will. You may think that you decided to read this book; perhaps you decided to read this particular sentence. But did you actually decide how you move your eyes from one word to another? What do we actually decide on a conscious level? As we saw in Chapter 11, consciousness may not have any role in the control of actions; the feeling of free will and control may be deceiving. It may very well be that actions are entirely decided by unconscious processes. After all, that seems to be the case with many animals (if we assume most of them don’t have consciousness), as well as with any robots and AI that exist at the moment.4

Yet another proposal is that consciousness could be useful for social interaction and communication.5 The contents of consciousness can usually be communicated; in fact, in psychological experiments, one operational definition of conscious perception is that you can report the percept verbally to the experimenter. The utility of conscious perception, in particular, would be that it can be transformed into a verbal form and communicated to others. Again, the problem is that it is perfectly possible to build AI and robots which communicate with each other without anything we would call consciousness, at least in the experiential sense.

A particularly relevant proposal for this book is that consciousness facilitates communication between different brain areas.6 While unconscious processing has a huge capacity for information-processing, it suffers from the problem that the processing is divided among different brain areas whose capacity for communication is limited, as is typical of parallel distributed processing. The idea here is that consciousness is the opposite: it has very limited processing capacity, but its contents are broadcast all over the brain. Consciousness can thus be considered a “global workspace”. It could be compared to a notice board where you can put short notes (limited capacity), which will be seen by everybody in the office (global broadcasting). It is also a bit like the central executive in the society of mind discussed in Chapter 11: one which is not particularly smart but whose thundering voice is easily heard even at a great distance. This links clearly to the proposal in previous chapters, where we considered a system in which pain, or errors such as reward loss, are broadcast to the whole system. An intriguing possibility is that this could be why pain, whether physical or mental, must be conscious. Perhaps pain is so acutely conscious precisely because the broadcasting system it uses is inherently related to the global workspace of consciousness. Yet, it is not clear to me why this would require conscious experience, since distributed information processing is increasingly performed in computers as well.
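As an illustration, here is a minimal sketch of the notice-board idea, assuming a hypothetical set of specialized processors that compete for a workspace with a capacity of just one message; the winning message is then broadcast to all processors. The module names and the salience numbers are made up for the example.

```python
# A minimal sketch of the global-workspace idea: many specialized
# processors, a one-item workspace, and global broadcasting.

class Processor:
    def __init__(self, name):
        self.name = name
        self.inbox = []            # everything broadcast to this processor

    def bid(self, signals):
        # Each processor bids for workspace access with the salience
        # of its own signal (hypothetical numbers in this toy example).
        salience = signals.get(self.name, 0.0)
        return salience, f"{self.name} reports {salience}"

    def receive(self, message):
        self.inbox.append(message)

def workspace_cycle(processors, signals):
    # Limited capacity: only the most salient message enters the
    # workspace. Global broadcasting: every processor receives it.
    salience, winner = max(p.bid(signals) for p in processors)
    for p in processors:
        p.receive(winner)
    return winner

modules = [Processor("vision"), Processor("pain"), Processor("memory")]
print(workspace_cycle(modules, {"vision": 0.2, "pain": 0.9, "memory": 0.1}))
# "pain reports 0.9": the pain signal wins the competition and its
# message becomes available to every module, as in the broadcasting
# of pain discussed in previous chapters.
```

Note that nothing in this sketch produces experience; it only reproduces the information-processing structure of the theory, which is exactly the difficulty raised in the argument that follows.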

I have just described several proposals, each of which considers highly relevant information-processing principles. For example, inside an AI, the communication between different processors or processes needs to be solved, and mechanisms related to the global workspace theory can be very useful. Yet, in each of those cases, we have to ask whether we would say that an AI with such capacities is conscious. Would it necessarily have subjective experience, if that is what we mean by “conscious”? We could go through all the computational functions of the preceding chapters and ask whether consciousness necessarily has any role in any of them. To the extent that all of these are simply computations that can be implemented in an AI, they may actually not need consciousness. (I’m here assuming that no computer we have at the moment is phenomenally conscious, which is relatively uncontroversial.) Therefore, I think there is currently little reason to believe that there are particular kinds of computation which would be impossible without consciousness.

Consciousness as a specific hardware implementation?

However, another viewpoint is possible: there may be some forms of information processing which are correlated with consciousness. It could be that some of the computational routines in the brain are always implemented using special circuits or processes that give rise to consciousness. Such computations would then always give rise to consciousness, even if, in theory, it would be possible to implement them in non-conscious circuits. If we program the same kind of computation in an AI, we might then say that we have programmed the AI to perform “conscious” information-processing. However, it may be best to use scare quotes here: the AI may be imitating processing that is conscious in the brain, but it might not have conscious experience, so whether we should call such computations “conscious” is questionable.7

Therefore, any argument, such as the one I have just made, saying that a computational function cannot be the actual function of consciousness because it can easily be programmed in an AI, may be missing the point. While it may not be strictly necessary to have consciousness for, say, simulation, it could still be that in biological organisms, shaped by evolution, consciousness is somehow an important part of the computational implementation of simulation, or of any of the other functions above. The fact that something is easy to program in a computer, which is based on a completely different kind of hardware, does not mean that it might not be very difficult to implement in the brain without the help of some hitherto unexplained conscious mechanisms. Thus, consciousness might be a particular “hardware” implementation of certain computations which are otherwise difficult to perform in the brain, whether simulation, the global workspace, or something else.

Yet, this is all mere speculation. We cannot exclude the diametrically opposed possibility: that consciousness is not actually part of the information processing at all. Perhaps it does not affect the computations, or the contents of the mind, in any way; it simply reflects the results of the information processing. It might not even have any evolutionary utility.8 Many thinkers over the centuries have proposed that consciousness in humans is only the tip of the iceberg, and most mental activities, which I call information processing, happen without consciousness.9 But here, we find an even more startling possibility: perhaps consciousness is not even the tip of the iceberg but, to push the metaphor further, a bird that flies over the iceberg, only watching it from a distance. We will see even more startling possibilities later in this chapter.

The origin of conscious experience

Next, let us consider the problem of the existence of conscious experience itself. Most scientists would agree on the fundamental importance of understanding the physical, chemical, and biological processes that enable conscious experience. While it is one of the deepest questions in science, I’m afraid there is little we can say about it with any certainty. It is not even clear if the question can be approached scientifically at all, because it is difficult to perform rigorous experiments on experience, given its subjective nature. Only I observe my conscious experience; you, or any neuroscientist, cannot really know what I experience. So, how could a neuroscientist conduct experiments on people’s experiences? Measuring brain activity or observing people’s behaviour is not measuring experience: brain activity and behaviour are related to, and correlated with, experience, but they are not the same thing. The closest you can get is asking people what they experience. However, they might not be able to express it verbally with sufficient accuracy or detail. In fact, when participants in experiments answer such questions, they are ultimately engaged in behaviour (in the form of speaking), and, in a sense, the neuroscientist is actually only measuring their behaviour (speech, in this case).

With good reason, the problem of understanding how and why the brain creates conscious experience (including whether it is actually the brain that does it) is called the hard problem of consciousness research.10 However, let us not despair: even if a solution may not be available, some interesting things can be said about the problem.

To begin with, some neuroscientists think there is something special in humans, and perhaps in some other mammals such as great apes, that enables consciousness. What it would be, nobody really knows. The main theories are based on observing what kind of structures human brains have that simpler animals like cats and dogs do not have. Because the brains of cats and dogs are in many ways very similar to the human brain, the relatively small differences might be related to consciousness.

One difference between the brains of humans and “lower” animals seems to be the existence of a special class of neurons, called von Economo neurons. They have an exceptionally large number of long-range connections to other neurons. Since long-range connections might be related to something like a global workspace, von Economo neurons have often been considered a potential candidate for a mechanism generating consciousness.11 In fact, it has also been suggested that the brain basis of consciousness might be related to feedback between brain areas, as opposed to any special kind of processing inside each single area.12 It could be that the long connections of von Economo neurons make such feedback stronger, sufficiently complex, or otherwise more conducive to consciousness. Interestingly, apes have von Economo neurons as well, and so do elephants, dolphins, and even some monkeys, so by this criterion, those animals at least should be conscious.13

Alternatively, we can use AI for studying the hard problem of consciousness. We can perform thought experiments, based on the same kind of comparison as was just done with other animals. Assuming that AI is not conscious, part of the hard problem of consciousness is then to explain what creates this fundamental difference between humans and AI.

How can we know something is conscious?

However, there is a problem with the argumentation above: it is based on finding animal species, such as cats and dogs, which are reasonably intelligent but have no conscious experience. Or, if we consider AI, it assumes that AI is not conscious. But how can we even know whether an animal species or an AI is conscious or not?

Some would claim that we cannot even know if other people are conscious. We do tend to assume that every human we meet is conscious, but this is just a guess, really, without much logical basis. We are actually generalizing based on ourselves: the only human I know for sure to be conscious is myself. Others just move around and say things, but they could be some kind of robots for all I know; perhaps I am the only person conscious in the world. If I assume all other humans are conscious as well, I can only hope I’m not overgeneralizing! This is, somewhat cheekily, called the zombie problem: it could very well be that some of the people you meet are “zombies”, that is, creatures that look like humans and behave like humans, but do not possess any kind of consciousness.

Leaving such wild speculation aside, we do have a real scientific problem here. In neuroscience, it has proven extremely difficult to determine which animal species are conscious and which are not.14 Even considering humans, it is not easy to tell whether people in a coma are conscious. For coma patients who are incapable of speech or of any motor responses whatsoever, measuring brain activity provides the last resort for assessing their consciousness. Surprisingly, it has been found that patients who were thought to be in a completely unconscious, vegetative state are sometimes perfectly conscious, in the sense of being able to respond to questions as healthy humans would.15 Such people can sometimes learn to communicate with the external world through special devices which transform brain activity into text. So, it is actually true that we cannot always tell whether even other humans are conscious!

Likewise, how could we judge whether an AI is conscious or not? What if current AI is conscious, or will become conscious in the near future? You can find people arguing strongly for the possibility of conscious AI. Some say it is simply a question of complexity: when AI becomes complex enough, it will become conscious; the only reason why present computers are not conscious is that they are too simple in terms of their computation, in particular lacking sufficient interaction and information interchange between different processing units. Others think that an AI must have a body, i.e. it must be a robot, in order to be conscious, and that consciousness is somehow created in the interaction with the world.16

Fundamentally, the question of determining consciousness seems to be unsolvable because of its subjective nature: I can only know something about my own consciousness. We cannot know for sure if any animal or AI is conscious or not. Consciousness—at least regarding its experiential quality—remains a huge mystery.17

Why is simulated suffering conscious?

Let’s get back to the question of suffering. Consciousness is in some sense crucial to suffering: if we were not conscious of our suffering, if we didn’t have the conscious experience of suffering, it would not be the same kind of suffering at all. Suppose you have a headache but you start watching a really fascinating movie; you may cease to notice the pain at all. Somewhere in your brain there is probably some kind of activity which would usually lead to the experience of pain, but your attention is on the movie, so you completely ignore the pain. That is because when you are not paying attention to something, you cannot be conscious of it either.18 So, in some sense, the whole problem of suffering revolves around the question of consciousness. If we consider a simple animal or an AI and agree that it is not conscious, is it actually meaningful to say it suffers, as I may have done in this book?19

The other day I was watching a fictional TV series in which a tiger attacked a woman. I felt scared. Was there any point in being scared? I was in my own home, just watching an electronic device produce some patterns of light on its screen. There was no real tiger nearby, no real risk of being eaten. Even if I had been in the middle of the action, it would have been on a film set. The tiger was tame; or perhaps it was just a computer animation, and there was no real tiger at all. In any case, even if I had been at the studio instead of at home, I would not have been in any kind of physical danger. What is even more interesting is that after I had watched that on TV, my brain started replaying the events. Several times during that evening, I saw the tiger in my wandering thoughts. Every time, some element of fear crept into my mind. I thought: How stupid can my brain be? Why do I feel fear, although there is no real tiger nearby, there was never any real danger of anybody being eaten by a tiger, and finally, I haven’t even seen the image of a tiger for hours; it is just repeating in my head.

This is yet another amazing thing about conscious simulation: it reproduces the same valences, that is, the positive and negative feeling tones, and the same experience of suffering, as the real thing. When I think about something unpleasant, it hurts. Maybe not quite as much as the real thing, but still it hurts. The theories presented in previous chapters actually explain, to some extent, why the brain does that. It is not stupid to replay experiences. Replay and wandering thoughts are important for learning a good model of what the world is like and what kind of actions are useful in which situations, as we saw in Chapter 9.

Yet, my current accusation of my brain being stupid is on a different level than the theories of the previous chapters. Here, I’m talking about consciousness. Why am I consciously afraid of the tiger, and consciously suffering during the replay? Why do I need to experience suffering while the brain is performing such simple computations that we can easily program in a computer? To put it more precisely, why do I need to experience a negative valence on a conscious level while doing the replay? Couldn’t the brain just do the replay quietly on an unconscious level without disturbing my conscious feelings and conscious thinking? So, I’m not just repeating the question of Chapter 9, which was: why do wandering thoughts trigger feelings of pain and pleasure? Here, I ask a more general question about consciousness and suffering: why are such simulations, and the ensuing suffering, conscious?

Again, we might assume that perhaps evolution just made a simple computational shortcut: if something dangerous is perceived in the outer world, the fear system has to be activated on every level, unconscious as well as conscious. In particular, one possibility is that conscious fear is important in the information-processing because of its capability for broadcasting, as in the global workspace hypothesis. Therefore, conscious fear has to be activated for the computations to be carried out properly.

There is another possibility. Above, we saw that consciousness might not be necessary for any computations, which would invalidate the argument just given. Now, assuming conscious fear has no computational utility, it could still be the case that it would be too much trouble to somehow switch consciousness off when doing replay. It would be nice indeed if, when something dangerous comes up in a wandering thought, the fear system were activated only partly, not on the conscious level, perhaps only in some distant corner of the unconscious processing systems. This would be nice, but would evolution have any reason to do us such a favour? We should recall again that evolution does not care at all about whether we feel good or bad. It tries to optimize computation in order to maximize the spreading of the genes, and this has to be done with limited computational resources. Allowing us to switch off conscious suffering when engaging in replay would presumably be pointless from the viewpoint of optimizing computation. So, evolution just makes us suffer from replay, since that is the optimal use of finite computational resources. Such optimization of computation may actually increase our chances of survival a bit, and give us a longer life. Full of suffering, though.

Self vs. consciousness

So far, I have been considering consciousness on the sensory level, as in “consciousness of the text you are seeing”. Another, very different thing that we can be conscious of is our own self. It can even be argued that if there is any consciousness at all, there must necessarily be self-consciousness, or self-awareness, that is, conscious experience related to oneself. It can be seen as a particularly automatic and primitive form of consciousness. So, we find yet another meaning for the term “self”, in addition to those in Chapters 6 and 11: it is defined as precisely this self-awareness. This corresponds very well with our intuition, where it is my conscious feeling of being “me” that defines what “I” am, or what my “self” is.20

This aspect of self-consciousness is very different from the way “self” was treated in previous chapters. The aspects of self treated in earlier chapters do not necessarily have anything to do with consciousness: all the operations described earlier are just computations. In particular, an AI does not need to be conscious to infer that it can control certain things and not others, or to develop behavioural mechanisms that ensure its survival; even a simple AI system can and should have methods for evaluating the performance of “itself”.

If self is defined in this sense of self-awareness, it might in fact be difficult to defend any form of “no-self” philosophy, of which we saw one version in Chapter 11. Descartes was famously certain that he could say “I am” because he “thinks”:21

[A]fter having reflected well and carefully examined all things, we must come to the definite conclusion that this proposition: I am, I exist, is necessarily true each time that I pronounce it, or that I mentally conceive it.

Yet, Descartes was quite wary of saying what he actually is:

I must be careful to see that I do not imprudently take some other object in place of myself, and thus that I do not go astray in respect of this knowledge that I hold to be the most certain and most evident of all that I have formerly learned.

The complexities of no-self philosophy largely come from the tension between these two viewpoints: it is intuitively clear that I am, but it is not clear what I am. (There can hardly be any difference between the “I” and the “self”; they are just two words for the same thing.22)

On the other hand, some would say that such self-awareness can be seen as a mental construction, even an illusion, just like control and free will. Our self-awareness could be based on a collection of the awarenesses of various sensory perceptions, with no special core that could be called “me”, or awareness of myself. Hume expresses this potently in a famous quote which is not unlike something a Buddhist philosopher might have said:23

For my part, when I enter most intimately into what I call myself, I always stumble on some particular perception or other, of heat or cold, light or shade, love or hatred, pain or pleasure. I never can catch myself at any time without a perception, and never can observe any thing but the perception. When my perceptions are removed for any time, as by sound sleep; so long am I insensible of myself, and may truly be said not to exist.

This suggests a no-self philosophy where self-awareness is nothing but a complex of various instances of sensory awareness, mistakenly leading to an illusory perception of a separate entity called “self”.24 In the absence of any perceptions, ultimately I may be said not to exist. Such no-self philosophy could be called ontological: it claims that the self does not exist at all. It does not merely say that self is not what it looks like, or that it is missing something, or that it is not too important; instead, it claims that self does not exist, period. While Hume may not have meant to go quite that far, many Buddhist philosophers do.25

Nothing is real?

Saying that the self is a mental construction, possibly an illusion, sounds quite radical. Well, how about going a bit further, and denying that anything really exists? While it is undeniable that there is some kind of experience of the world outside of myself, it is equally undeniable that this experience is not the same thing as the world outside. The conscious experience is—according to a conventional neuroscientific view—the product of complex information-processing of incoming signals. Actually, most of conscious experience has little to do with the world that surrounds us here and now, since conscious contents are often a product of planning, replay, and other kinds of thinking and imagination. The interesting thing is how people are misled into believing that this experience, this virtual reality, this simulation, replay, or planning, is actually the reality.

It should be easy to admit that when we plan the future, the planned events are just imagined, and not real. But the “unreality” of consciousness goes deeper than that: In fact, everything in our consciousness is a simulation, a virtual reality, constructed by our mind. This also includes your consciousness of everything you see, hear, feel, taste, and smell at this very moment. Any perceptual experience, as well as any thought, is simulation, or computation, and not the same as reality.26

This is just a rephrasing of well-known neuroscientific facts. As we have discussed several times by now, when you look at this text, your brain is doing complex computations based on the incoming information. Based on the results of those computations, it creates a conscious perception, which contains an image or a feeling of the world around you, including the book or the computer screen on which you see this text. The conscious experience is created by some quasi-miraculous mechanism which science has not yet been able to explain; even saying that it has to be in the brain is speculative. But the important point is that what you see is the virtual reality, or the simulation in the brain, not the real world. The distinction between the world and your conscious experience is basically inherent in the very notion of “experience”. Although I have already said this at the beginning of this chapter, the point requires a longer explanation, so let me try.

Usually, you would say that you “see” this book (let’s just assume for the sake of simplicity that you are reading this text in a book). However, according to the conventional neuroscience viewpoint, what you’re actually conscious of is the interpretation created by your brain, not the book itself. The book simply reflects some photons emitted by a lamp or the sun, these photons enter your eye, and your eye sends electrical signals to your brain. Based on these electrical signals combined with the prior information about the world, your brain creates a virtual reality, including your perception of this book. Meanwhile, based on other sensory information, and again all kinds of internal information and processing, the brain creates your perception of your surroundings, your body, and indeed, your perception of your self.
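To illustrate what combining signals with prior information could mean computationally, here is a minimal sketch of perception as Bayesian inference, a common assumption in computational neuroscience. The interpretations, the prior, and the likelihood numbers are entirely hypothetical; the point is only that the perceived interpretation is computed from evidence plus priors, rather than being the object itself.

```python
# A toy sketch of perception as inference: combine a prior over
# interpretations with the likelihood of the noisy sensory evidence.

def posterior(prior, likelihood, observation):
    """Return P(interpretation | observation) by Bayes' rule."""
    unnormalized = {h: prior[h] * likelihood[h][observation] for h in prior}
    total = sum(unnormalized.values())
    return {h: p / total for h, p in unnormalized.items()}

# Hypothetical example: a blurry rectangular shape could be a book
# or a box; in this room, books are more common (the prior).
prior = {"book": 0.8, "box": 0.2}
likelihood = {
    "book": {"blurry_rectangle": 0.6, "blurry_cube": 0.4},
    "box":  {"blurry_rectangle": 0.3, "blurry_cube": 0.7},
}

print(posterior(prior, likelihood, "blurry_rectangle"))
# {'book': 0.888..., 'box': 0.111...}: what appears in consciousness
# would correspond to the winning interpretation, not to the raw input.
```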

I am not denying here that the book exists. I am merely pointing out that your consciousness, your sensory awareness of this book and everything else, is created by your mind, presumably by some highly complicated process in your brain. You cannot really “see this book”; you cannot be “conscious (or aware) of the book”. You are only aware of the results of some computations performed in your brain, in which the book plays only the role of being the physical source of some radiation that was input to the computations.

The metaphor of virtual reality means that consciousness is similar to wearing virtual-reality goggles which feed your eyes an input so realistic that it looks almost real. In the case of seeing this book, though, it looks exactly real to you because you know nothing better: you have never seen anything that would be somehow closer to reality than this virtual reality. A number of science-fiction movies are based on the idea that somebody could feed fake sensory information directly to your brain, and you would have no idea the sensory input is fake. Descartes already proposed that he could not trust his perception because an “evil demon” might be feeding an illusory external world to his mind, which is precisely why he could only be certain of his own existence. Such claims lead to an extreme form of uncertainty regarding perception.27

My point is that something like that is actually happening to you all the time, according to perfectly mainstream neuroscience. I want to emphasize that I’m not trying to make some radical philosophical point here. There are others who will tell you that the world does not really exist, including proponents of some Eastern philosophical systems, such as Advaita Vedanta, or Mahayana schools of Buddhism, including Zen and Yogācāra.28 I’m trying to steer away from such philosophical speculations about what exists, and merely point out some of the limitations of our perceptual and cognitive systems, in a way which is, I hope, acceptable, even if unpalatable, to most scientists working on those topics.

At the risk of repeating myself: most neuroscientists would agree that sensory processing in the brain produces an interpretation of the incoming input; they would further agree that the brain creates consciousness. Thus, the contents of consciousness are not a direct product of the world, let alone the same thing as the world; they are a construction, an interpretation created by the brain. Yet, we often have the intuitive feeling that the contents of consciousness are somehow identical to the contents of the outside world, which is not the case. Just taking an introductory course in neuroscience or in AI might be enough for many people to give up such an idea. Visual illusions, such as the one in Fig. 10.4, are one way of demonstrating how perception differs from reality.

The Belgian artist René Magritte has a famous painting called La Trahison des images, or “The Treachery of Images”.29 The painting consists of a picture of a pipe, with the text “Ceci n’est pas une pipe”, or “This is not a pipe”, written underneath it. The point is that the painting is just a picture, not the real pipe. While the artist’s purpose was to illustrate the deceiving nature of images, the painting illustrates the illusory nature of consciousness as well. Suppose you actually hold the pipe in your hand and look at it. What appears in your consciousness is a picture, a simulation, or a reflection of the pipe; it is not a pipe. Yet, we have the habit of thinking that the perceptual image is the real pipe, while in reality, it is only somehow indirectly related to the real pipe. Furthermore, the category of a “pipe” is just a mental construct. In this sense, perception is not the real thing; consciousness is not the reality.30

These philosophical points are not simply theoretical speculation: Our attitude to consciousness has a direct effect on suffering. Consider the example of the tiger I saw on TV: If I could somehow develop a different attitude towards the contents of my consciousness, seeing them as mere simulation, I might suffer less. This is precisely why some Buddhist schools claim that the outside world only exists in your imagination—or at least they recommend adopting such an attitude towards the world.31 In the next chapters, we will consider this and many other ways of reducing suffering by changing our thinking patterns as well as using meditation.