This problem can be contrasted with the problem of whether a computer can see. Suppose a robot moves around in its environment, avoiding obstacles and performing some task thanks to input from its camera. Now, how would you answer the question of whether the robot is able to “see”? Most people, including scientists working on such systems, would casually say that the computer sees: for example, that it sees the obstacles. If pressed hard on what that means, they would probably admit that the computer “does not really see”, presumably because there is no consciousness involved. What is very interesting is that this ambiguity is not usually considered a problem: serious debates on whether such a robot actually “sees” or not are rare. When we talk about suffering in an AI, the situation is, in principle, quite similar. However, much more heated debates can be expected on the question of whether the AI actually suffers. This lack of clarity about whether an AI can suffer seems much more difficult to accept than the corresponding lack of clarity about whether it can see.