The traditional Buddhist viewpoint tends to emphasize that people are mistaken about the degree of control and permanence in the world, and that it is enough to correct their “ignorance” or “illusions”. On the other hand, consider a super-intelligent agent which has no constraints regarding data or computation. It would presumably estimate uncontrollability and uncertainty correctly and accurately, without any illusions. But it would still incur reward losses, and those losses might not even be particularly small, especially if the outside world is difficult to control (perhaps due to strong physical constraints on the agent’s ability to manipulate it) and exhibits a lot of randomness. So, it is not clear that suffering would be much reduced by correcting “illusions” in the sense that the agent learns to make “optimal” inference (in the sense of probabilistic AI theory) with infinite data and computation. I would assume that the real goal of such Buddhist practice rather amounts to adopting reward expectations which are lower than what is objectively true. In this case, it would lead to increased happiness at the expense of slightly suboptimal inference, where such “suboptimality” refers only to the lack of optimality in maximizing rewards.
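To make the argument concrete, here is a minimal simulation sketch in Python. It is an illustrative assumption, not anything specified in the text: suffering is quantified as reward loss, i.e. the shortfall of the obtained reward below the agent’s expectation, in an environment whose rewards contain irreducible randomness. The function name `simulate` and all parameter values are hypothetical choices made for the example.

```python
import random

def simulate(expectation, true_mean, noise, n_steps=10_000, seed=0):
    """Average 'suffering' over n_steps, modelled as reward loss:
    max(0, expectation - obtained_reward). This quantification is an
    illustrative assumption for the sketch, not a claim from the text."""
    rng = random.Random(seed)
    suffering = 0.0
    for _ in range(n_steps):
        # Rewards fluctuate around true_mean: uncontrollable randomness
        # that no amount of data or computation can remove.
        reward = true_mean + rng.gauss(0.0, noise)
        suffering += max(0.0, expectation - reward)
    return suffering / n_steps

TRUE_MEAN, NOISE = 1.0, 1.0

# A perfectly calibrated agent: its expectation equals the true mean reward,
# i.e. optimal inference with unlimited data and computation.
calibrated = simulate(expectation=TRUE_MEAN, true_mean=TRUE_MEAN, noise=NOISE)

# A "pessimistic" agent: its expectation is deliberately set below the
# true mean, i.e. a miscalibrated but lower reward expectation.
pessimistic = simulate(expectation=TRUE_MEAN - 0.5, true_mean=TRUE_MEAN, noise=NOISE)

print(f"mean suffering, calibrated expectation: {calibrated:.3f}")
print(f"mean suffering, lowered expectation:    {pessimistic:.3f}")
```

The sketch illustrates both halves of the argument: even the calibrated agent, whose inference is optimal, accumulates positive suffering purely because of the randomness of the rewards; the pessimistic agent suffers less, at the price of holding an expectation that is objectively miscalibrated.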