To get into more mathematical detail, Bayesian inference wants to compute the probability P(cause given effect), where P denotes probability. More precisely, this is a conditional probability, i.e. the probability of one thing (the cause) given that another thing (the effect) has been observed. This is the typical case of inference: we observe the effects and want to find the causes, or at least their probabilities. The celebrated Bayes formula then says that this probability is equal to P(effect given cause) × P(cause) / P(effect). Here, the term P(effect given cause) can be computed from a physical model of the world implemented in your brain. P(cause) is the prior probability of a given cause; this is where the prior information about what typically happens in the world comes in. P(effect) is less important because we are not comparing different effects, so it is constant; in fact, it can be computed from the other probabilities by a simple formula, namely by summing P(effect given cause) × P(cause) over all the candidate causes.
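The computation described above can be sketched in a few lines of code. The scenario and the numbers here are purely illustrative (two hypothetical causes, "rain" and "sprinkler", for the observed effect "wet grass"); the point is only to show how the prior, the likelihood, and the marginal probability of the effect combine in Bayes' formula.

```python
# Illustrative (made-up) numbers: two candidate causes for the
# observed effect "wet grass".
priors = {"rain": 0.2, "sprinkler": 0.8}       # P(cause)
likelihoods = {"rain": 0.9, "sprinkler": 0.5}  # P(effect given cause)

# P(effect): sum P(effect given cause) * P(cause) over all causes
# (the "simple formula", i.e. the law of total probability).
p_effect = sum(priors[c] * likelihoods[c] for c in priors)

# Bayes' formula:
# P(cause given effect) = P(effect given cause) * P(cause) / P(effect)
posteriors = {c: likelihoods[c] * priors[c] / p_effect for c in priors}

print(posteriors)
```

Note that dividing by P(effect) simply rescales the numerators so that the posterior probabilities of all candidate causes sum to one, which is why that term plays no role in comparing causes against each other.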