It could be argued that the lack of a good model, or “inductive bias”, is another limitation. Inductive bias can refer to slightly different things: on the one hand, it is sometimes simply used as a fancy term for a Bayesian prior in a probabilistic model; on the other hand, it can refer to constraints that are more structural, such as the choice of the family of nonlinearities, the regularization, or other computational structures used in a model (which could still, in most cases, be seen as Bayesian priors in a hierarchical Bayesian model). Basically, what we are talking about here is that the agent might not have a good model family from which to pick its model of the world, and might in particular suffer from overfitting (see footnote 4 in Chapter 4). I take here the viewpoint that, fundamentally, a good inductive bias is only necessary because the data is limited: if the data were infinite, the proper inductive bias could be learned from the data by testing the performance of the candidate models on a new test set which was not used in the learning (Feinman and Lake, 2018). Therefore, I do not discuss inductive bias in any detail in this book, and subsume the problems due to the lack of a correct inductive bias under the heading of “insufficient data”.
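To make the argument concrete, here is a minimal sketch in Python (the helper names make_data and fit_and_score are purely illustrative, not from any cited work) of selecting an inductive bias empirically from held-out data. Each polynomial degree plays the role of one candidate model family; with plentiful data, the held-out test error reliably identifies the family matching the true data-generating process, whereas with scarce data the selection is noisy and a good prior choice of family matters more.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_data(n):
    # Hypothetical ground truth: a cubic function plus Gaussian noise.
    x = rng.uniform(-1, 1, n)
    y = x**3 - 0.5 * x + rng.normal(0.0, 0.1, n)
    return x, y

def fit_and_score(degree, x_train, y_train, x_test, y_test):
    # Each degree defines one candidate model family
    # (one "inductive bias" of a given capacity).
    coeffs = np.polyfit(x_train, y_train, degree)
    pred = np.polyval(coeffs, x_test)
    return np.mean((pred - y_test) ** 2)  # held-out mean squared error

for n in (20, 20000):  # scarce vs. plentiful training data
    x_tr, y_tr = make_data(n)
    x_te, y_te = make_data(5000)  # fresh test set, never used in learning
    scores = {d: fit_and_score(d, x_tr, y_tr, x_te, y_te)
              for d in range(1, 10)}
    best = min(scores, key=scores.get)
    print(f"n={n}: selected polynomial degree {best}")
```

In the large-n case the selection typically converges on the true cubic family, which is the sense in which a correct inductive bias becomes learnable when data is abundant; the small-n case illustrates why limited data makes a good prior choice of model family necessary.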