Plausible Prediction by Bayesian Inference

Henry Tirri: Plausible Prediction by Bayesian Inference. PhD Thesis, Report A-1997-1, Department of Computer Science, University of Helsinki, June 1997. 122+36 pages. <http://www.cs.helsinki.fi/TR/A-1997/1>

Full paper: gzipped PostScript file
Metadata: XML file

Abstract

The capability to perform inference with uncertain and incomplete information is characteristic of intelligent systems. Many of the research issues in artificial intelligence and computational intelligence can in fact be viewed as topics in the "science of uncertainty," which addresses the problem of plausible inference, i.e., the optimal processing of incomplete information. The various approaches to modeling and implementing intelligent behavior, such as neural networks, fuzzy logic, non-monotonic (default) logics, and Bayesian networks, all address the same problem: finding an appropriate language and inference mechanism for plausible inference, which is needed to implement activities such as prediction, decision making, and planning.

In this work we study the problem of plausible prediction, i.e., the problem of building predictive models from data in the presence of uncertainty. Our approach is based on the language of Bayesian probability theory, in both its traditional and information-theoretic forms. We study Bayesian prediction theoretically and empirically with finite mixture models. Such models are interesting because they can model complex distributions accurately with few parameters. In addition, finite mixture models can be viewed as a probabilistic formulation of many model families commonly used in machine learning and computational intelligence. We first address the question of how an intelligent system should predict given the available information. We present three alternatives for probabilistic prediction: single model based prediction, evidence based prediction, and minimum encoding based prediction. We then compare the empirical performance of these alternatives using a class of finite mixture models. The empirical results demonstrate that, especially for small data sets, both the evidence and minimum encoding approaches outperform the traditionally used single model approach.
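In standard Bayesian notation (an illustrative sketch only, not necessarily the exact formulation used in the thesis), the three alternatives can be contrasted as follows, writing D for the training data, x for a new case, and theta for a model:

    \text{single model:}     \quad P(x \mid D) \approx P(x \mid \hat{\theta}),   \quad \hat{\theta}   = \arg\max_{\theta} P(\theta \mid D)
    \text{evidence:}         \quad P(x \mid D) = \int P(x \mid \theta)\, P(\theta \mid D)\, d\theta
    \text{minimum encoding:} \quad P(x \mid D) \approx P(x \mid \tilde{\theta}), \quad \tilde{\theta} = \arg\min_{\theta} \bigl[ L(\theta) + L(D \mid \theta) \bigr]

where L(\cdot) denotes code length in an MDL/MML-style encoding of the model and of the data given the model.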

We then focus on the problem of constructing finite mixture models from the given data and a priori information. We give the Bayesian solution for inferring both the most probable finite mixture model structure, i.e., the proper number of mixture components, and the most probable model within that class. For general mixture models the exact solution to both problems is computationally infeasible, so we also evaluate the quality of approximate approaches.
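In the same spirit (again a sketch rather than the thesis's exact derivation), selecting the most probable structure amounts to maximizing the posterior probability of the number of mixture components K,

    P(K \mid D) \propto P(K) \int P(D \mid \theta, K)\, P(\theta \mid K)\, d\theta,

where the integral, the marginal likelihood (evidence) of structure K, has no closed form for general finite mixtures; this is what makes the exact solution infeasible and approximate methods necessary.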

The Bayesian predictive approach presented can be applied to a wide class of prediction problems in various application domains, e.g., medical and fault diagnosis problems, design problems, and sales support systems. Using publicly available data sets, we demonstrate empirically that Bayesian prediction with finite mixtures is highly competitive with the results achieved by popular non-Bayesian approaches using, for example, neural network and decision tree models. The Bayesian prediction method presented constitutes the kernel of the D-SIDE/C-SIDE software currently used in industrial applications.
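For concreteness, below is a minimal hypothetical sketch of classification with class-conditional finite mixtures. It uses scikit-learn's maximum-likelihood Gaussian mixtures fitted by EM, not the Bayesian or minimum-encoding machinery developed in the thesis, and it is unrelated to the actual D-SIDE/C-SIDE implementation; the function names and parameters are illustrative only.

    # Hypothetical sketch: one finite (Gaussian) mixture per class, prediction by
    # Bayes' rule.  Plain maximum-likelihood EM via scikit-learn, NOT the Bayesian
    # evidence or minimum-encoding prediction studied in the thesis.
    import numpy as np
    from sklearn.mixture import GaussianMixture

    def fit_class_mixtures(X, y, n_components=3):
        """Fit one Gaussian mixture per class and estimate class priors."""
        models, priors = {}, {}
        for c in np.unique(y):
            Xc = X[y == c]
            models[c] = GaussianMixture(n_components=n_components).fit(Xc)
            priors[c] = len(Xc) / len(X)
        return models, priors

    def predict(models, priors, X):
        """Pick the class maximizing log p(c) + log p(x | mixture of class c)."""
        classes = sorted(models)
        scores = np.column_stack(
            [models[c].score_samples(X) + np.log(priors[c]) for c in classes]
        )
        return np.array(classes)[np.argmax(scores, axis=1)]

A fully Bayesian variant would, roughly speaking, average such predictions over the mixture parameters and over the number of components instead of committing to single EM point estimates.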

Index Terms

Categories and Subject Descriptors:
G.3 [Probability and Statistics]: Probabilistic algorithms
I.2.3 [Artificial Intelligence]: Deduction and Theorem Proving - Uncertainty, "fuzzy," and probabilistic reasoning
I.2.6 [Artificial Intelligence]: Learning - Concept learning, Induction

General Terms: Theory, Algorithms

Additional Key Words and Phrases: Bayesian inference, prediction, classification, intelligent systems

