|Authors:||Witten, Ian H.|
|Abstract:||A central problem in machine learning research is the evaluation of proposed theories from a hypothesis space. Without some sort of preference criterion, any two theories that "explain" a set of examples are equally acceptable. This paper presents complexity-based induction, a well-founded objective preference criterion. Complexity measures are described in two inductive inference settings: logical, where the observable statements entailed by a theory form a set; and probabilistic, where this set is governed by a probability distribution. With complexity-based induction, the goals of logical and probabilistic induction can be expressed identically: to extract maximal redundancy from the world, in other words, to produce strings that are random. A major strength of the method is its application to domains where negative examples of concepts are scarce or an oracle is unavailable. Examples of logic program induction are given to illustrate the technique.|
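The preference criterion the abstract describes can be illustrated with a minimal two-part-code sketch in the minimum-description-length spirit: a theory is preferred when stating it plus encoding the examples under it takes fewer bits. This is an illustration of the general idea only, not the paper's algorithm; the function names and bit costs below are invented for the example.

```python
# Illustrative sketch of a complexity-based preference criterion:
# prefer the theory with the shorter two-part code (theory + data given theory).
# All names and numbers here are hypothetical, chosen only to show the idea.
import math

def code_length_bits(prob: float) -> float:
    """Ideal code length, in bits, for an event of the given probability."""
    return -math.log2(prob)

def two_part_length(theory_bits: float, example_probs: list) -> float:
    """Bits to state the theory plus bits to encode the examples under it."""
    return theory_bits + sum(code_length_bits(p) for p in example_probs)

# Ten observed examples. Theory A is trivial: cheap to state (1 bit) but
# assigns each example probability 0.5. Theory B costs more bits to state
# (4 bits) but predicts each example with probability 0.9.
n_examples = 10
theory_a = two_part_length(theory_bits=1.0, example_probs=[0.5] * n_examples)
theory_b = two_part_length(theory_bits=4.0, example_probs=[0.9] * n_examples)

# The criterion prefers whichever theory yields the shorter total code;
# here the more predictive theory B compresses the examples enough to win.
preferred = "B" if theory_b < theory_a else "A"
print(f"A: {theory_a:.2f} bits, B: {theory_b:.2f} bits -> prefer {preferred}")
```

A theory that extracts more redundancy from the examples leaves a shorter residual encoding, which is the sense in which maximal compression corresponds to producing strings that are random.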
|Appears in Collections:||Witten, Ian|