Authors: Witten, Ian H.; Conklin, Darrell
Issued: 1991-07-01
Deposited: 2008-02-27
Title: COMPLEXITY-BASED INDUCTION
Type: unknown
Series/Report no.: 1991-439-23
Subject: Computer Science
Language: English
Handle: http://hdl.handle.net/1880/46187
DOI: http://dx.doi.org/10.11575/PRISM/31146

Abstract: A central problem in machine learning research is the evaluation of proposed theories from a hypothesis space. Without some sort of preference criterion, any two theories that "explain" a set of examples are equally acceptable. This paper presents complexity-based induction, a well-founded objective preference criterion. Complexity measures are described in two inductive inference settings: logical, where the observable statements entailed by a theory form a set; and probabilistic, where this set is governed by a probability distribution. With complexity-based induction, the goals of logical and probabilistic induction can be expressed identically: to extract maximal redundancy from the world, in other words, to produce strings that are random. A major strength of the method is its application to domains where negative examples of concepts are scarce or an oracle is unavailable. Examples of logic program induction are given to illustrate the technique.
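
The preference criterion sketched in the abstract, choosing the theory that extracts the most redundancy from the data, can be illustrated with a toy two-part description-length calculation. The following Python sketch is an assumption-laden illustration, not the paper's formalism: a "theory" here is just a candidate repeating unit for a byte string, scored as bits to state the unit plus bits to encode the repetition count and any exception characters.

```python
def description_length(unit: bytes, data: bytes) -> int:
    """Toy two-part code length, in bits, for explaining `data`
    as repetitions of `unit` plus a list of exception characters.
    (Illustrative model class only; not the paper's measure.)"""
    l_theory = 8 * len(unit)  # spell out the unit, 8 bits per byte
    # expand the unit to cover the data, then count mismatches
    expansion = (unit * (len(data) // len(unit) + 1))[:len(data)]
    exceptions = sum(a != b for a, b in zip(data, expansion))
    # encode the repetition count, then each exception as (position, char)
    l_data = len(data).bit_length() + exceptions * (len(data).bit_length() + 8)
    return l_theory + l_data

data = b"abcabcabcabcabcabcabcabcabcabc"  # 30 bytes of redundant "world"
for unit in (b"abc", b"ab", data):
    # the preferred theory is the one minimizing total description length
    print(unit, description_length(unit, data))
```

Here the compact theory `b"abc"` wins: it captures all the redundancy, leaving a short, essentially random residual, while the degenerate theory that simply restates the data pays its full length up front. This is the sense in which logical and probabilistic induction share one goal under the criterion.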