Abstract
A central problem in machine learning research is the
evaluation of proposed theories from a hypothesis space. Without some sort of
preference criterion, any two theories that "explain" a set of examples
are equally acceptable. This paper presents complexity-based induction,
a well-founded objective preference criterion. Complexity measures are
described in two inductive inference settings: logical, where the observable
statements entailed by a theory form a set; and probabilistic, where this set
is governed by a probability distribution. With complexity-based induction,
the goals of logical and probabilistic induction can be expressed
identically: to extract maximal redundancy from the world, in other words,
to produce strings that are random. A major strength of the method is its
application to domains where negative examples of concepts are scarce or an
oracle is unavailable. Examples of logic program induction are given to
illustrate the technique.
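The abstract does not reproduce the complexity measures themselves. As a rough illustration only, the Python sketch below shows a two-part, minimum-description-length style preference criterion of the kind the title suggests: a theory is scored by the bits needed to state it plus the bits needed to encode the examples once the theory is known, and the lowest-scoring theory is preferred. All names and bit counts here are hypothetical.

    # Minimal sketch of a two-part (MDL-style) complexity criterion.
    # All candidate names and bit counts are hypothetical illustrations.

    def description_length(theory_bits, data_bits_given_theory):
        # Total code length: bits to state the theory plus bits to encode
        # the observed examples given the theory. A theory that captures
        # all the redundancy in the data leaves only incompressible,
        # random-looking residue, so this sum is minimized.
        # In the probabilistic setting the data term would be the Shannon
        # code length -log2 P(examples | theory).
        return theory_bits + data_bits_given_theory

    def prefer(candidates):
        # Complexity-based preference: choose the candidate theory with
        # the smallest total description length.
        return min(candidates,
                   key=lambda c: description_length(c["theory_bits"],
                                                    c["data_bits"]))

    candidates = [
        # A compact general rule that encodes the examples cheaply.
        {"name": "general rule", "theory_bits": 40.0, "data_bits": 12.0},
        # A verbose theory that simply lists the examples verbatim.
        {"name": "example list", "theory_bits": 96.0, "data_bits": 0.0},
    ]

    print(prefer(candidates)["name"])  # -> general rule

Under these assumed counts the general rule wins (52 bits versus 96), mirroring the preference for theories that extract redundancy rather than memorize the example set.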