Abstract
The construction of a procedure from a few examples of its
execution--a major requirement for practical programming-by-example--is
inevitably a drastically underdetermined problem. At the core of any
incremental programming-by-example scheme is a heuristic module that
creates and modifies the program, or "model," as it is formed. In
general there are countless ways that the model might plausibly be
modified; the problem is to prune the set of candidate models to a
manageable size.
We develop a principled method for evaluating and comparing alternative
models of a sequence of actions. Models are finite-state automata and
therefore can contain branches and loops. Based on information theory,
the method computes the entropy of a model in conjunction with the
entropy of the sequence used to form it--a novel form of the "minimum
description length" principle. The idea is to measure a model's
predictive power, taking into account the extent to which it is justified by
the sequence that has been used to create it. The performance of the
measure is illustrated on test cases and accords with intuition about when
sufficient evidence has accumulated to prefer a more complex model to a
simpler one.
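
To make the comparison concrete, the following is a minimal sketch of an MDL-style
score for candidate automata, written in Python. The cost assumptions are
hypothetical (a fixed per-transition model cost and a per-step choice cost), not the
coding scheme developed in the paper; it only illustrates how model entropy and
sequence entropy trade off against each other.

import math

# An automaton is a dict: state -> {action: next_state}.
# Assumed costs (illustrative only):
#   - model cost: each transition costs log2(#actions) + log2(#states) bits
#   - data cost: selecting one of k outgoing transitions at a state costs
#     log2(k) bits; a forced move (k == 1) costs nothing.

def model_bits(fsa, n_actions):
    n_states = len(fsa)
    bits = 0.0
    for transitions in fsa.values():
        bits += len(transitions) * (math.log2(n_actions) + math.log2(n_states))
    return bits

def sequence_bits(fsa, start, seq):
    bits, state = 0.0, start
    for action in seq:
        choices = fsa[state]
        if action not in choices:
            return float("inf")   # the model cannot generate the sequence
        if len(choices) > 1:
            bits += math.log2(len(choices))
        state = choices[action]
    return bits

def total_bits(fsa, start, seq, n_actions):
    return model_bits(fsa, n_actions) + sequence_bits(fsa, start, seq)

# Toy comparison on the action sequence "ababab":
seq = list("ababab")
loop = {0: {"a": 0, "b": 0}}            # one state: cheap model, costly data
alt  = {0: {"a": 1}, 1: {"b": 0}}       # two states: costlier model, free data
for name, fsa in [("loop", loop), ("alternating", alt)]:
    print(name, round(total_bits(fsa, 0, seq, n_actions=2), 2))

Under these assumed costs the single-state loop scores 8 bits and the two-state
alternating model 4 bits, so the more complex model is preferred; with only the
two-step sequence "ab" the two models tie, echoing the abstract's point that a
richer model should win only once sufficient evidence has accumulated.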