Abstract
The information content of each successive note in a
piece of music is not an intrinsic musical property but depends
on the listener's own model of a genre of music. Human listeners'
models can be elicited by having them guess successive notes and
assign probabilities to their guesses by gambling. Computational
models can be constructed by developing a structural framework for
prediction, and "training" the system by having it assimilate a
corpus of sample compositions and adjust its internal probability
estimates accordingly. These two modeling techniques turn out to
yield remarkably similar values for the information content, or
"entropy," of the Bach chorale melodies.
While previous research has concentrated on the overall
information content of whole pieces of music, the present study
evaluates and compares the two kinds of model in fine detail. Their
predictions for two particular chorale melodies are analyzed on a
note-by-note basis, and the smoothed information profiles of the
chorales are examined and compared. Apart from the intrinsic interest
of comparing human with computational models of music, several
conclusions are drawn for the improvement of computational models.
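As an illustration of the note-by-note analysis described above, the sketch
below computes a per-note information profile from model-assigned probabilities
and smooths it with a moving average. This is a hypothetical reconstruction,
not the paper's code; the probability values and the window size are
illustrative assumptions.

import math

def information_profile(probabilities):
    """Per-note information content in bits: h_i = -log2(p_i)."""
    return [-math.log2(p) for p in probabilities]

def smooth(profile, window=3):
    """Moving-average smoothing of an information profile.
    The window size is an illustrative assumption, not the paper's."""
    half = window // 2
    smoothed = []
    for i in range(len(profile)):
        lo = max(0, i - half)
        hi = min(len(profile), i + half + 1)
        smoothed.append(sum(profile[lo:hi]) / (hi - lo))
    return smoothed

# Hypothetical model probabilities for successive notes of a melody.
probs = [0.50, 0.25, 0.70, 0.05, 0.30, 0.60, 0.10, 0.80]
profile = information_profile(probs)
print([round(h, 2) for h in profile])
print([round(h, 2) for h in smooth(profile)])

Smoothing of this kind turns the spiky per-note values into the kind of
information profile that can be compared across models and across chorales.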