Authors: Witten, Ian H.
Manzara, Leonard C.
Conklin, Darrell
Keywords: Computer Science
Issue Date: 1-May-1992
Abstract: The information content of each successive note in a piece of music is not an intrinsic musical property but depends on the listener's own model of a genre of music. Human listeners' models can be elicited by having them guess successive notes and assign probabilities to their guesses by gambling. Computational models can be constructed by developing a structural framework for prediction, and "training" the system by having it assimilate a corpus of sample compositions and adjust its internal probability estimates accordingly. These two modeling techniques turn out to yield remarkably similar values for the information content, or "entropy," of the Bach chorale melodies. While previous research has concentrated on the overall information content of whole pieces of music, the present study evaluates and compares the two kinds of model in fine detail. Their predictions for two particular chorale melodies are analyzed on a note-by-note basis, and the smoothed information profiles of the chorales are examined and compared. Apart from the intrinsic interest of comparing human with computational models of music, several conclusions are drawn for the improvement of computational models.
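The note-by-note analysis described in the abstract rests on a standard information-theoretic identity: if a model assigns probability p to the note that actually occurs, that note carries -log2(p) bits of information, and a smoothed running average of these values gives an information profile over the melody. The following is a minimal illustrative sketch of that computation; the function names, the moving-average smoothing, and the probability values are all assumptions for illustration, not taken from the paper.

```python
import math

def information_content(p):
    """Bits of information carried by an event a model assigned probability p."""
    return -math.log2(p)

def smoothed_profile(probs, window=3):
    """Per-note information profile smoothed with a simple moving average.

    `probs` holds the model's probability estimate for each note that
    actually occurred; the smoothing window is a hypothetical choice.
    """
    ic = [information_content(p) for p in probs]
    half = window // 2
    out = []
    for i in range(len(ic)):
        lo, hi = max(0, i - half), min(len(ic), i + half + 1)
        out.append(sum(ic[lo:hi]) / (hi - lo))
    return out

# Invented probabilities for a five-note fragment (not from the chorales):
probs = [0.5, 0.25, 0.125, 0.5, 0.0625]
profile = smoothed_profile(probs)
```

A note the model considered likely (p = 0.5) contributes only 1 bit, while a surprising one (p = 0.0625) contributes 4 bits; peaks in the smoothed profile therefore mark passages the model finds hard to predict.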
Appears in Collections: Witten, Ian; Manzara, Leonard

Files in This Item:
File: 1992-477-15.pdf   Size: 2.14 MB   Format: Adobe PDF

Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.