Unboxing the "Black Box": Learning Interpretable Deep Learning Features of Brain Aging

Date
2019-11
Abstract
Deep learning (DL) algorithms are state-of-the-art techniques for automatic inference tasks such as classification and regression in medical imaging and many other fields. Despite growing interest, DL models have seen limited adoption in practical settings because they are often considered "black boxes": their inner workings are not easily interpretable by humans, which has restricted their wider use in medicine. In this work, I apply DL models to predict subject age from brain magnetic resonance (MR) data. While the predictions are accurate (errors under 2 years), the purpose of this work is not to establish prediction accuracy but to better understand brain aging by studying the learned representations of the trained models. I use autoencoder-based models that enable translation between the domain of the model's internal representations of the data, about which we have little understanding, and the domain of MR images, about which we have expert knowledge. The goal of this research is to investigate whether such DL models can learn representations of age-related features similar to those already described in the literature. I show that DL models trained to predict brain age are capable of learning known features of brain aging, such as brain atrophy. In addition, this approach may help identify new features of aging on brain images.
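The core idea in the abstract, encoding an image into a latent representation, regressing age from that representation, and decoding perturbed latents back into image space to visualize "aging" features, can be sketched in miniature. The following is a hypothetical illustration only: a linear autoencoder (PCA) and synthetic toy data stand in for the thesis's deep model and MR dataset, and all variable names are invented for this example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for MR data: 200 "images" of 64 voxels whose intensity
# pattern drifts linearly with age (synthetic, NOT the thesis dataset).
n, d, k = 200, 64, 2
age = rng.uniform(20, 80, size=n)
direction = rng.normal(size=d)
X = np.outer(age, direction) / 80 + 0.1 * rng.normal(size=(n, d))

# Linear autoencoder via SVD: the encoder projects images into a
# k-dimensional latent space; the decoder maps latent codes back to images.
mu = X.mean(axis=0)
U, S, Vt = np.linalg.svd(X - mu, full_matrices=False)
encode = lambda x: (x - mu) @ Vt[:k].T   # image -> latent code
decode = lambda z: z @ Vt[:k] + mu       # latent code -> image

Z = encode(X)

# Age-regression head on the latent codes (ordinary least squares).
A = np.column_stack([Z, np.ones(n)])
w, *_ = np.linalg.lstsq(A, age, rcond=None)
pred = A @ w

# Interpretability step described in the abstract: push one subject's
# latent code along the learned age direction, then decode it back into
# image space to inspect what the model treats as an aging feature.
z0 = Z[0]
aged_image = decode(z0 + 5 * w[:k] / np.linalg.norm(w[:k]))
```

Because both the encoder and decoder are available, any direction found in latent space (here, the regression weights) can be rendered as an image-space change, which is what makes the learned features inspectable by a human expert.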
Citation
Souto Maior Neto, L. (2019). Unboxing the "Black Box": Learning Interpretable Deep Learning Features of Brain Aging (Master's thesis, University of Calgary, Calgary, Canada). Retrieved from https://prism.ucalgary.ca.