Unboxing the "Black Box": Learning Interpretable Deep Learning Features of Brain Aging

dc.contributor.advisor: Frayne, Richard
dc.contributor.author: Souto Maior Neto, Luis
dc.contributor.committeemember: Pichardo, Samuel
dc.contributor.committeemember: Smith, Eric Edward
dc.contributor.committeemember: Harris, Ashley D.
dc.contributor.committeemember: Beg, Mirza Faisal
dc.date: 2020-02
dc.date.accessioned: 2019-11-28T16:46:52Z
dc.date.available: 2019-11-28T16:46:52Z
dc.date.issued: 2019-11
dc.description.abstract: Deep learning (DL) algorithms are state-of-the-art techniques for automatic inference tasks such as classification and regression in medical imaging and many other fields. Despite growing interest, DL models have seen limited adoption in practical settings because they are often considered "black boxes": their inner workings are not easily interpretable by humans, which has restricted wider use in medicine. In this work, I apply DL models to predict subject age from brain magnetic resonance (MR) data. While accurate predictions (errors < 2 years) are achieved, the purpose of this work is not to establish prediction accuracy but to better understand brain aging by studying the learned representations of the trained models. I use autoencoder-based models that enable translation between the domain of the model's internal representations of the data, about which we have little understanding, and the domain of MR images, about which we have expert knowledge. The goal of this research is to investigate whether such DL models can learn representations of age-related features similar to what is already known in the literature. I show that such DL models, when trained to predict brain age, are capable of learning known features of brain aging, such as brain atrophy. In addition, this approach may potentially identify new features of aging on brain images.
dc.identifier.citation: Souto Maior Neto, L. (2019). Unboxing the "Black Box": Learning Interpretable Deep Learning Features of Brain Aging (Master's thesis, University of Calgary, Calgary, Canada). Retrieved from https://prism.ucalgary.ca.
dc.identifier.doi: http://dx.doi.org/10.11575/PRISM/37275
dc.identifier.uri: http://hdl.handle.net/1880/111255
dc.language.iso: eng
dc.publisher.faculty: Cumming School of Medicine
dc.publisher.institution: University of Calgary
dc.rights: University of Calgary graduate students retain copyright ownership and moral rights for their thesis. You may use this material in any way that is permitted by the Copyright Act or through licensing that has been assigned to the document. For uses that are not allowable under copyright legislation or licensing, you are required to seek permission.
dc.subject.classification: Neuroscience
dc.subject.classification: Biophysics--Medical
dc.subject.classification: Statistics
dc.subject.classification: Artificial Intelligence
dc.subject.classification: Computer Science
dc.subject.classification: Engineering--Biomedical
dc.title: Unboxing the "Black Box": Learning Interpretable Deep Learning Features of Brain Aging
dc.type: master thesis
thesis.degree.discipline: Engineering – Biomedical
thesis.degree.grantor: University of Calgary
thesis.degree.name: Master of Science (MSc)
ucalgary.item.requestcopy: true
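
Note: The abstract describes an autoencoder-based model whose internal representation can be decoded back to MR image space while also being used to predict subject age. The following is a minimal, hypothetical sketch of that general design, not the thesis implementation; the framework (PyTorch), layer sizes, 64^3 input resolution, loss weighting alpha, and all names are illustrative assumptions.

# Hypothetical sketch: a small 3-D convolutional autoencoder whose latent code
# also feeds an age-regression head. Assumes single-channel MR volumes
# resampled to 64x64x64 voxels; none of these choices come from the thesis.
import torch
import torch.nn as nn

class AgeAwareAutoencoder(nn.Module):
    def __init__(self, latent_dim: int = 128):
        super().__init__()
        # Encoder: MR volume -> latent code
        self.encoder = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=4, stride=2, padding=1),   # 64 -> 32
            nn.ReLU(),
            nn.Conv3d(16, 32, kernel_size=4, stride=2, padding=1),  # 32 -> 16
            nn.ReLU(),
            nn.Conv3d(32, 64, kernel_size=4, stride=2, padding=1),  # 16 -> 8
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * 8 * 8 * 8, latent_dim),
        )
        # Decoder: latent code -> reconstructed MR volume (the "translation"
        # back to image space that makes the representation inspectable)
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 64 * 8 * 8 * 8),
            nn.Unflatten(1, (64, 8, 8, 8)),
            nn.ConvTranspose3d(64, 32, kernel_size=4, stride=2, padding=1),  # 8 -> 16
            nn.ReLU(),
            nn.ConvTranspose3d(32, 16, kernel_size=4, stride=2, padding=1),  # 16 -> 32
            nn.ReLU(),
            nn.ConvTranspose3d(16, 1, kernel_size=4, stride=2, padding=1),   # 32 -> 64
        )
        # Regression head: latent code -> predicted age (years)
        self.age_head = nn.Linear(latent_dim, 1)

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), self.age_head(z).squeeze(1), z

# Joint loss: the reconstruction term keeps the latent space decodable to
# images, while the age term forces it to encode age-related anatomy.
def loss_fn(recon, x, age_pred, age_true, alpha: float = 1.0):
    return nn.functional.mse_loss(recon, x) + alpha * nn.functional.l1_loss(age_pred, age_true)

if __name__ == "__main__":
    model = AgeAwareAutoencoder()
    x = torch.randn(2, 1, 64, 64, 64)   # two dummy MR volumes
    age = torch.tensor([34.0, 71.0])    # dummy ages in years
    recon, age_pred, z = model(x)
    print(loss_fn(recon, x, age_pred, age).item())

In a design like this, perturbing the latent code along age-sensitive directions and decoding the result back to image space could reveal which anatomical changes (e.g., atrophy) the model associates with aging, which is the kind of interpretability the abstract targets.
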
Files
Original bundle
Name: ucalgary_2019_souto-maior-neto_luis.pdf
Size: 24.93 MB
Format: Adobe Portable Document Format
License bundle
Name: license.txt
Size: 2.62 KB
Format: Item-specific license agreed upon to submission