A View-Based Protection Model to Prevent Inference Attacks by Third-Party Extensions to Social Computing Platforms

dc.contributor.advisorFong, Philip W. L.
dc.contributor.authorAhmadinejad, Seyed Hossein
dc.contributor.committeememberFong, Philip W. L.
dc.contributor.committeememberSafavi-Naini, Reihaneh
dc.contributor.committeememberLocasto, Michael E.
dc.contributor.committeememberBauer, Mark
dc.contributor.committeememberDebbabi, Mourad
dc.description.abstractThe recent growth in popularity of Social Network Systems (SNSs) has raised serious concerns regarding user privacy. One such concern is the class of inference attacks, in which users' private information is inferred from their public information. This dissertation identifies a more dangerous type of inference attack, in which users' private information is inferred by third-party extensions to SNS platforms. SNSs provide an Application Programming Interface (API) through which third-party applications can access SNS user profiles and, in return, provide functionality for the users. The systematic inference of inaccessible user information by third-party extensions, based on the information accessible through the SNS APIs, is called an SNS API inference attack. Because of the large number of users who subscribe to third-party extensions, even with a meager success rate, SNS API inference attacks could violate the privacy of millions of users. Moreover, such attacks could serve as a building block for further security attacks (e.g., identification attacks). This work first evaluates the feasibility of SNS API inference attacks through an experiment in which sample inference algorithms are developed, executed against a sufficient number of real user profiles, and assessed for their success rates. Next, a view-based protection model is proposed to prevent SNS API inference attacks. This model allows users to share a sanitized version of their profiles with extensions. Sanitizing transformations must be designed to preserve both the privacy and the usefulness of user profiles. The proposed model includes a theoretical framework that defines measures for evaluating the effectiveness of sanitizing transformations. The theoretical framework is paired with an enforcement model that shows how transformations can be designed and applied to sanitize user profiles.
The enforcement model includes a declarative language for articulating transformations, as well as a model of computation that can describe both transformations and access queries. The proposed model of computation is shown to be sufficiently expressive and to satisfy the required properties. Finally, the proposed model is evaluated by assessing the correctness of the theoretical framework and the enforcement model.en_US
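The core idea of a sanitizing transformation can be sketched as follows. This is a minimal illustration only: the field names, generalization rules, and function name are assumptions for exposition, not the dissertation's actual declarative language or model of computation.

```python
# Hypothetical sketch of a view-based sanitizing transformation: a
# third-party extension queries the sanitized view instead of the raw
# profile. All field names and generalization rules are illustrative
# assumptions, not the dissertation's actual model.

def sanitize_profile(profile):
    """Return a sanitized view of a user profile, generalizing fields
    that could otherwise fuel inference attacks."""
    view = {}
    # Generalize the exact birth year to a decade-level range.
    year = profile.get("birth_year")
    if year is not None:
        decade = (year // 10) * 10
        view["birth_decade"] = f"{decade}s"
    # Expose only the number of friends, not their identities.
    view["friend_count"] = len(profile.get("friends", []))
    # Pass through a field the user already shares publicly.
    if "display_name" in profile:
        view["display_name"] = profile["display_name"]
    return view

profile = {
    "display_name": "Alice",
    "birth_year": 1987,
    "friends": ["bob", "carol", "dave"],
}
print(sanitize_profile(profile))
# {'birth_decade': '1980s', 'friend_count': 3, 'display_name': 'Alice'}
```

The tension the dissertation's measures address is visible even in this toy: coarser generalization (e.g., dropping the decade entirely) improves privacy but reduces the view's usefulness to a legitimate extension.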
dc.identifier.citationAhmadinejad, S. H. (2016). A View-Based Protection Model to Prevent Inference Attacks by Third-Party Extensions to Social Computing Platforms (Doctoral thesis, University of Calgary, Calgary, Canada). Retrieved from https://prism.ucalgary.ca. doi:10.11575/PRISM/25087en_US
dc.publisher.facultyGraduate Studies
dc.publisher.institutionUniversity of Calgaryen
dc.rightsUniversity of Calgary graduate students retain copyright ownership and moral rights for their thesis. You may use this material in any way that is permitted by the Copyright Act or through licensing that has been assigned to the document. For uses that are not allowable under copyright legislation or licensing, you are required to seek permission.
dc.subjectComputer Science
dc.subject.classificationSocial networksen_US
dc.subject.classificationAccess Controlen_US
dc.subject.classificationInference attacksen_US
dc.titleA View-Based Protection Model to Prevent Inference Attacks by Third-Party Extensions to Social Computing Platforms
dc.typedoctoral thesis
thesis.degree.disciplineComputer Science
thesis.degree.grantorUniversity of Calgary
thesis.degree.nameDoctor of Philosophy (PhD)