A View-Based Protection Model to Prevent Inference Attacks by Third-Party Extensions to Social Computing Platforms
The recent rapid growth in popularity of Social Network Systems (SNSs) has raised serious concerns about user privacy. One such concern, known as inference attacks, is the leakage of users' private information from their public information. This dissertation identifies a more dangerous type of inference attack, in which users' private information is inferred by third-party extensions to SNS platforms. SNSs provide an Application Programming Interface (API) that third-party applications can use to access SNS user profiles and, in return, provide some functionality for the users. The systematic inference of user-inaccessible information by third-party extensions from the information accessible through SNS APIs is called an SNS API inference attack. Because of the large number of users who subscribe to third-party extensions, SNS API inference attacks could violate the privacy of millions of users even with a meager success rate. Moreover, SNS API inference attacks could serve as a building block for further security attacks (e.g., identification attacks). This work first evaluates the feasibility of SNS API inference attacks through an experiment in which sample inference algorithms will be developed, executed against a sufficiently large number of real user profiles, and assessed for their success rate. Next, a view-based protection model will be proposed to prevent SNS API inference attacks. This model allows users to share a sanitized version of their profiles with extensions. Sanitizing transformations must be designed to preserve both the privacy and the usefulness of user profiles. The proposed model includes a theoretical framework that defines measures for evaluating the effectiveness of sanitizing transformations. The theoretical framework will be paired with an enforcement model showing how transformations can actually be designed to sanitize user profiles.
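The idea of sharing a sanitized view of a profile rather than the raw profile can be illustrated with a minimal sketch. The attribute names and the particular transformations below (generalizing a birth date to a year, replacing a city with its country, truncating a friend list) are illustrative assumptions, not the dissertation's actual model:

```python
# Minimal sketch of a view-based sanitizing transformation.
# All attribute names and transformations are hypothetical examples.

def sanitize_profile(profile):
    """Return a sanitized view of a user profile for a third-party extension.

    - 'birthdate' is generalized to the year alone, weakening age-based inference.
    - 'city' is suppressed; only the coarser 'country' is exposed.
    - 'friends' is truncated, limiting link-based inference.
    """
    view = {}
    if "birthdate" in profile:                        # e.g. "1990-07-14"
        view["birth_year"] = profile["birthdate"][:4]
    if "country" in profile:
        view["location"] = profile["country"]         # the finer-grained city is dropped
    if "friends" in profile:
        view["friends"] = profile["friends"][:10]     # cap the visible friend list
    return view

# Usage: the extension only ever sees the sanitized view.
raw = {
    "birthdate": "1990-07-14",
    "city": "Calgary",
    "country": "Canada",
    "friends": [f"user{i}" for i in range(50)],
}
view = sanitize_profile(raw)
```

The tension the abstract describes is visible even here: each generalization weakens inference but also reduces the usefulness of the data to a legitimate extension.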
The enforcement model will include a declarative language for articulating transformations, as well as a model of computation capable of describing both transformations and access queries. The proposed model of computation is expressive enough for this purpose and satisfies the required properties. Finally, the proposed model will be evaluated by assessing the correctness of the theoretical framework and the enforcement model.
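One way a declarative language for transformations might look, offered purely as a hedged illustration and not the dissertation's actual syntax, is a set of per-attribute rules interpreted by a generic engine, so that policy (the rules) stays separate from mechanism (the interpreter):

```python
# Hypothetical declarative rules: each rule names an attribute, an action,
# and an optional argument. The rule set, not code, expresses the policy.
RULES = [
    ("email",     "suppress",   None),  # remove the attribute entirely
    ("birthdate", "generalize", 4),     # keep only the first 4 characters (the year)
    ("friends",   "truncate",   10),    # keep at most 10 list entries
]

def apply_rules(profile, rules):
    """Interpret declarative rules over a profile, producing a sanitized view."""
    view = dict(profile)
    for attr, action, arg in rules:
        if attr not in view:
            continue
        if action == "suppress":
            del view[attr]
        elif action == "generalize":
            view[attr] = view[attr][:arg]   # coarsen a string-valued attribute
        elif action == "truncate":
            view[attr] = view[attr][:arg]   # shorten a list-valued attribute
    return view
```

A declarative formulation like this is what makes the abstract's evaluation goals tractable: properties of the transformation can be analyzed from the rule set itself rather than from arbitrary code.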
Ahmadinejad, S. H. (2016). A View-Based Protection Model to Prevent Inference Attacks by Third-Party Extensions to Social Computing Platforms (Doctoral thesis, University of Calgary, Calgary, Canada). Retrieved from https://prism.ucalgary.ca. doi:10.11575/PRISM/25087