A View-Based Protection Model to Prevent Inference Attacks by Third-Party Extensions to Social Computing Platforms
atmire.migration.oldid | 4025 | |
dc.contributor.advisor | Fong, Philip W. L. | |
dc.contributor.author | Ahmadinejad, Seyed Hossein | |
dc.contributor.committeemember | Fong, Philip W. L. | |
dc.contributor.committeemember | Safavi-Naini, Reihaneh | |
dc.contributor.committeemember | Locasto, Michael E. | |
dc.contributor.committeemember | Bauer, Mark | |
dc.contributor.committeemember | Debbabi, Mourad | |
dc.date.accessioned | 2016-01-18T18:19:41Z | |
dc.date.available | 2016-01-18T18:19:41Z | |
dc.date.issued | 2016-01-18 | |
dc.date.submitted | 2016 | en |
dc.description.abstract | The recent significant growth in the popularity of Social Network Systems (SNSs) has raised serious concerns about user privacy. One such concern, known as inference attacks, is the leakage of users' private information from their public information. This dissertation identifies a more dangerous type of inference attack, in which users' private information is inferred by third-party extensions to SNS platforms. SNSs provide an Application Programming Interface (API) that third-party applications can use to access SNS user profiles and, in return, provide functionality to users. The systematic inference, by third-party extensions, of user information that is inaccessible through the SNS API from the information that is accessible through it is called an SNS API inference attack. Because of the large number of users who subscribe to third-party extensions, SNS API inference attacks could violate the privacy of millions of users even with a meager success rate. Moreover, SNS API inference attacks could serve as a building block for further security attacks (e.g., identification attacks). This work first evaluates the feasibility of SNS API inference attacks through an experiment in which sample inference algorithms are developed and executed against a sufficiently large number of real user profiles, and their success rates are assessed. Next, a view-based protection model is proposed to prevent SNS API inference attacks. This model allows users to share a sanitized version of their profiles with extensions. Sanitizing transformations must be designed to preserve both the privacy and the usefulness of user profiles. The proposed model comprises a theoretical framework that defines measures for evaluating the effectiveness of sanitizing transformations, paired with an enforcement model that shows how transformations can actually be designed to sanitize user profiles. The enforcement model includes a declarative language for articulating transformations, together with a model of computation that can describe both transformations and access queries; the proposed model of computation has sufficient expressive power and meets the required properties. Finally, the proposed model is evaluated by assessing the correctness of the theoretical framework and the enforcement model. | en_US |
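To make the abstract's idea of a "sanitized view" concrete, the following minimal Python sketch illustrates what a view-based sanitizing transformation could look like: the extension queries a sanitized view rather than the raw profile. This is purely illustrative; the field names, generalization rules, and profile layout are assumptions for the example, not the dissertation's actual design or language.

    # Illustrative sketch only: a view-based sanitizing transformation in the
    # spirit of the abstract. Field names and rules are hypothetical.
    from copy import deepcopy

    def sanitize_profile(profile: dict) -> dict:
        """Return a sanitized view of a user profile for a third-party extension.

        The view generalizes or removes attributes that could feed inference
        attacks while keeping the profile useful to the extension.
        """
        view = deepcopy(profile)

        # Generalize: expose only the birth year, not the full birth date.
        if "birthdate" in view:
            view["birth_year"] = view.pop("birthdate").split("-")[0]

        # Suppress: drop attributes the user wants withheld from extensions.
        for field in ("religion", "political_view", "phone"):
            view.pop(field, None)

        # Coarsen: reveal city-level location only.
        if "location" in view:
            view["location"] = view["location"].get("city", "undisclosed")

        return view

    # Usage: the extension's API query is answered from the view, never the raw profile.
    raw = {
        "name": "Alice",
        "birthdate": "1990-04-12",
        "religion": "private",
        "location": {"city": "Calgary", "neighbourhood": "Hillhurst"},
    }
    print(sanitize_profile(raw))
    # {'name': 'Alice', 'location': 'Calgary', 'birth_year': '1990'}

In the dissertation's terms, such a transformation would have to be assessed for how much it reduces the success rate of inference algorithms (privacy) against how much profile information remains available to the extension (usefulness).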
dc.identifier.citation | Ahmadinejad, S. H. (2016). A View-Based Protection Model to Prevent Inference Attacks by Third-Party Extensions to Social Computing Platforms (Doctoral thesis, University of Calgary, Calgary, Canada). Retrieved from https://prism.ucalgary.ca. doi:10.11575/PRISM/25087 | en_US |
dc.identifier.doi | http://dx.doi.org/10.11575/PRISM/25087 | |
dc.identifier.uri | http://hdl.handle.net/11023/2755 | |
dc.language.iso | eng | |
dc.publisher.faculty | Graduate Studies | |
dc.publisher.institution | University of Calgary | en |
dc.publisher.place | Calgary | en |
dc.rights | University of Calgary graduate students retain copyright ownership and moral rights for their thesis. You may use this material in any way that is permitted by the Copyright Act or through licensing that has been assigned to the document. For uses that are not allowable under copyright legislation or licensing, you are required to seek permission. | |
dc.subject | Computer Science | |
dc.subject.classification | Social networks | en_US |
dc.subject.classification | Privacy | en_US |
dc.subject.classification | Access Control | en_US |
dc.subject.classification | Inference attacks | en_US |
dc.title | A View-Based Protection Model to Prevent Inference Attacks by Third-Party Extensions to Social Computing Platforms | |
dc.type | doctoral thesis | |
thesis.degree.discipline | Computer Science | |
thesis.degree.grantor | University of Calgary | |
thesis.degree.name | Doctor of Philosophy (PhD) | |
ucalgary.item.requestcopy | true |