Browsing by Author "Askari, Mina"

Now showing 1 - 3 of 3
  • Privacy Consensus in Anonymization Systems Via Game Theory (Open Access)
    (2012-03-01) Karimi Adl, Rosa; Askari, Mina; Barker, Ken; Safavi-Naini, Reihaneh
    Privacy protection is a fundamental concern when personal data is collected, stored, and published. Several anonymization methods have been proposed to protect individuals' privacy before data publishing. Each anonymization method has at least one parameter to adjust the level of privacy protection. Choosing a desirable level of privacy protection is a crucial decision because it affects the volume and usability of the collected data differently. In this paper, we demonstrate how to use game theory to model the different and conflicting needs of the parties involved in making such a decision. We describe a general approach to solving such games and elaborate on the procedure using k-anonymity as a sample anonymization method. Our model provides a generic framework to find stable values for privacy parameters within each anonymization method, to recognize the characteristics of each anonymization method, and to compare different anonymization methods to distinguish the settings that make one method more appealing than the others.
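
A minimal sketch of the two ingredients this abstract names, in Python: a k-anonymity check, and a toy stand-in for the game-theoretic search for a stable privacy parameter k. The payoff functions and the compromise rule below are hypothetical illustrations, not the paper's actual model.

```python
# Illustrative sketch only: a k-anonymity check plus a toy stand-in for
# the game-theoretic choice of the privacy parameter k. The payoff
# functions and the selection rule are hypothetical.
from collections import Counter

def is_k_anonymous(records, quasi_identifiers, k):
    """A table is k-anonymous if every combination of quasi-identifier
    values is shared by at least k records."""
    groups = Counter(tuple(r[q] for q in quasi_identifiers) for r in records)
    return all(size >= k for size in groups.values())

def stable_k(k_values, collector_payoff, individual_payoff):
    """Toy compromise rule (not the paper's solution concept): pick the k
    that maximizes the smaller of the two parties' payoffs."""
    return max(k_values,
               key=lambda k: min(collector_payoff(k), individual_payoff(k)))

records = [
    {"age": "3*", "zip": "12***"}, {"age": "3*", "zip": "12***"},
    {"age": "4*", "zip": "13***"}, {"age": "4*", "zip": "13***"},
]
print(is_k_anonymous(records, ["age", "zip"], k=2))   # True

# Hypothetical payoffs: the data collector loses utility as k grows,
# while individuals gain privacy as k grows.
print(stable_k(range(2, 11),
               collector_payoff=lambda k: 10 - k,
               individual_payoff=lambda k: k - 2))    # 6
```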
  • Towards theoretical and practical evaluation of privacy and utility of data sanitization mechanisms (Open Access)
    (2012) Askari, Mina; Safavi-Naeini, Reyhaneh; Barker, Kenneth E.
    Massive data collection, aggregation, and analysis about individuals on the Internet raises the fundamental issue of privacy protection. Releasing collected data is often beneficial for research, testing, marketing, decision making, and data mining. However, published data can violate individuals' privacy, especially when aggregated with other sources of data. In response to privacy concerns, and to ensure the privacy of the individuals in the published dataset, data are sanitized by applying specific operations prior to publication. The cost of performing these privacy operations on the original collected data is the loss of some information. Hence, data utility is another important factor that should be considered in data sanitization mechanisms. In this thesis, we focus primarily on the privacy and utility issues of sanitization mechanisms. There are several sanitization mechanisms with different notions of privacy and utility. To be able to measure, set, and compare the level of privacy protection and utility of these mechanisms, these different mechanisms need to be translated into a unified framework for evaluation. In this thesis, a thorough theoretical and empirical investigation of the privacy and utility of sanitization mechanisms in non-interactive data release is carried out by developing two frameworks. Furthermore, we use the specifications of several sanitization mechanisms to evaluate our frameworks. We first propose a novel framework that represents a mechanism as a noisy channel and evaluates its privacy and utility using information-theoretic measures. We show that the deterministic publishing property used in most of these mechanisms reduces privacy guarantees and causes information to leak. We also show that by using this framework we can compute a sanitization mechanism's utility from the point of view of a data user. By formalizing the adversary's and data user's background knowledge, we demonstrate their significant effect on these metrics. We use k-anonymity, a popular sanitization mechanism, as an example and use the framework to analyze the privacy and utility offered by the mechanism. We then provide a mining framework that can be specialized to specific scenarios, modeling privacy and usefulness notions and quantifying their levels for a given dataset. This framework uses a definition of the utility of mining tasks that data providers can use to measure and compare the utility of data mining results obtained from the original and sanitized datasets. This provides a decision support mechanism for data providers to select appropriate sanitization mechanisms. This utility definition is general and captures the information obtained by any data user. The power of the framework is in its adaptability to capture various notions of privacy, utility, and adversarial power for comparing sanitization systems in a particular setting.
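
A minimal sketch of the noisy-channel view the thesis describes, assuming sanitization is modeled as a channel P(published | original) and leakage is measured as mutual information. The channel matrices below are hypothetical; the deterministic one leaks the full bit, matching the abstract's point that deterministic publishing weakens privacy guarantees.

```python
# Minimal sketch of the noisy-channel framing, assuming sanitization is
# a channel P(published | original). Leakage is measured as mutual
# information; the channel matrices here are hypothetical.
import math

def mutual_information(p_x, channel):
    """I(X;Y) in bits, given input distribution p_x and rows P(y|x)."""
    p_y = [sum(p_x[x] * channel[x][y] for x in range(len(p_x)))
           for y in range(len(channel[0]))]
    bits = 0.0
    for x, px in enumerate(p_x):
        for y, p_y_given_x in enumerate(channel[x]):
            if px > 0 and p_y_given_x > 0:
                bits += px * p_y_given_x * math.log2(p_y_given_x / p_y[y])
    return bits

p_x = [0.5, 0.5]                           # prior over a binary sensitive value
deterministic = [[1.0, 0.0], [0.0, 1.0]]   # each input maps to one output
randomized    = [[0.8, 0.2], [0.2, 0.8]]   # output flipped 20% of the time

print(mutual_information(p_x, deterministic))  # 1.0 bit: full leakage
print(mutual_information(p_x, randomized))     # ~0.278 bits
```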
  • Utility of Knowledge Discovered from Sanitized Data (Open Access)
    (2008-09-30) Sramka, Michal; Safavi-Naini, Reihaneh; Denzinger, Jorg; Askari, Mina; Gao, Jie
    While much attention has been paid to data sanitization methods with the aim of protecting users' privacy, far less emphasis has been placed on the usefulness of the sanitized data from the viewpoint of knowledge discovery systems. We consider this question and ask whether sanitized data can be used to obtain knowledge that is not defined at the time of the sanitization. We propose a utility function for knowledge discovery algorithms, which quantifies the value of the knowledge from the perspective of users of the knowledge. We then use this utility function to evaluate the usefulness of the extracted knowledge when knowledge building is performed over the original data, and compare it to the case when knowledge building is performed over the sanitized data. Our experiments use an existing cooperative learning model of knowledge discovery and medical data, anonymized and perturbed using two widely known sanitization techniques: ε-differential privacy and k-anonymity. Our experimental results show that although the utility of sanitized data can be drastically reduced, and in some cases completely lost, there are cases where the utility can be preserved. This confirms our strategy of looking at triples consisting of a utility function, a sanitization mechanism, and a knowledge discovery algorithm that are useful in practice. We categorize a few instances of such triples based on the usefulness obtained from experiments over a single database of medical records. We discuss our results and show directions for future work.
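
A minimal sketch of the comparison strategy this abstract describes, reduced to a single numeric query: mine the same "knowledge" (here, a mean) from original and sanitized data, then score the sanitized answer against the original. The Laplace mechanism shown is the standard way to answer one numeric query with ε-differential privacy; the dataset, query, and utility score are hypothetical.

```python
# Illustrative sketch, not the paper's experiments: compare the "knowledge"
# (here just a mean query) mined from original vs. sanitized data. The
# Laplace mechanism gives epsilon-differential privacy for a single
# numeric query; the data and the utility score are hypothetical.
import math
import random

def laplace_mechanism(true_value, sensitivity, epsilon):
    """Return true_value + Laplace(scale = sensitivity / epsilon) noise,
    sampled via the inverse-CDF method."""
    u = random.random() - 0.5
    scale = sensitivity / epsilon
    return true_value - scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))

random.seed(0)
ages = [random.uniform(18, 90) for _ in range(1000)]
true_mean = sum(ages) / len(ages)
# One record changes the mean of n values in [18, 90] by at most (90-18)/n.
sensitivity = (90 - 18) / len(ages)

for eps in (0.001, 0.01, 0.1, 1.0):
    sanitized_mean = laplace_mechanism(true_mean, sensitivity, eps)
    # Toy utility score: relative error of the answer mined from
    # sanitized data. Utility collapses for tiny epsilon but is largely
    # preserved for larger epsilon, echoing the abstract's finding.
    print(f"epsilon={eps}: relative error = "
          f"{abs(sanitized_mean - true_mean) / true_mean:.4%}")
```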
