Guess Me If You Can: A Visual Uncertainty Model for Transparent Evaluation of Disclosure Risks in Privacy-Preserving Data Visualization

Minimization of disclosure risks is a key challenge in publicly available visualizations that can potentially reveal personal information. Such risks are inherently dependent on the amount of information that adversaries can gain by manipulating visual representations and by using their background knowledge. Conventional risk quantification models proposed in the field of privacy-preserving data mining suffer from a lack of transparency in letting data owners control privacy parameters and understand their implications for disclosure risks. To fill this gap, we propose a visual uncertainty model for letting data owners understand the relationships between privacy parameters and vulnerable visualization configurations. Our main contribution is a probabilistic analysis of the disclosure risks associated with vulnerabilities in privacy-preserving parallel coordinates and scatter plots. We quantify the relationship among attack scenarios, adversarial knowledge, and the inherent uncertainty in cluster-based visualizations that can act as defense mechanisms. We present examples and a case study to demonstrate the effectiveness of the model.
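To make the abstract's core idea concrete, here is a minimal, hypothetical sketch (not code from the paper) of the kind of probabilistic disclosure-risk estimate it describes: records are aggregated into clusters before plotting, an adversary brings partial background knowledge about a target, and the risk is the worst-case probability of singling out a consistent record within a cluster, in the spirit of k-anonymity. All names here (`reidentification_risk`, `cluster_ids`, the toy records) are illustrative assumptions, not the paper's notation.

```python
# Illustrative sketch only: a k-anonymity-style worst-case re-identification
# probability when records are hidden inside visual clusters and the adversary
# holds partial background knowledge about the target.
from collections import defaultdict

def reidentification_risk(records, cluster_ids, known):
    """Worst-case probability of singling out a record consistent with `known`.

    records     : list of dicts, one per individual
    cluster_ids : cluster label per record (the visual aggregation / defense)
    known       : dict of attribute -> value the adversary already knows
    """
    # Group records by the cluster they are rendered into.
    clusters = defaultdict(list)
    for rec, cid in zip(records, cluster_ids):
        clusters[cid].append(rec)

    # Worst case over clusters: the adversary locates a cluster, then counts
    # how many of its members match the background knowledge.
    worst = 0.0
    for members in clusters.values():
        matches = [r for r in members
                   if all(r.get(a) == v for a, v in known.items())]
        if matches:
            worst = max(worst, 1.0 / len(matches))
    return worst

# Example: four records aggregated into two on-screen clusters.
records = [
    {"age": 34, "zip": "10001", "income": 60},
    {"age": 34, "zip": "10001", "income": 75},
    {"age": 51, "zip": "10002", "income": 80},
    {"age": 51, "zip": "10002", "income": 95},
]
cluster_ids = [0, 0, 1, 1]

# An adversary who only knows the age still faces two candidates (risk 0.5);
# knowing age and income pins the record down exactly (risk 1.0).
print(reidentification_risk(records, cluster_ids, {"age": 34}))                 # 0.5
print(reidentification_risk(records, cluster_ids, {"age": 34, "income": 75}))  # 1.0
```

The toy example surfaces the trade-off such a model is meant to expose to data owners: coarser clusters (more records per cluster) lower the worst-case risk, while richer adversarial knowledge raises it.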

Aritra Dasgupta, Robert Kosara, and Min Chen, Guess Me If You Can: A Visual Uncertainty Model for Transparent Evaluation of Disclosure Risks in Privacy-Preserving Data Visualization, Proceedings of EuroVis (Short Papers), 2019. DOI: 10.2312/evs.20191162

```bibtex
@inproceedings{Dasgupta:EuroVis:2019,
	year = {2019},
	title = {Guess Me If You Can: A Visual Uncertainty Model for Transparent Evaluation of Disclosure Risks in Privacy-Preserving Data Visualization},
	author = {Aritra Dasgupta and Robert Kosara and Min Chen},
	booktitle = {EuroVis 2019 - Short Papers},
	doi = {10.2312/evs.20191162},
	abstract = {Minimization of disclosure risks is a key challenge in publicly available visualizations that can potentially reveal personal information. Such risks are inherently dependent on the amount of information that adversaries can gain by manipulating visual representations and by using their background knowledge. Conventional risk quantification models proposed in the field of privacy-preserving data mining suffer from a lack of transparency in letting data owners control privacy parameters and understand their implications for disclosure risks. To fill this gap, we propose a visual uncertainty model for letting data owners understand the relationships between privacy parameters and vulnerable visualization configurations. Our main contribution is a probabilistic analysis of the disclosure risks associated with vulnerabilities in privacy-preserving parallel coordinates and scatter plots. We quantify the relationship among attack scenarios, adversarial knowledge, and the inherent uncertainty in cluster-based visualizations that can act as defense mechanisms. We present examples and a case study to demonstrate the effectiveness of the model.},
}
```