Browser Fingerprinting: How to Protect Machine Learning Models and Data with Differential Privacy?
DOI:
https://doi.org/10.14279/tuj.eceasst.80.1179
Abstract
As modern communication networks grow increasingly complex, manually maintaining an overview of the deployed software and hardware is challenging. Mechanisms such as fingerprinting are used to automatically extract information from ongoing network traffic and map it to a specific device or application, e.g., a browser. Active approaches directly interfere with the traffic and pose security risks, or are simply infeasible. Therefore, passive approaches are employed, which only monitor traffic but require a well-designed feature set, since less information is available. However, even these passive approaches pose privacy risks: browser identification from encrypted traffic may leak sensitive data, e.g., users' browsing histories. We propose a passive browser fingerprinting method based on explainable features and evaluate two privacy protection mechanisms, namely differentially private classifiers and differentially private data generation. With a differentially private Random Decision Forest, we achieve an accuracy of 0.877. If we instead train a non-private Random Forest on differentially private synthetic data, we reach an accuracy of up to 0.887, showing a reasonable trade-off between utility and privacy.
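The abstract only names the second protection mechanism (a non-private Random Forest trained on differentially private synthetic data) without describing how it is realized; the sketch below illustrates that two-step pipeline under stated assumptions. The naive product-of-marginals synthesizer, the assumed public feature range of [0, 1], the bin count, the dummy traffic features, and epsilon = 1.0 are illustrative choices made for this example, not the authors' setup, which is described in the full paper.

```python
# Illustrative sketch (not the authors' implementation): generate differentially
# private synthetic training data, fit an ordinary, non-private Random Forest on
# it, and evaluate on real held-out data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)


def dp_marginal_synthesizer(X, y, epsilon, n_bins=10, n_classes=2, n_samples=None):
    """Naive epsilon-DP synthesizer: noisy class counts plus noisy per-class,
    per-feature histograms, assuming features lie in the public range [0, 1].
    The budget is split evenly over the d feature histograms and the class
    histogram (sequential composition), so the whole release is epsilon-DP."""
    n, d = X.shape
    n_samples = n_samples or n
    eps_part = epsilon / (d + 1)              # budget per released histogram
    edges = np.linspace(0.0, 1.0, n_bins + 1)

    # Noisy class counts (each record changes one count by 1, sensitivity 1).
    class_counts = np.array([(y == c).sum() for c in range(n_classes)], dtype=float)
    class_counts += rng.laplace(scale=1.0 / eps_part, size=n_classes)
    class_probs = np.clip(class_counts, 0, None)
    class_probs /= class_probs.sum()

    # Noisy per-class, per-feature histograms, normalized to sampling distributions.
    hists = np.zeros((n_classes, d, n_bins))
    for c in range(n_classes):
        Xc = X[y == c]
        for j in range(d):
            counts, _ = np.histogram(Xc[:, j], bins=edges)
            noisy = np.clip(counts + rng.laplace(scale=1.0 / eps_part, size=n_bins), 0, None)
            total = noisy.sum()
            hists[c, j] = noisy / total if total > 0 else np.full(n_bins, 1.0 / n_bins)

    # Sample synthetic records from the noisy marginals.
    y_syn = rng.choice(n_classes, size=n_samples, p=class_probs)
    X_syn = np.empty((n_samples, d))
    for i, c in enumerate(y_syn):
        for j in range(d):
            b = rng.choice(n_bins, p=hists[c, j])
            X_syn[i, j] = rng.uniform(edges[b], edges[b + 1])
    return X_syn, y_syn


# Dummy stand-in for passively extracted, explainable traffic features and
# browser labels (the real feature set is described in the paper, not here).
X = rng.random((2000, 8))
y = (X[:, 0] + 0.1 * rng.standard_normal(2000) > 0.5).astype(int)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Step 1: DP synthetic training data. Step 2: non-private Random Forest on it.
X_syn, y_syn = dp_marginal_synthesizer(X_tr, y_tr, epsilon=1.0, n_classes=2)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_syn, y_syn)

# Step 3: evaluate on real, held-out data (the abstract reports up to 0.887).
print("accuracy on real test data:", accuracy_score(y_te, clf.predict(X_te)))
```

A marginal-based synthesizer discards correlations between features, so it trades utility for simplicity; the general point of the pipeline still holds: once the synthetic data has been released under differential privacy, any downstream classifier can be trained without further privacy cost.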
Published
2021-09-08
How to Cite
[1] K. Dietz, M. Mühlhauser, M. Seufert, N. Gray, T. Hoßfeld, and D. Herrmann, “Browser Fingerprinting: How to Protect Machine Learning Models and Data with Differential Privacy?”, eceasst, vol. 80, Sep. 2021.
Issue
Vol. 80 (2021)
Section
Articles
License
Copyright (c) 2021 Electronic Communications of the EASST
This work is licensed under a Creative Commons Attribution 4.0 International License.