Novel differentially private mechanisms for graphs

In this paper, we introduce new methods for releasing differentially private graphs. Our techniques are based on a new way to distribute noise among edge weights. More precisely, we add noise whose amplitude is calibrated per edge, and we optimize the distribution of the privacy budget among subsets of edges. The generic privacy framework that we propose captures all the privacy notions introduced so far in the literature for releasing graphs in a differentially private manner. Furthermore, experimental results on real datasets show that our methods outperform standard existing techniques, in particular in terms of utility preservation. In addition, these experiments show that our mechanisms guarantee ε-differential privacy for reasonable values of the privacy parameter ε, while preserving the spectral information of the input graph.
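The mechanisms themselves are not detailed in this abstract, but the core idea of edge-calibrated noise with a per-edge privacy budget split can be sketched as follows. This is a minimal illustration, not the paper's actual algorithm: the function names, the dictionary-based graph representation, and the uniform sensitivity of 1 per edge are all assumptions made for the example.

```python
import math
import random

def sample_laplace(scale):
    # Sample from Laplace(0, scale) via inverse CDF transform.
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def noisy_edge_weights(weights, budget_shares, sensitivity=1.0):
    """Release edge weights with edge-calibrated Laplace noise.

    weights: dict mapping edge -> weight.
    budget_shares: dict mapping edge -> epsilon_e, the slice of the total
        privacy budget assigned to that edge (sums to the overall epsilon).
    sensitivity: assumed per-edge sensitivity (illustrative choice of 1).

    Each edge receives noise of scale sensitivity / epsilon_e, so edges
    granted a larger share of the budget are perturbed less.
    """
    return {e: w + sample_laplace(sensitivity / budget_shares[e])
            for e, w in weights.items()}
```

In this toy setting, the budget optimization the abstract alludes to would correspond to choosing the `budget_shares` so that utility-critical subsets of edges receive lower noise, subject to the shares summing to the total budget ε.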