Ecosyste.ms: Awesome
An open API service indexing awesome lists of open source software.
https://github.com/illidanlab/dpgan
Source code of paper "Differentially Private Generative Adversarial Network"
Last synced: 3 months ago
- Host: GitHub
- URL: https://github.com/illidanlab/dpgan
- Owner: illidanlab
- Created: 2017-08-30T15:10:49.000Z (about 7 years ago)
- Default Branch: master
- Last Pushed: 2018-11-29T19:29:33.000Z (almost 6 years ago)
- Last Synced: 2024-04-08T02:27:02.520Z (7 months ago)
- Language: Python
- Homepage:
- Size: 21.9 MB
- Stars: 66
- Watchers: 6
- Forks: 28
- Open Issues: 3
- Metadata Files:
- Readme: README.md
Awesome Lists containing this project
- awesome-data-synthesis - DP-GAN 2 - Source code of paper "Differentially Private Generative Adversarial Network" - [Paper](https://arxiv.org/abs/1802.06739) (Data-driven methods / Tabular)
README
# Differentially Private Generative Adversarial Network
Liyang Xie, Kaixiang Lin, Shu Wang, Fei Wang, Jiayu Zhou

## Paper link

https://arxiv.org/abs/1802.06739

## Abstract
Generative Adversarial Network (GAN) and its variants have recently attracted intensive research interests due to their elegant theoretical foundation and excellent empirical performance as generative models. These tools provide a promising direction in the studies where data availability is limited. One common issue in GANs is that the density of the learned generative distribution could concentrate on the training data points, meaning that they can easily remember training samples due to the high model complexity of deep networks. This becomes a major concern when GANs are applied to private or sensitive data such as patient medical records, and the concentration of distribution may divulge critical patient information. To address this issue, in this paper we propose a differentially private GAN (DPGAN) model, in which we achieve differential privacy in GANs by adding carefully designed noise to gradients during the learning procedure. We provide rigorous proof for the privacy guarantee, as well as comprehensive empirical evidence to support our analysis, where we demonstrate that our method can generate high quality data points at a reasonable privacy level.
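The mechanism the abstract describes, adding calibrated noise to gradients during training, follows the general DP-SGD pattern: bound each example's influence by clipping its gradient, then add Gaussian noise scaled to that bound. The sketch below is a minimal NumPy illustration of that pattern only; `dp_gradient_step`, `clip_bound`, and `noise_scale` are hypothetical names and values, not taken from this repository or the paper's exact construction.

```python
import numpy as np

def dp_gradient_step(per_example_grads, clip_bound=1.0, noise_scale=1.1, rng=None):
    """Clip each per-example gradient to L2 norm <= clip_bound, sum,
    add Gaussian noise calibrated to the clip bound, and average.

    All parameter values are illustrative assumptions, not the paper's settings.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        # Scale down any gradient whose norm exceeds the bound,
        # so one example's contribution is bounded by clip_bound.
        clipped.append(g * min(1.0, clip_bound / (norm + 1e-12)))
    total = np.sum(clipped, axis=0)
    # Noise standard deviation is proportional to the sensitivity bound.
    noise = rng.normal(0.0, noise_scale * clip_bound, size=total.shape)
    return (total + noise) / len(per_example_grads)

# Example: eight fake per-example gradients for a 4-parameter model.
rng = np.random.default_rng(42)
grads = [rng.normal(size=4) for _ in range(8)]
print(dp_gradient_step(grads))
```

In the DPGAN setting, this treatment is needed only where training touches real data (the discriminator's updates); the generator is trained against the discriminator's output, so its parameters inherit the privacy guarantee by post-processing.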