forked from tensorflow/models
Commit 8690693: remove all code related to differential privacy (tensorflow#6045)
1 parent: d32d957
Showing 31 changed files with 1 addition and 5,412 deletions.
research/differential_privacy/README.md
@@ -1,22 +1,3 @@
 # Deep Learning with Differential Privacy
 
-Most of the content from this directory has moved to the [tensorflow/privacy](https://github.com/tensorflow/privacy) repository, which is dedicated to learning with (differential) privacy. The remaining code is related to the PATE papers from ICLR 2017 and 2018.
-
-### Introduction for [multiple_teachers/README.md](multiple_teachers/README.md)
-
-This repository contains code to create a setup for learning privacy-preserving
-student models by transferring knowledge from an ensemble of teachers trained
-on disjoint subsets of the data for which privacy guarantees are to be provided.
-
-Knowledge acquired by teachers is transferred to the student in a differentially
-private manner by noisily aggregating the teacher decisions before feeding them
-to the student during training.
-
-paper: https://arxiv.org/abs/1610.05755
-
-### Introduction for [pate/README.md](pate/README.md)
-
-Implementation of an RDP privacy accountant and smooth sensitivity analysis for the PATE framework. The underlying theory and supporting experiments appear in "Scalable Private Learning with PATE" by Nicolas Papernot, Shuang Song, Ilya Mironov, Ananth Raghunathan, Kunal Talwar, Ulfar Erlingsson (ICLR 2018)
-
-paper: https://arxiv.org/abs/1802.08908
-
+All of the content from this directory has moved to the [tensorflow/privacy](https://github.com/tensorflow/privacy) repository, which is dedicated to learning with (differential) privacy.
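The RDP accountant mentioned in the removed pate/README.md text tracks privacy loss in Rényi differential privacy terms and converts it to an (ε, δ) guarantee at the end. As a reference point only, here is a minimal sketch of that conversion, assuming the standard data-independent RDP bound of α/σ² per query for the Gaussian NoisyMax (GNMax) aggregator from the ICLR 2018 paper; the function names are illustrative, and none of this code is from the removed implementation.

```python
import numpy as np

def gnmax_rdp(num_queries, sigma, alpha):
    """Data-independent RDP of answering `num_queries` GNMax queries.

    Each query answered with Gaussian noise of std `sigma` satisfies
    RDP(alpha) <= alpha / sigma**2; RDP composes additively across queries.
    (Assumed bound, per the PATE ICLR 2018 paper.)
    """
    return num_queries * alpha / sigma**2

def rdp_to_dp(rdp_epsilon, alpha, delta):
    """Standard conversion from an RDP(alpha) bound to (epsilon, delta)-DP."""
    return rdp_epsilon + np.log(1.0 / delta) / (alpha - 1.0)

# Search over Renyi orders for the tightest (epsilon, delta) guarantee.
sigma, num_queries, delta = 40.0, 1000, 1e-5
epsilon = min(
    rdp_to_dp(gnmax_rdp(num_queries, sigma, alpha), alpha, delta)
    for alpha in range(2, 256)
)
print(f"epsilon ~ {epsilon:.2f} at delta = {delta}")
```

The removed pate/ code, now in tensorflow/privacy, additionally provided tighter data-dependent bounds and the smooth sensitivity analysis; the sketch above covers only the simple data-independent case.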
research/differential_privacy/multiple_teachers/README.md: 123 changes, 0 additions & 123 deletions (file deleted)
research/differential_privacy/multiple_teachers/aggregation.py: 130 changes, 0 additions & 130 deletions (file deleted)
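For context, the deleted aggregation.py implemented the noisy aggregation step described in the removed multiple_teachers README: teacher votes are tallied per class, Laplace noise is added to each count, and the student trains on the arg-max label. Below is a minimal sketch of that mechanism, assuming one predicted label per teacher; the names and signature are illustrative, not the removed module's API.

```python
import numpy as np

def noisy_max(teacher_labels, num_classes, lap_scale, rng=None):
    """Return the label with the largest Laplace-noised vote count.

    teacher_labels: 1-D int array, one predicted label per teacher.
    lap_scale: inverse noise scale; noise is drawn from Lap(1 / lap_scale).
    (Illustrative sketch, not the removed implementation.)
    """
    rng = rng or np.random.default_rng()
    votes = np.bincount(teacher_labels, minlength=num_classes).astype(float)
    votes += rng.laplace(loc=0.0, scale=1.0 / lap_scale, size=num_classes)
    return int(np.argmax(votes))

# One student query answered by an ensemble of 250 teachers over 10 classes.
rng = np.random.default_rng(seed=0)
teacher_labels = rng.integers(low=0, high=10, size=250)
print(noisy_max(teacher_labels, num_classes=10, lap_scale=0.05, rng=rng))
```

In this sketch, smaller lap_scale values inject more noise, giving stronger per-query privacy at the cost of label accuracy.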