What is differential privacy?

Differential privacy is a framework for measuring the privacy guarantees provided by an algorithm. Through the lens of differential privacy, we can design machine learning algorithms that responsibly train models on private data. Learning with differential privacy provides provable privacy guarantees, mitigating the risk that the synthetic data model or its output exposes sensitive training data. Intuitively, a model trained with differential privacy should not be noticeably affected by any single training example, or any small set of training examples, in its dataset.
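
This intuition has a standard formal statement. A randomized mechanism M is (ε, δ)-differentially private if, for every pair of datasets D and D′ differing in a single record, and every set of possible outputs S:

```latex
\Pr[\mathcal{M}(D) \in S] \;\le\; e^{\varepsilon}\,\Pr[\mathcal{M}(D') \in S] + \delta
```

Smaller values of ε and δ mean the output distribution changes very little when any one record is added or removed, which is precisely the intuition described above.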

Gretel is designed to help developers and data scientists create safe, artificial datasets that retain many of the same insights as the original dataset, with stronger guarantees around protecting personal data or secrets in the source data. Gretel’s implementation of differential privacy helps guarantee that individual secrets, or small groups of secrets, such as a credit card number inside structured or unstructured data fields, will not be memorized or repeated in the synthetic dataset. Gretel’s synthetic data library also helps defend against re-identification and joinability attacks, in which traditionally anonymized data is joined with another dataset, even one that has not been created yet, to re-identify users.
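
To ground the definition, here is a minimal, illustrative sketch of the classic Laplace mechanism applied to a counting query. This is a textbook example, not Gretel’s implementation (Gretel applies differential privacy during model training); the function name and dataset below are hypothetical.

```python
import numpy as np

def dp_count(records, predicate, epsilon=1.0):
    """Return a differentially private count of records matching `predicate`.

    A counting query has sensitivity 1: adding or removing one record
    changes the true count by at most 1. Adding Laplace noise with
    scale 1/epsilon therefore satisfies epsilon-differential privacy.
    """
    true_count = sum(1 for r in records if predicate(r))
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# Hypothetical example: privately count users over 40.
ages = [23, 45, 31, 52, 38, 61, 29]
print(dp_count(ages, lambda age: age > 40, epsilon=0.5))
```

A lower epsilon adds more noise and yields a stronger privacy guarantee at the cost of accuracy; the same privacy/utility trade-off governs differentially private model training.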
