How does Gretel-synthetics leverage differential privacy?
Gretel-synthetics uses differential privacy to defend against memorization while training on a private dataset. Informally: the output of a synthetic model trained on a dataset D containing one occurrence of a secret training record X should be nearly indistinguishable from the output of a model trained on a neighboring dataset D' that is identical except that X is removed. This gives us a mathematical assurance that the model did not memorize the secret.
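To make the "nearly indistinguishable" guarantee concrete, here is a minimal, illustrative sketch of the classic Laplace mechanism (not Gretel's implementation, which applies differential privacy during model training). It shows the core idea: a query whose answer changes by at most 1 when a single record is added or removed (sensitivity 1) is released with calibrated noise, so the presence or absence of the secret record X is hidden within the noise. The datasets, threshold, and function names below are hypothetical examples.

```python
import math
import random


def count_over_threshold(records, threshold):
    """A query with sensitivity 1: adding or removing one record
    changes the count by at most 1."""
    return sum(1 for r in records if r > threshold)


def laplace_mechanism(true_value, sensitivity, epsilon, rng):
    """Release true_value plus Laplace(0, sensitivity/epsilon) noise,
    which satisfies epsilon-differential privacy for this query."""
    scale = sensitivity / epsilon
    u = rng.random() - 0.5
    # Inverse-CDF sampling of the Laplace distribution
    return true_value - scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))


rng = random.Random(0)
epsilon = 1.0

D = [52, 48, 61, 75, 33]    # hypothetical dataset containing the secret record 75
D_prime = [52, 48, 61, 33]  # neighboring dataset: identical, but 75 removed

f_D = count_over_threshold(D, 70)         # true answer on D
f_Dp = count_over_threshold(D_prime, 70)  # true answer on D'

# The true answers differ by at most the sensitivity (1), so after adding
# Laplace noise the two output distributions differ by at most a factor
# of e^epsilon -- an observer cannot reliably tell whether X was present.
noisy_D = laplace_mechanism(f_D, 1, epsilon, rng)
noisy_Dp = laplace_mechanism(f_Dp, 1, epsilon, rng)
```

In Gretel's case, the same principle is applied to the gradients computed during training (as in DP-SGD) rather than to a single query, but the guarantee has the same shape: bounded sensitivity plus calibrated noise yields a provable limit on how much any one record can influence the model.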
Get Started
Ready to try Gretel?
Start making your job easier instantly.
Get started in just a few clicks with a free account.
- Join the Synthetic Data Community
Join our Discord to connect with the Gretel team and engage with our community.
- Read our docs
Set up your environment and connect to our SDK.