Differential Privacy and Synthetic Text Generation with Gretel: Making Data Available at Scale (Part 1)
Much of organizations' most valuable data exists in natural language formats, including customer feedback transcripts, call center logs, and internal reports and documents. Unfortunately, due to privacy concerns, this data is often locked in silos, accessible only to a few.
Fortunately, synthetic data provides a simple, scalable process that organizations can use to make any natural language or tabular data available across their enterprises. Gretel fine-tunes state-of-the-art large language models (LLMs) on sensitive data while providing mathematical privacy guarantees through a technique called differential privacy, applied during model fine-tuning. Gretel models can then be used to create any amount of synthetic data across a multitude of use cases, from enabling data access and analytics to creating training data for downstream ML/NLP tasks.
In this post, we'll dive into how Gretel can be used to create safe synthetic text data for a variety of use cases across an organization, and then compare the performance of the differentially private synthetic data against the real-world data it is based on.
Key Takeaways:
- Enabling GenAI in key industries: Gretel's approach to Differentially Private (DP) LLM training is particularly impactful in the finance, healthcare, and customer support sectors. It enables these industries to handle sensitive data securely, striking a crucial balance between maintaining privacy and utilizing data effectively.
- Near-perfect accuracy with privacy-focused methods: Our solution achieved downstream accuracy within 1% of non-private models, with stringent privacy measures (ε = 8) in place that protect against model memorization or replay of sensitive data.
- Safely generating synthetic text: Using DP for text generation ensures that models do not memorize or replay sensitive data, a critical feature for maintaining confidentiality.
- Balancing training times and accuracy: Our research suggests that a strategic investment in additional compute resources to train a DP LLM is worthwhile.
- Analyzing and assuring data quality: Whether you're analyzing product feedback, creating training examples for Large Language Model (LLM) and Retrieval-Augmented Generation (RAG) training, or addressing other general use cases, Gretel's Synthetic Quality Score (SQS) provides a reliable measure of data utility without compromising private data.
Industry Use Cases for Differentially Private Synthetic Text
In healthcare and life sciences, for example, synthetic data enables the safe sharing of electronic health records (EHR) to improve patient outcomes. By using synthetic patient data, such as initial diagnoses and patient descriptions of symptoms, researchers can explore new treatments and care strategies without compromising patient privacy. In the financial sector, synthetic data helps in building robust fraud detection systems and chatbots trained on customer chat logs, while ensuring customer privacy remains intact. The broad applicability of synthetic data brings with it the challenge of maintaining quality and reliability, especially when used as a proxy for real-world datasets.
Gretel's Differentially Private Approach
Differential privacy is a technique that protects individual data points while enabling models to learn overall patterns and distributions. Gretel has pioneered applying differential privacy during language model training since our first release in March 2020, with over 900k SDK downloads of the gretel-synthetics library to date.
Our approach uses the Differentially Private Stochastic Gradient Descent (DP-SGD) algorithm (Song et al., 2013; Abadi et al., 2016). DP-SGD clips per-example gradients and adds calibrated noise to the optimization process, preventing memorization of any single training example. This provides mathematical guarantees that no individual's personal information can be traced or revealed, while still allowing the model to learn trends, insights, and distributions from the real-world data.
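To make the mechanics concrete, here is a minimal sketch of a single DP-SGD step in PyTorch. It illustrates the algorithm (per-example clipping, then Gaussian noise) rather than Gretel's implementation; production training typically relies on a DP library such as Opacus for efficient per-example gradients and privacy accounting.

```python
import torch

def dp_sgd_step(model, loss_fn, batch_x, batch_y, optimizer,
                max_grad_norm=1.0, noise_multiplier=1.0):
    """Illustrative DP-SGD step: clip each example's gradient to a fixed
    L2 norm, sum, add Gaussian noise, then apply the averaged update."""
    params = [p for p in model.parameters() if p.requires_grad]
    summed_grads = [torch.zeros_like(p) for p in params]

    for x, y in zip(batch_x, batch_y):
        model.zero_grad()
        loss = loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0))
        loss.backward()
        # Clip this example's gradient so no single record dominates the update.
        total_norm = torch.norm(torch.stack([p.grad.norm(2) for p in params]))
        clip_coef = min(1.0, max_grad_norm / (float(total_norm) + 1e-6))
        for g, p in zip(summed_grads, params):
            g.add_(p.grad, alpha=clip_coef)

    # Add noise calibrated to the clipping bound, average, and take a step.
    for p, g in zip(params, summed_grads):
        noise = torch.normal(0.0, noise_multiplier * max_grad_norm, size=g.shape)
        p.grad = (g + noise) / len(batch_x)
    optimizer.step()
```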
Differential privacy protects individual data points through two key parameters: epsilon (ε) and delta (δ). These parameters determine the strength of the privacy guarantee. Epsilon limits the maximum difference in output that can be caused by a single data point. Lower epsilon provides stronger privacy, making individuals less distinguishable. Delta represents the probability that the guarantee could fail. Lower delta values result in tighter assurances.
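Formally, a randomized training mechanism M satisfies (ε, δ)-differential privacy if, for any two datasets D and D′ that differ in a single record and any set of possible outputs S:

Pr[M(D) ∈ S] ≤ e^ε · Pr[M(D′) ∈ S] + δ

In other words, e^ε bounds how much any one individual's record can shift the probability of any training outcome, and δ caps the probability that this bound fails.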
For our differentially private model, we chose privacy parameters of ε=8 and δ=10^-5, resulting in solid privacy guarantees while retaining high data utility. For context, the US Census Bureau uses an ε=17.14 for US person data (Census Bureau, 2021), and Google uses an ε=6.92 and δ=10^-5 along with federated learning for their next-word prediction models on Android devices (Google, 2022). Apple's Safari browser uses ε values between 8 and 16 (Apple, 2021). The chosen configuration with ε=8 balances generating usable synthetic data without sacrificing individual privacy.
Gretel GPT combines state-of-the-art large language models with differential privacy protections applied during training via the DP-SGD algorithm, embedding formal privacy assurances directly into the machine learning pipeline. This enables high-quality synthetic text generation that approaches the accuracy of real data while rigorously protecting individual privacy. Because differential privacy is embedded directly into the training pipeline, the resulting synthetic data can be used without restrictions, including aggregating datasets or training downstream machine learning models.
In Part 2, we will dive deeper into testing differential privacy protections, as well as analyzing attempted attacks. To learn more, check out our video series on differential privacy, as well as practical "red teaming" tests and attacks on Gretel's tabular and language models. Stay tuned!
Evaluating Differentially Private Synthetic Text
We fine-tuned the Mistral-7B large language model on 1 million Yelp reviews to create two models:
- Non-private baseline
- Differentially private (ε = 8, δ = 10^-5)
We compared synthetic text quality from each model by measuring:
- Evaluation loss
- Accuracy categorizing reviews by business type
- Performance on downstream tasks, such as analytics and use as ML training data
To evaluate how well synthetic text generalizes, we used Gretel's Synthetic Quality Score (SQS), an evaluation metric tailored to comparing synthetic text against real-world text.
Key experimental parameters (see the configuration sketch after this list):
- Gretel GPT Service
- 8x A100 80GB GPUs
- 4-bit Mistral-7B Quantized Model
- Batch Size 32
- Learning Rate 1e-4
- DP-SGD for differential privacy
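For readers who want a sense of what such a setup looks like in code, below is a minimal configuration sketch using Hugging Face transformers, peft, and bitsandbytes. The model checkpoint name, LoRA settings, and the suggestion of a DP-aware trainer for the DP-SGD step are illustrative assumptions; it is not Gretel GPT's internal implementation.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

# 4-bit (QLoRA-style) quantization of the Mistral-7B base model.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-v0.1", quantization_config=bnb_config)
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")

# Low-rank adapters so only a small fraction of weights are actually trained.
lora_config = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05,
                         task_type="CAUSAL_LM")
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()

# Hyperparameters from the experiment above; the DP-SGD step itself
# (per-example clipping + noise) would be supplied by a DP-aware trainer
# such as Opacus or dp-transformers (assumed here for illustration).
batch_size = 32
learning_rate = 1e-4
target_epsilon, target_delta = 8.0, 1e-5
```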
By benchmarking against a non-private fine-tuned model, we can quantify how closely differential privacy preserves accuracy and utility across tasks. For this experiment, we drew inspiration from recent work by Yue et al. (ACL 2023), which aligns with Gretel's approach to differentially private language models.
Example training record (real world). We'll choose a rating of "3" out of 1-5 because a middle-level sentiment is often the most difficult to capture.
Category: Restaurants | Stars: 3 This place isn't to bad but it's really loud. Big hot spot for tourists and the food was meh, ok. I think for the area you could find something a little better, it reminded me of a chain restaurant with all of the "stuff" on the walls. Very busy place and not to original in my eyes.
Example differentially private synthetic text, matching the category and stars of the review above but with content generated by our fine-tuned model:
Category: Restaurants | Stars: 3 Too much "Hurry up and order" for me. They are so into making the line move and each person works around the other that they mixed up a couple of patrons' orders while we were in line. I'll come back when they slow down because I love the food.
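As an illustration of how such conditional samples can be drawn, the fine-tuned model can be prompted with the same "Category: ... | Stars: ..." prefix used during training. This sketch reuses the model and tokenizer from the configuration sketch above and standard Hugging Face generation parameters, not Gretel GPT's API.

```python
prompt = "Category: Restaurants | Stars: 3 "
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Sample a synthetic review conditioned on the category/stars prefix.
output_ids = model.generate(
    **inputs,
    max_new_tokens=200,
    do_sample=True,
    top_p=0.95,
    temperature=0.8,
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```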
Evaluating Training Accuracy
We first compared training loss between the differentially private (DP) model and non-DP baseline. Initially, the DP model shows higher loss due to the complexity of learning under privacy constraints.
However, as training progresses, the DP model loss converges towards the non-private version. By the end, there is only a slight difference in training loss between the two models. This demonstrates the ability of differential privacy to enable effective learning over time.
Downstream Task Performance
Next, we evaluated the accuracy of each model in categorizing reviews by business type. The figure below visualizes downstream categorization accuracy at each training step.
Key Findings:
- Initially, the DP model sees an accuracy drop compared to the non-DP baseline.
- As training continues, the accuracy gap closes significantly.
- Eventually, the DP model performs within 1% of the non-DP model's accuracy while maintaining strong privacy guarantees (ε = 8, δ = 10^-5), nearly achieving parity on this practical downstream application.
The analysis confirms that although accuracy may be lower at the start, differentially private models can quickly close the gap and nearly match non-private models on real-world tasks.
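One common way to quantify this kind of downstream utility is a "train on synthetic, test on real" setup: fit a simple classifier on synthetic reviews and measure its accuracy predicting business category on held-out real reviews. Below is a minimal scikit-learn sketch of that protocol; it is illustrative and the variable names (synthetic_texts, real_texts, etc.) are placeholders, not necessarily the exact evaluation pipeline used here.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.pipeline import make_pipeline

# synthetic_texts / synthetic_labels: generated reviews and their category tags
# real_texts / real_labels: held-out real reviews and their categories
clf = make_pipeline(
    TfidfVectorizer(max_features=50_000, ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
clf.fit(synthetic_texts, synthetic_labels)   # train on synthetic data
preds = clf.predict(real_texts)              # evaluate on real data
print(f"Train-on-synthetic accuracy: {accuracy_score(real_labels, preds):.3f}")
```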
Analyzing Synthetic Data Quality and Versatility
The true strength of synthetic data lies in its versatility for various downstream use cases beyond basic generation and classification.
To evaluate quality across a breadth of potential applications, we generated synthetic reviews using our differentially private model (ε = 8) prompted with category and rating tags. This ensures the distribution matches the real data. We then compared against the original reviews using Gretel's Synthetic Text Quality Score (SQS), which analyzes structure, insight quality, and distribution similarity.
Synthetic Quality Score Analysis
The results benchmark the differentially private synthetic text against real-world data across metrics relevant for diverse analytics, training, and business applications. This reveals the adaptability of private synthetic data to unlock value across a variety of practical use cases.
In the first page of the report above, we can see that 500 records of synthetic text sampled from the differentially private model are compared against 500 records of text from the real-world data. With an SQS of 86 and a text semantic similarity score of 94 when compared to real-world data, we can expect the synthetic text corpus to be suitable for various downstream tasks, including product feedback analysis and ML and NLP tasks such as model training.
Highly similar text semantic distributions in the PCA scatter matrix indicate that the synthetic model correctly learned the semantics and distributions of the real-world data it was trained on.
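As an illustration of this kind of comparison (not the SQS implementation itself), semantic distributions for two corpora can be compared by embedding each text and projecting both sets of embeddings onto shared principal components. The corpus variables are assumed placeholders.

```python
import numpy as np
from sentence_transformers import SentenceTransformer
from sklearn.decomposition import PCA

# real_texts and synthetic_texts: lists of review strings (assumed available)
encoder = SentenceTransformer("all-MiniLM-L6-v2")
real_emb = encoder.encode(real_texts)
synth_emb = encoder.encode(synthetic_texts)

# Fit PCA on the combined embeddings so both corpora share the same axes.
pca = PCA(n_components=2)
pca.fit(np.vstack([real_emb, synth_emb]))
real_2d, synth_2d = pca.transform(real_emb), pca.transform(synth_emb)
# Overlapping scatter plots of real_2d and synth_2d suggest the synthetic
# corpus covers the same semantic regions as the real one.
```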
Interestingly, the differentially private model tends to generate more text than the real-world training data, with an average of 14.7 sentences per review vs. 13.3, and an average of 8.6 words per sentence vs. 7.5. Note that this differs from the observations of Yue et al. (ACL 2023), who saw a length truncation effect in DP data relative to its non-DP counterparts. This could be due to model generation parameters, or to differences between the Mistral-7B base LLM used in our experiment and the GPT-2 model used by Yue et al.
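Length statistics like these can be computed with a simple sentence and word count per review. Here is a rough sketch using NLTK tokenizers, again assuming the review texts are available as Python lists.

```python
import nltk
from nltk.tokenize import sent_tokenize, word_tokenize

nltk.download("punkt", quiet=True)

def length_stats(texts):
    """Return (avg sentences per review, avg words per sentence)."""
    sent_counts, words_per_sent = [], []
    for text in texts:
        sentences = sent_tokenize(text)
        sent_counts.append(len(sentences))
        words_per_sent.extend(len(word_tokenize(s)) for s in sentences)
    return (sum(sent_counts) / len(sent_counts),
            sum(words_per_sent) / len(words_per_sent))

print("synthetic:", length_stats(synthetic_texts))
print("real:     ", length_stats(real_texts))
```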
Interested in synthetic text quality research? Explore our journey in creating the open SQS metric and its methodology in our detailed documentation, complete with deep dives and code examples.
Efficient Compute and Costs
When scaling generative AI, cost and compute considerations are crucial, especially for large language models requiring substantial resources. Gretel GPT employs the QLoRA technique, reducing precision to 4 bits. This allows generating high-quality synthetic data while fine-tuning only ~1% of total model weights.
With GPU acceleration, the overhead of differential privacy mechanisms like noise injection and gradient clipping is minimal, though additional training passes are needed to match non-private accuracy.
For example, processing 1 million reviews (632 MB) took 15.2 hours on 8x A100 GPUs. Using public cloud pricing at $15.90 per hour, this results in a compute cost of ~$240 per 1M reviews ($0.38 per MB) for differentially private model training.
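The cost figures follow directly from the training time and instance pricing:

```python
hours = 15.2             # training time on 8x A100 80GB GPUs
price_per_hour = 15.90   # public cloud price for the 8-GPU instance, USD
dataset_mb = 632         # size of the 1M-review training set

total_cost = hours * price_per_hour      # = 241.68, i.e. ~$240 per 1M reviews
cost_per_mb = total_cost / dataset_mb    # = 0.38 per MB
print(f"total = ${total_cost:.2f}, per MB = ${cost_per_mb:.2f}")
```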
The small subset of updated weights also yields an efficient 22 MB model adapter, versus the 632 MB of training data. This enables storage and transfer efficiencies worth further exploration.
Unlocking Data with Differentially Private Synthetic Text
Differentially private synthetic data provides a unique avenue for organizations to share sensitive data more broadly, enabling new analytics, models, and insights without compromising privacy.
Gretel GPT with differentially private fine-tuning is currently available in private preview to select partners. To learn more or discuss use cases for unlocking the value of your organization's text data securely, please reach out to us at hello@gretel.ai.