Privacy
Quantifying PII Exposure in Synthetic Data
How to measure and minimize personally identifiable information (PII) risk in synthetic data.

Privacy-preserving AI development with Azure & Gretel
Leveraging Gretel's privacy-preserving synthetic data generation platform to fine-tune Azure OpenAI Service models in the financial domain.

Generate Differentially Private Synthetic Text with Gretel GPT
Safely leverage sensitive or proprietary text data for advanced language model training and fine-tuning.

Red Teaming Synthetic Data Models
How we implemented a practical attack on a synthetic data model to validate its ability to protect sensitive information under different parameter settings.

Fine-tuning Models for Healthcare via Differentially-Private Synthetic Text
How to safely fine-tune LLMs on sensitive medical text for healthcare AI applications using Gretel and Amazon Bedrock.

Gretel Unlocks PII Detection with Synthetic Financial Document Dataset
Gretel releases a new synthetic financial document dataset to empower AI developers in building customized and highly performant sensitive data detection systems.

Synthesizing Private Patient Data with Gretel: A Step-by-Step Guide
Create privacy-safe synthetic patient data with Gretel, ensuring compliance, secure sharing, and actionable insights for AI and machine learning in healthcare.

Test Data Generation: Uses, Benefits, and Tips
Test data generation is the process of creating new data that replicates an original dataset. Here’s how developers and data engineers use it.

Teaching large language models to zip their lips with RLPF
Gretel introduces Reinforcement Learning from Privacy Feedback (RLPF), a novel approach to reduce the likelihood of a language model leaking private information.

Gretel’s New Data Privacy Score
Gretel releases industry standard synthetic tabular data privacy evaluation and risk-based scoring system.