What is Privacy Engineering?
In this post, we will dive into what privacy engineering is, why it’s important, and some of the core use cases we are seeing that are enabled by privacy.
What is privacy engineering?
Let’s start with a definition: Privacy engineering is the systematic application of engineering concepts to protecting sensitive information.
Privacy engineering draws on the software and security engineering disciplines, but differs in that it serves two equally important stakeholders: the business teams and applications that need access to the information, and the customers whose personal data is being used. Business and consumer privacy use cases are increasingly blending together; in a 2019 survey of 2,601 adults ages 18-44, Cisco found that “over 32% of respondents both care about privacy, are willing to act, and have done so by switching to new providers over data-sharing policies”.
Today, developers on service teams at AWS and Google can instantly check out or query anonymized datasets to test ideas, without waiting weeks or even months for compliance approvals to work with sensitive data. Investments in privacy technology have enabled these companies to learn from customers, innovate, and launch new products at a previously unheard-of pace. For example, AWS currently offers over 200 different features and services, with over 27 (or 13%) of them launched in 2021. This growth is largely enabled by wide access to anonymized data, which yields a strong signal about customers’ needs.
While many of today’s successful privacy programs rely on manual, time-consuming processes, scaling privacy further, even at the world’s most successful tech companies, requires more automation and a new way of thinking. Gartner estimates that companies will spend $8 billion worldwide on privacy tooling in 2022, and that within the next three years, 40% of privacy compliance technology will rely on AI.
How do we scale privacy so that any company or developer can get it right? Privacy engineering makes privacy an engineering problem: one built into the fabric of developer code and workflows, and therefore one that can be automated and scaled the same way we have scaled building software.
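To make this concrete, here is a minimal sketch of what “privacy built into developer workflows” can look like in practice: a step in a data pipeline that pseudonymizes fields tagged as personal data before a dataset ever reaches a developer. The field names, salt handling, and function names are hypothetical illustrations, not a reference to any specific company’s tooling.

```python
import hashlib

# Hypothetical illustration: an automated anonymization step that could
# run in a data pipeline before developers query a dataset. The salt
# and field names below are made up for this sketch; real systems would
# manage and rotate the salt securely.
SALT = "example-rotation-salt"

def pseudonymize(value: str) -> str:
    """Replace an identifier with a truncated, salted one-way hash."""
    return hashlib.sha256((SALT + value).encode()).hexdigest()[:16]

def anonymize_record(record: dict, pii_fields: set) -> dict:
    """Pseudonymize fields marked as personal data; pass the rest through."""
    return {
        key: pseudonymize(str(value)) if key in pii_fields else value
        for key, value in record.items()
    }

record = {"email": "jane@example.com", "plan": "pro", "region": "us-east-1"}
safe = anonymize_record(record, pii_fields={"email"})
print(safe["plan"])  # non-sensitive fields pass through unchanged
```

Because the transformation is deterministic for a given salt, developers can still join and aggregate across pseudonymized records, which is what makes this kind of automated step useful for experimentation without exposing raw identifiers.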
Why privacy engineering is important
Privacy itself has many use cases. For businesses today, getting privacy right is becoming a critical part of earning Trust with customers, which can lead to long-term relationships and a willingness to share information in the future that can be used to differentiate and build better services. Privacy has become a powerful differentiator for the products we use every day: see Apple’s WWDC 2020 keynotes, or any of the examples of how not to do it right.
For consumers, privacy enables Control: the ability to determine how your information can be used and shared. Many companies today are building their business models on this concept: see Signal, DuckDuckGo, and Medium (which explicitly makes its revenue from subscriptions and does not sell users’ personal information).
Business models enabled by privacy engineering
In the past two years, we have talked to hundreds of companies about their use cases for privacy and the new business models and opportunities that privacy engineering can unlock. Here are some of the top use cases we have seen by industry.
Technology companies are looking for ways to enable broader and faster access to data while maintaining customer trust and privacy. Similar to the use case for enabling immediate developer access to anonymized data, building processes for faster data access and experimentation is viewed as a competitive advantage when bringing new services and features to market.
Financial companies are interested in creating marketplaces where algorithms can be developed on freely available synthetic data, and then sold or licensed to financial institutions that have access to the real data.
Health-tech companies are looking for ways to enable information sharing and monetize data while protecting the privacy of their patients, and minimizing biases that could be inadvertently learned by algorithms trained on shared datasets.