Synthesize 2023: How Synthetic Data is Evolving Information Security
Exploring synthetic data's transformative potential in information security
Transcription
Al Golshan (00:19):
Next, I'd like to welcome our panelists for our Info Sec session. I want to thank Chris Hymes from Riot Games, one of our great customers who has given us tons of great feedback, as well as panelists Alex from Remitly, Chris Wheeler from Morgan Stanley, and Omer from Snowflake. This should be a really exciting conversation about how synthetic data is actually revolutionizing the entire cyber security industry, something close to our heart that we see quite often on a day-to-day basis. So I'd like to welcome the panel for this one.
Chris Hymes (00:47):
Hey, my name is Chris Hymes, I am the chief information security officer at Riot Games. And I'm super excited to help facilitate what will hopefully be an amazing conversation on the world of information security, data privacy, synthetic data and you name it. I have three awesome guests here with me and I'm going to let them introduce themselves. Maybe we'll jump to you, Alex.
Alex (01:11):
Hey, I'm Alex Maestretti, I am the CISO at Remitly. We are a financial services company for immigrants and their families.
Chris Hymes (01:19):
Awesome, thanks, Alex. How about you, Omer?
Omer (01:21):
Yeah. Hi, everybody. I'm glad to be here, I'm Omer Singer. I lead cyber security strategy at Snowflake. We are a data cloud company, and we're playing around with synthetic data stuff, so I look forward to chatting about it.
Chris Hymes (01:37):
Thanks, dude. Chris, how about you, man?
Chris Wheeler (01:40):
Hi, everyone. My name is Chris Wheeler, I am a cyber security specialist at Morgan Stanley. We are a large financial services firm.
Chris Hymes (01:46):
Cool. Well, what you have here are four people who are not experts in moderation, nor in giving talks. But we're hopefully going to just have a pretty good conversation amongst us. And I think the area I wanted to start with first: there is a lot of confusion all the time around, "What the heck do security teams do nowadays? How do they blend with, and how are they different from, data privacy teams and compliance teams in a modern enterprise?"
(02:13)
So I'm just curious to get perspectives from each of you, who lead teams and have led teams in the past at a lot of different companies. What is the security team's role in a modern enterprise? Maybe Chris, we'll start with you since you were last in the intros.
Chris Wheeler (02:28):
Sure. So for our company, imagine we're doing things all over the business: things proactively, things in response to given events, fraud prevention, cloud. There are all sorts of different things that a security team is being asked for. More and more we're seeing that be in both cloud and on-premises environments. So really all over the map, and a mix of, like I said, proactive and responsive type capabilities.
Omer (02:54):
Yeah, and I want to jump in on that, because I think that proactive could be tactical, in saying, "Okay, there is a certain threat that we want to be proactive in detecting and mitigating." But I think there is also a proactive strategy that we see emerging. There was a simpler time when we were operating on-prem with a perimeter, remember those days?
(03:14)
So I think the task of the security team, the strategies, were built around that environment: we had endpoints, we had servers, we had the network. And then this cloud thing started happening. And I think there was a point in time where there was a sense that the cloud would have some built-in security to it, because it was being provided as a service, and so the security team could keep focusing on what it had been doing, while cloud would be delivered to us as this secure service environment that the company could just use.
(03:47)
And then we had a rude awakening where actually that's not the case. There is this shared responsibility model, and we do need to be thoughtful and have a strategy around how, as the security organization, we're going to lock down this cloud environment, monitor it, and be ready to respond to threats and to breaches in the cloud environment.
(04:06)
And you might say I'm crazy and naïve now, why would anybody think that? But actually, if you look at how many security teams today think about data and the data cloud, data platforms supporting all the interesting things that the enterprise is doing with data, I think there is some of that naivete there as well, and an assumption that there is RBAC and these environments are pretty locked down. And I think there is now this awakening around data, where we do need to have a data security strategy: how do we ensure that the right people see the data, that they're not abusing their access, and that data we care about is not leaving the environment? And so it is interesting, and there is a lot of data security tooling that's coming in, but also this synthetic data stuff. So maybe in terms of strategy, we're thinking about that area as well now.
Chris Hymes (05:01):
Alex, what do you think?
Alex (05:02):
Yeah, I think a lot of that resonates; those are commonalities across a lot of different approaches to security. But I think from the CISO perspective, starting from where your customer is and what business you're in is going to help define how your security team approaches things and what the scope of your responsibilities is. So, Chris H., I know that you've got the experience at Hulu, Netflix, Riot, your entertainment companies. That business-to-consumer is a different flavor than fintech business-to-consumer, which is a different flavor than business-to-business.
(05:31)
And so each of those customer sets is going to have a different relationship with security and with privacy. I would say in the fintech space, consumers resonate with privacy probably pretty strongly, and obviously care about security, but maybe more in the abstract. The consumers are going to rely on regulators to really represent them in terms of technical security in a lot of cases. So you've got a regulatory landscape of basically proxies for your customer, and you'll be doing a lot of work with them from a governance, risk-and-compliance perspective. Versus a B2B world, where you've got a sales enablement feature around your security program, where you're trying to make your partners comfortable that you're doing your best on behalf of their customers.
(06:12)
So I think a lot of what drives it is "What business are you in?", understanding profit and loss and go-to-market strategy, all those things around how your company operates, and then really getting empathy for the customers that you're serving, and then backing into the more tactical things: locking down data, increasing privacy, all those areas.
(06:32)
And so certainly, yeah, to tie it back to the synthetics thing: across a lot of those, sales enablement, certainly in the security space, is an interesting one. It's very hard to test security products on your own raw data, so synthetics are interesting there. And then also, on more of the B2C side: can you avoid sharing consumer data with partners? Can you avoid sharing it with your pre-prod environments? Can you do more to maintain the privacy boundaries of those consumer data items?
Chris Hymes (06:59):
So interesting, because Alex, you were giving us some use cases and [inaudible 00:07:04] talking about privacy quite a bit. And Omer, you were talking a bit about privacy and data usage. And the word regulatory came up a couple of times. I know within Riot I'm not only accountable for security, but I'm also accountable for data privacy. And I'm curious if you all have seen data privacy and security coming closer together within your companies, and how you've been tackling some of the data privacy and security related issues together, in relation to some of the regulatory compliance things we've been seeing happen across the world.
Omer (07:37):
Yeah, well I can share that one of the kinds of business customers that we support are advertising companies. And they are really trying to find new ways to collaborate in a way that maintains privacy, because if they can't maintain it, they can't collaborate at all. But there is so much value in being able to understand their customers in a combined way. So it's very much a business enabler. And yeah, I think cyber security has a role to play. We have our experience in this space, and also in thinking about adversaries and how that privacy might be targeted and abused. So if we can be a voice in that conversation, then absolutely, I think that's a role we should be playing.
Chris Hymes (08:22):
How about you, Chris?
Chris Wheeler (08:24):
Yeah, from a financial services perspective, this is baked into our DNA. It's something that has been going on for years, everything from PCI up to the much more modern regulatory standards. I think it's almost second nature, and we have a lot of these controls built in around who can access data and how it will be reported. And legal review is a part of just about everything that we do.
(08:44)
So, very excited to see some development on this front, where we might be able to get pseudo-anonymized data sets. That could help security teams, which particularly need that prod-like data, to evaluate the efficacy of controls. And also, even if it's not a control itself, how do you present that to an analyst? How do you test and train? It's increasingly important for our teams that are dealing with these multiple data sets and putting it-
Alex (09:14):
Yeah, and the way I think about it, you can't have privacy without security. So security is a prerequisite: you have to be able to safeguard the data you've been entrusted with. But security alone is not enough for privacy. So, starting with good data governance, you need to articulate the justification for why you're collecting a given piece of information, have retention schedules for why you're keeping it and for how long, and walk it through that whole life cycle.
(09:40)
So I think there are a lot of similarities in terms of rolling out a security program and rolling out a privacy program. We have them both within my organization, but I do try to draw a distinction: they are different programs and different disciplines, which required learning on my part to come up to speed on some of the privacy concepts and the regulatory frameworks. I'd say the regulatory landscape is probably leaning more into privacy, especially as we have more consumer protection boards in various countries across the world. And so there is a real recognition that folks care about their privacy, and that we need some good frameworks so that we're all talking the same language in terms of what that looks like.
(10:21)
I think one of the foundational problems I've seen pretty much everywhere that I've worked, or with anyone I've talked to, is the data inventory problem: finding all of your data and classifying whether it's privacy sensitive. And so some of the machine-learning and AI, even the generative space, for the classifiers that go out and help you do that, I think is super interesting. But it has been expensive in the past. Some [inaudible 00:10:43] tools were quite expensive. I think the cost has come down a bit. As the technology advances it becomes more tractable to run those at scale, or to sample across your space. But yeah, building those data maps and things is step one for data governance that I think a lot of folks still need to do.
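As a toy illustration of the classification step Alex describes, a scanner might sample values from each column and apply heuristics; here is a minimal Python sketch using regexes as a stand-in for the ML classifiers he mentions (the patterns, threshold, and sample data are all illustrative):

```python
import re

# Illustrative regex heuristics standing in for trained classifiers;
# the patterns and the 50% threshold are assumptions, not a product.
PII_PATTERNS = {
    "email": re.compile(r"[^@\s]+@[^@\s]+\.[a-zA-Z]{2,}"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\d{3}[\s.-]\d{3}[\s.-]\d{4}\b"),
}

def classify_column(sample_values, threshold=0.5):
    """Label a column by whichever PII pattern matches most sampled values."""
    best_label, best_ratio = None, 0.0
    for label, pattern in PII_PATTERNS.items():
        hits = sum(1 for value in sample_values if pattern.search(str(value)))
        ratio = hits / max(len(sample_values), 1)
        if ratio > best_ratio:
            best_label, best_ratio = label, ratio
    # Only claim a classification when most sampled values match.
    return best_label if best_ratio >= threshold else None

# Hypothetical samples pulled from a warehouse column.
print(classify_column(["a@example.com", "b@example.org", "n/a"]))  # -> email
```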
Omer (11:01):
Do you see that happening within security programs, or is that data classification more in the realm of the CIO or the data team, and then it's on them? Because I think they care about it, too, right?
Alex (11:17):
Yeah.
Omer (11:17):
Who is responsible for that?
Alex (11:18):
Yeah, I think it's going to depend on your governance structure. I think there are a number of places that can benefit from it. At least the way that I've thought about it is, oftentimes you'll have a data engineering team that has the data warehouse, the data lake, those things. Maybe you have a different persistence group that does the operational data stores. And then you probably have some stuff in the corporate world. So you need something that federates all those things up to a common view. And it's going to depend on where the expertise and momentum are in that organization.
(11:49)
So yeah, if the scope of your data teams is across all those different domains, then it could be there. Otherwise you might want some dedicated data governance or privacy function. And then also, in the financial space you're always thinking about the three lines of defense. So, who operationally implements it? Who oversees that? And then who is the final checker on it? So you're going to have different groups with some separation of concerns to do all three of those functions.
Chris Hymes (12:16):
Actually, it's interesting, Alex, because it comes back again to what the role of a security team is in a modern enterprise. You have some enterprises and teams, and there is no right or wrong answer, where people come in and just check a little bit more; they just do protection. And then there are teams that also say, "Hey, if I'm going to ask you to do something and implement something, I'm going to write the tool or provide the tool for you to do it."
(12:39)
And that's where I've seen some delineation with where privacy engineering might sit within security. And you might have privacy reviewers and auditors that might sit in the compliance organization. How have you all seen that working? Or how does it work within your companies?
Alex (12:54):
Again, I think it's an interesting one for security. You want to be an enabler, so if you're going to set a security bar or privacy bar, you want to help your company to meet it. But then when you think about the three lines of defense, you need some separation between the maker and the checker. So it is important to strike that right balance: you build the tool and then you hand it off to someone else to operate, and then you're more on the checking side, and you have internal audit to be your third line.
(13:20)
So yeah, I think that's generally been our approach. We IPO'd about a year ago. Pre-IPO, the infrastructure, security, and IT teams all reported to me, and so we worked together to do that enablement function with a focus on securing the customer. Post-IPO, especially with the regulators wanting to see more separation of those duties, we broke those things apart.
(13:40)
Right, so I think as a company matures you're probably going to see more specialization, and more of those different layers stratify out. But if you're a 10-person startup, then it's probably going to be mostly your engineering org doing all those things. If you're J.P. Morgan, then you're going to have a much larger staff to handle the separation of duties.
Chris Hymes (13:58):
What about you, Chris? How does it work at Morgan Stanley?
Chris Wheeler (14:01):
Yeah, I was going to say, Alex, when he stepped in, really nailed it: first-line, second-line, and third-line controls. I'm sure you can gather from some of my commentary that I'm more on the first line of controls there, on the more technical side. But then as you step up, you're getting people that are looking at those higher-level metrics, everything from how many true-positive incidents certain organizations churn through, to where we might be vulnerable to a particular software vulnerability; those are the second-line controls. And then there are those checks from audit, the third-line controls.
(14:31)
So that's the tried and true model that Alex alluded to there, as well. It was interesting to me, too, that during the panel we stepped through that, with me starting down there with the technical on the cloud and "Here's how it's looking," then Omer going up to the strategy, and then Alex at the CISO level. You can see some of those things manifest [inaudible 00:14:49].
Chris Hymes (14:51):
Yeah. So, Alex, you talked about data governance being a big challenge. I think even from a security perspective, the inventory and understanding your environment has always been challenging. It's getting easier again with more cloud-based tools and compliance-as-code, networking-as-code, et cetera, but it's still challenging. What would you all say right now? Is data governance, understanding where your data is, and data inventory the biggest challenge when it comes to protecting data in your environments?
Alex (15:26):
I think it is foundational, and I agree, your inventory example from security really resonates. Back in the day, I remember you would have network segments behind [inaudible 00:15:37] that you didn't know about. And so as a security person, you'd find all these computers, an entire building, and it was pretty embarrassing when you didn't know that those existed. And so now it's much better with the cloud, because you can pull your inventory from the APIs.
(15:53)
Yeah, I think the same advantages apply to the data inventory in general, if you're on a cloud platform. The APIs are there. Most cloud platforms are creating new ways to store data at a pretty rapid pace, so it's hard to keep up with where all those things are. But yeah, you still want those tools to give you an aggregated view. Especially if you're multi-cloud, or maybe you have some on-prem still, or you've got your general cloud platform and you've got SaaS platforms. Getting all those things together in one common view, I think, is still challenging.
(16:25)
So I think it could be fair to call that the biggest challenge. I think the data governance piece built around the inventory is broader than that, in terms of: what are acceptable uses for data? How do you balance the needs of the business, or the desires of individual employees to have broad access to data, with what the regulators, and therefore your customers, really want to see in terms of controls around it? And so, synthetics will potentially offer a double win there, where you don't have to have as much of that concern over controlling the data, and you can let folks run wild with it.
(16:59)
I think our challenge has been convincing the data analysts and engineers, and whatnot, that this synthetic data will serve their needs. And so, anytime that we can give them small examples of that, where they get little wins and good reinforcement, I think that's going to be the key to unlocking a lot of it.
Omer (17:14):
And Alex, I think that's interesting, because we all probably recognize that prevention is never going to be 100%. And we need to understand not just where the sensitive data is and what the policies are governing who can access it, but also who is actually accessing it: looking at it from the user's perspective, who is touching what data and where are they moving it? Taking that user perspective, that use-case perspective, monitoring the queries, monitoring the data movement, and taking action based on that.
(17:47)
And I think you raise an interesting point. I haven't thought about this before, but if you do build a good understanding of who is using what data, then you can start recommending, "Hey, it looks like you're using a lot of this kind of data. And it's cool, but you're introducing quite a bit of risk by moving this stuff around. And actually, we have a pretty darn good synthetic alternative for this data set. So can we suggest that you use this?" That would actually be a pretty good way to get teams more comfortable relying on synthetic data where it makes sense.
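As a hedged sketch of what that query monitoring could look like in practice, here is one way to tally who touches what on Snowflake, assuming access to the ACCOUNT_USAGE.ACCESS_HISTORY view (the connection parameters are placeholders):

```python
import json
from collections import Counter

import snowflake.connector  # pip install snowflake-connector-python

# Sketch only: tally which users touch which tables over the past week.
# Credentials below are placeholders, not a recommended auth setup.
conn = snowflake.connector.connect(
    account="my_account", user="security_reader", password="..."
)

ACCESS_QUERY = """
SELECT user_name, direct_objects_accessed
FROM snowflake.account_usage.access_history
WHERE query_start_time > DATEADD('day', -7, CURRENT_TIMESTAMP())
"""

usage = Counter()
for user_name, objects_json in conn.cursor().execute(ACCESS_QUERY):
    for obj in json.loads(objects_json or "[]"):
        usage[(user_name, obj.get("objectName"))] += 1

# The tables a user queries most heavily are candidates for a synthetic
# stand-in, per Omer's suggestion.
for (user, table), hits in usage.most_common(10):
    print(f"{user} read {table} {hits} times this week")
```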
Chris Hymes (18:22):
Now, I know some of us have more experience and some less with synthetic data. But I think we all understand the value and the premise of where it could go, and how it could really help security and data privacy. Have you all tried or experimented with any use cases for synthetic data within the security space in your organizations?
Omer (18:44):
I can share from my experience, where tool evaluation was a consideration. At Snowflake we really believe that security teams are going to want to use cloud data platforms for different security use cases, including threat detection, threat hunting, incident response. They're going to want to do it at scale, but we recognize that it's new for many security teams. And I think synthetic data is an opportunity to take something that's working in other areas and apply it to cyber security, potentially to solve this challenge of how you evaluate these new tools. Because it's an exciting prospect.
(19:27)
The scale and the analytics capability, that's exciting for the security team. But how do you cut through the noise and what the vendors might be telling you, promising you the moon? You really need to put it to the test. I think the reality is that different platforms are built for different use cases, and certain things will fail at certain scales, at certain latencies, with certain kinds of questions. And so once you do have a sense for how you're thinking about your next generation of threat detection, incident response, security analytics, looking at synthetic data is potentially helpful.
(20:09)
So for an example that we did: we had some cloud data, say 10,000 records of AWS activity logs, and we wanted to see what performance would look like when we try to do this kind of threat hunting at multi-terabyte scale. And where we couldn't use production data for that evaluation, the synthetic data was helpful. But I've got to share from the experience, it was less magical than I expected. I thought it would be like a little tree: you plant a little seed and it grows into this unlimited amount of realistic data.
(20:47)
The reality is it does take work. In the beginning, the data was too similar to the source, and we found some strings that shouldn't have been in there, that we couldn't use for the POC, and we needed to go back and redact those. But again, no magic here: when they were redacted, they were redacted too hard, just X'd out. Well okay, let's find a balance so that the results we get are going to be representative of the kind of performance that we could expect at production scale. But the potential is exciting, and going from POCs that might take months to POCs that could take days, at realistic scales, without risking exposing production data; it could definitely be the future of POCs for security tooling.
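A minimal sketch of the balance Omer describes, assuming AWS CloudTrail-style log lines: deterministic, format-preserving pseudonyms instead of hard X-ing out, so the evaluation data stays representative (the salt and field pattern are illustrative):

```python
import hashlib
import re

# Sketch of one middle ground: instead of X-ing values out, replace each
# sensitive token with a deterministic, format-preserving pseudonym so that
# joins and value distributions in the evaluation data still behave.
SECRET_SALT = b"rotate-me"  # placeholder; keep a real salt in a secrets store

def pseudonymize_account_id(match: re.Match) -> str:
    """Map a 12-digit AWS account ID to a stable, fake 12-digit ID."""
    digest = hashlib.sha256(SECRET_SALT + match.group().encode()).hexdigest()
    return str(int(digest, 16))[:12].zfill(12)

def scrub(log_line: str) -> str:
    # The same real ID always maps to the same fake ID, so cross-record
    # correlations survive even though the real identifier does not.
    return re.sub(r"\b\d{12}\b", pseudonymize_account_id, log_line)

print(scrub('{"userIdentity": {"accountId": "123456789012"}}'))
```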
Chris Wheeler (21:31):
Yeah. I was just going to say I haven't done it, and I won't say there is nowhere in Morgan Stanley that's doing this; I'm sure some part of the business is experimenting with synthetics. But my experience has mostly been personal use. And one of the things I was trying out was how to generate Windows event logs and network logs that were of interest. And the thing I was realizing as I walked through this was how much of those more categorical features didn't map so neatly. So it almost goes back to the masking thing: if you mask the term administrator, it means something completely different to a security analyst than the original term.
(22:06)
So I really had to think through what features I would really want to extract. It wasn't as simple as just feeding it in and then having it spit out representative logs. And that just speaks to some of the challenges we see with synthetics in info sec: the data is so interrelated across different data sets. And so I went through a process of realizing those logs are one thing, but realistically this is for much narrower use cases. Things like packets, where there are a lot more discrete features you can extract, and numerically related ones, that synthetic data might be a lot more suited for.
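One hedged sketch of the masking problem Chris raises: mask values within their semantic class, so that a privileged account stays recognizably privileged to the analyst (the privileged-account list and naming scheme are made up):

```python
import itertools

# A naive mask turns "administrator" into an arbitrary string and the
# detection logic loses its meaning; masking within a privilege class
# keeps the signal. The privileged-account list is illustrative.
PRIVILEGED = {"administrator", "domain_admin", "root"}

def build_masker(usernames):
    """Map each real username to a fake one from the same privilege class."""
    fake_privileged = (f"priv_user_{i}" for i in itertools.count())
    fake_standard = (f"std_user_{i}" for i in itertools.count())
    return {
        name: next(fake_privileged if name.lower() in PRIVILEGED else fake_standard)
        for name in usernames
    }

print(build_masker(["Administrator", "jsmith", "root"]))
# {'Administrator': 'priv_user_0', 'jsmith': 'std_user_0', 'root': 'priv_user_1'}
```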
Omer (22:39):
Chris, it sounds like you went all the way to synthetic PowerShell event logs, probably the craziest thing.
Chris Wheeler (22:46):
I tried but it did not go well, so it's lessons learned.
Alex (22:53):
Yeah, so it makes me think of how we could modify and train additional layers on these models to make them more domain specific. So whether it's CloudTrail or PowerShell, are we drawing from the right corpus of training data, or are there ways to customize and tweak these models?
(23:12)
I just started reading some of the background papers and things; I'm going to try and get smart on how these things are actually operating. Another example that I'll throw out there, that was exciting when I first saw it on Twitter (which was cherry-picking the positive outcomes), was writing IAM code, like egress access policies. Because that's a big burden on the security team, to double-check those and make sure they're correct. And here you could write your intent in English, and it would produce this IAM policy. It can produce code, like Python code and things like that. But the training corpus is probably something like Stack Overflow, which has all these very bad examples. There are all these examples that work but are not production grade, or that leave out certain important pieces as a teaching moment.
(23:57)
So I think that is one of the challenges with some of these generative models: you don't know the correctness of the answer, and the answers are also non-deterministic. So I like to joke that I could just retire and replace myself with a GPT that talks vaguely about risk. But I think that would be just as frustrating as the real CISO, because you get a different answer each time you ask a question. So how can we leverage some of these tools to plug the gaps in judgment, but not create a bigger problem?
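As a hedged sketch of one guardrail for the correctness worry Alex raises, a generated policy can at least be linted for the classic "works but not production grade" patterns before human review (the policy JSON and the checks are illustrative, not a complete validator):

```python
import json

# A made-up example of typical model output: syntactically valid,
# but over-broad in exactly the way Alex warns about.
generated_policy = json.loads("""
{
  "Version": "2012-10-17",
  "Statement": [
    {"Effect": "Allow", "Action": "s3:*", "Resource": "*"}
  ]
}
""")

def lint_policy(policy):
    """Flag wildcard actions and resources in Allow statements."""
    findings = []
    for stmt in policy.get("Statement", []):
        if stmt.get("Effect") != "Allow":
            continue
        actions = stmt.get("Action", [])
        actions = [actions] if isinstance(actions, str) else actions
        resources = stmt.get("Resource", [])
        resources = [resources] if isinstance(resources, str) else resources
        if any(a == "*" or a.endswith(":*") for a in actions):
            findings.append(f"over-broad action: {actions}")
        if "*" in resources:
            findings.append("wildcard resource")
    return findings

print(lint_policy(generated_policy))
# ["over-broad action: ['s3:*']", 'wildcard resource']
```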
Chris Hymes (24:27):
Yeah, I think ever since ChatGPT came out, countless CISOs have been trying to replace themselves. It hasn't worked yet, but we'll see where that goes. So we've been talking about some of the interesting use cases, more on the product-testing side. Obviously one of the hardest things within security, I happen to think, is actually threat detection, because many of the threats could look like normal user behavior from somebody.
(24:56)
Do you think there could be any place in creating these data sets, replicating logs, using synthetics to help us actually run through some ... I'm not going to call them red team exercises, but test the ability of our tools and detections to see if there is anything malicious in our environments?
Chris Wheeler (25:12):
Absolutely, I think there is plenty of potential here. One of the things I found interesting doing a literature review on this was just how few data sets are available. It's a very low number if you look at some of the research on this. I think I counted 20, maybe 21, that were available over the last 20 years. And you think about that as an info sec professional and you're like, "That's 20 different networks? Am I even getting a representative sample right there?"
(25:41)
So I think one of the things that would be exciting in this space is to get more of those larger data sets, and get some community validation around them, so that they could, to your point, be a common set of test data. If you were putting out, say, a network sensing product, a network IDS: these are the types of attacks you would want to detect, they are contained within this data set, and you know they are. Building some standards around those things would be a really cool thing. But due to all the challenges we've been discussing, it will take a lot of validation by security professionals to make sure that's actually representative, and of course, by vendors and others that are producing those tools.
Omer (26:19):
Yeah, I think that's very true, and I think you have to have some continuous synthetic generation, otherwise you overfit. And yeah, we detect that one, sure, but will we detect the next one? And I think the parallel we can look at is endpoint protection and the move from signature-based detection to what the CrowdStrikes of the world do today. The reason they were able to train the models to do it successfully is that they had access to millions and millions of malware samples, and only at that scale were they able to learn what malware looks like and move on from writing very specific new signatures. And I think where we are with threat detection today, outside of the endpoint, is still in that signature world. We've tried to keep up with the attackers by pumping out signatures. It didn't work then, it's not going to work now.
(27:06)
And so, I'd love to see somebody take that on because, Chris, I think that's exactly right. There's a real shortage, and hopefully synthetics step up so that we can, yeah, put the machines to work for a broader set of detection use cases.
Alex (27:24):
Yeah, I recall when I was first transitioning out of government into the private sector, and that was the first wave of ML-focused detection tooling, so I was super eager: "Oh, okay, cool. This sounds very interesting. Can I just throw data at this and have it tell me when stuff looks weird?" But as I picked through the products, or at least the tooling at the time, a lot of it was that they might use some ML techniques to analyze the data, but then they would actually use that to write a heuristic, a rule, a signature, and apply that in prod at that scale and throughput. And very few of them were actually tuning models on your data. It was more a tool they'd use on the backend, and then apply those.
(28:05)
And so circling back to it now, it seems like, again, in my limited understanding of some of these generative models, the large language models, when you get these order of magnitude step-ups, you get entirely different behaviors. So it's not that the technology is necessarily new, but it seems like we're getting close to that point where, yes, you could just throw raw PCAP at the thing and it would extract every possible dimension of it and then train on that.
(28:27)
And so you're not having to define features, which is basically your expert opinion saying these things are relevant. It's going to look at all the stuff. And I guess there are still the attention layers that you're going to help point in the right direction, things like that. But if you could get to that point where it's bringing you insights, versus you defining the insights for it to go bring back, things like that could be very interesting.
(28:48)
I think, as alluded to, you need large volumes of data, lots of dimensions, and a lot of compute to put that together. And I don't know that we've tried that just yet in the security space, but it could be interesting. I certainly think about, adjacent to security, the platform-abuse space: all the bots that are sending traffic against any public website, trying to stuff credentials or scrape pricing or whatever it is. Could you throw in all the dimensions of an HTTP header and packet combination and have it tell you when it can tell something is a tool? Because there you've got a lot of data, high data rates, and you've generally got access to the attacker tools, so you could run them yourself to develop labeled data sets, and then see whether it could do better than the existing edge detection things.
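A minimal sketch of that labeled-data loop: run known tools against your own endpoint to produce labeled traffic, extract header features, and fit a classifier (the feature choices and the tiny toy dataset are assumptions, not a production bot detector):

```python
from sklearn.ensemble import RandomForestClassifier  # pip install scikit-learn

# Illustrative header features; real bot detection uses far more dimensions.
def header_features(headers: dict) -> list:
    user_agent = headers.get("User-Agent", "").lower()
    return [
        len(headers),                                  # browsers send many headers
        int("Accept-Language" in headers),             # scripts often omit this
        int(any(t in user_agent for t in ("curl", "python", "wget"))),
        len(user_agent),
    ]

samples = [  # label 1 = scripted tool, 0 = real browser (toy examples)
    ({"User-Agent": "curl/8.0", "Accept": "*/*"}, 1),
    ({"User-Agent": "python-requests/2.31", "Accept": "*/*"}, 1),
    ({"User-Agent": "Mozilla/5.0 (Windows NT 10.0)", "Accept": "text/html",
      "Accept-Language": "en-US", "Accept-Encoding": "gzip"}, 0),
]
X = [header_features(headers) for headers, _ in samples]
y = [label for _, label in samples]

model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
print(model.predict([header_features({"User-Agent": "wget/1.21"})]))  # -> [1]
```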
Omer (29:36):
Hopefully there are some entrepreneurs listening and saying, "I'm going to build a company to do that," because there is definitely progress being made. Just this morning on Twitter, speaking of Twitter, I saw somebody released Infinite Seinfeld, where they've trained a model on all the Seinfeld episodes, and now there is just a new Seinfeld episode that never ends. The characters are having conversations; I saw them talking about some candy bar that has two flavors, and Elaine brought it up and George is reacting to it. And it feels like a Seinfeld episode, but it was never in any episode of Seinfeld. So if we could have networks like that, with some subtle attack, some lateral movement, going on in the background, yeah, that would be a really cool thing.
Chris Hymes (30:17):
So it sounds like the person who wrote that Seinfeld bot should be working in information security. That's what I'm hearing from all this.
Omer (30:24):
Yeah, yeah, if he wants a raise, yeah.
Chris Hymes (30:26):
So cool. We have about five minutes left, so I want to wrap it up with a closing question for each of you. Maybe we'll start with you, Chris. What problems do you really see for the info sec industry over the next two to three years?
Chris Wheeler (30:44):
Yeah. So I don't know if it would necessarily be a brand-new problem, but it'd definitely make a difference. We talked a little bit about migration to cloud and use of ML. We see a lot of consolidation around some of the larger vendors, and from a security-team perspective, a lot of the time we're learning to understand those signatures, and they may not be inspectable, to what Alex was talking about. So it's a different way of working that in theory is better: it's more automated, it's moving us ahead. But it's also tougher sometimes for those newer analysts to understand where those came from and what they need to do.
(31:18)
The other thing, which is probably no surprise, is just all these exciting technologies for automation and good purposes being used against us. On the offensive side there is plenty of stuff about GPT algorithms being used to generate phishing emails, which is going to keep me up at night, those types of things. And just like the scale of these cloud platforms: if you can use them for good, they can also be abused. So those are the two main things that I see in this space.
Chris Hymes (31:42):
How about you, Omer?
Omer (31:45):
I think it's this partnership that's emerging between cyber security teams and data teams, so that the security team can both understand what's happening in the data platform and also take advantage of innovation and tooling in the data space. We talked about synthetic data and data platforms and all these different ways in which the security team can be better, not operating on its own, but rather through partnership with the data organization. I think that's a big opportunity and a trend that we're going to continue to see.
Chris Hymes (32:15):
How about you, Alex?
Alex (32:16):
Yeah, I think I agree. We saw this evolution with application security becoming closer to cloud security, with a lot of the leverage that you have in a cloud platform being in ensuring proper configuration on the operational prod side. And so probably more of that into the data space, as companies learn to leverage their data in different ways and continue to move from older techniques of data exploitation into the modern realm. I think security is going to need to keep up with them, so that's definitely an area. And then I guess it's going to be the same problems we've had before: social engineering, unintended consequences of great new technologies that enhance sharing or enhance velocity. Those are great, but there are always security tail risks to think of.
Chris Hymes (33:05):
Yeah, new problem, same as the old problem.
Alex (33:09):
[inaudible 00:33:08], yep.
Chris Hymes (33:10):
Cool, well, I for one enjoyed the chat and conversation. I hope you all did as well, and I just wanted to thank you for jumping on this panel.
Omer (33:20):
Thanks.
Alex (33:20):
Thank you.
Chris Wheeler (33:20):
Thanks, [inaudible 00:33:22]-
Chris Hymes (33:20):
Bye, [inaudible 00:33:22].