September 2, 2025 | Data Exposure
Inside a Mega Data Breach: The Cost of Mismanaged Information Governance
Host: Justin Tolman
Guest: John Wilson, CISO, HaystackID
In the past year, several massive data breaches—impacting tens of millions of people—have exposed a troubling pattern: the data organizations forget about becomes their attackers' greatest weapon. In this episode of Data Xposure, host Justin Tolman talks with John Wilson, CISO at HaystackID and a veteran in digital forensics and incident response, to unpack how mismanaged information governance inflates breach risk—and what that means for organizations managing sensitive data in high-stakes environments.
In this episode, you’ll learn:
- How the proliferation of data sources—cloud, mobile, IoT—has transformed forensics and incident response.
- Why threat actors have evolved into funded, corporate-style operations that correlate data across breaches.
- How data mapping, retention policies, and pseudonymization shrink your exposure when a breach does happen.
If your organization is holding data it has no business or regulatory reason to keep, this episode might be your warning shot.
Thanks for tuning in to episode 2 of Data Xposure. Don’t forget to subscribe so you never miss an update. For show notes, resources, and to connect with us, visit exterro.com/data-xposure.
Subscribe on Your Preferred Podcast Platform
Episode Transcript
Justin Tolman (00:00):
What if the biggest problem for IT and security teams isn't a lack of cyber defense, but the amount of effort those teams put into protecting data that shouldn't exist? In this episode, we'll explore how your forgotten data becomes your attacker's greatest weapon. I'm Justin Tolman, the forensic subject matter expert and host of Data Xposure, a podcast for data risk leaders brought to you by Exterro. Today we dive into how effective information governance will mitigate your risk and also reduce the weight of a forensic investigation if there is a problem.
Joining me is John Wilson, the Chief Information Security Officer and President of Forensics at HaystackID. You're kind of one of the OGs of forensics. Been around for a bit, started in the mid-nineties, till now. Outside of the technology, because obviously software has changed, technology has moved forward from the mid-nineties, what are the things that you've seen in forensics that have really changed? And I would include incident response in that, DFIR. What are some of the main trends and things that you've seen over your career?
John Wilson (01:06):
Yeah, well, I mean, so obviously when I first started there was no tooling, and I don't want to get too much into the technology advancements, but I mean it was hex editing when I first started. It was going in, looking at data on disk, hex editing, and, oh, hey, there's the pattern I'm looking for, this is the data that's of interest. Let me see what I can do with that.
Working your way forward, obviously tools have evolved to do these things much more easily than just looking at raw data, but for the first 15 years it was computers. It was just computers. You looked at a computer or you looked at a hard disk, and that was where all the forensics was done. So many things don't live on a hard disk anymore. It's in memory, on a mobile device, or it's in the cloud that nobody has access to, and so that's a really key change to how you have to start approaching it.
(02:11):
All of this is just the growth of where data can sit.
I mean, I've actually worked a forensic case for the government where we found the data in a microwave, so bizarre stuff, but data lives everywhere and I mean people have smart fridges look at their fridge and their fridge is scanning their refrigerator constantly and telling 'em, Hey, you're almost out of milk. Hey, you need more eggs. Those sorts of things.
All that generates data, and what people don't really realize is that information can be leveraged. So now these threat actors, if they're getting access to your IoT devices and they're understanding when you're walking into your house, when you're leaving your house, they can catch you right at that moment when you don't have time to follow all the normal processes because you're in a hurry. They know you normally leave at four, but you're still there, you're running late, so they're going to ping you right then.
(03:09):
That's amazing information for threat actors to leverage and take action against you, and they're taking that information and then they're taking all this filed information, all of the social media, which is another area that just has really blossomed and exploded and getting leveraged in not just threat actors. We talk about threat actors a lot because obviously we do a lot of incident response and breach activity, but even just in a litigation case, all of this information can become very important. If the wrong people are understanding that information or getting access to that information when they shouldn't, they can create a real problem.
And so you really do have to understand that data is no longer just, hey, I've got a hard drive, I need your computer for forensics. It's their phone, it's their social media profiles, it's all the data they store in the cloud, because people store everything in the cloud.
(04:04):
Now, whether they realize it or not, if you're using Teams to do things, everything's getting stored out in M365 and SharePoint, unless you're one of the 10 organizations that still have on-prem. I mean, there are some, I joke about 10, but there are still organizations that have on-prem. But it's that proliferation of places that you have to look, and then you also have to talk about scale.
When I did my first forensics, it was an 80 megabyte hard drive. Put that into perspective today: you can buy an Apple MacBook Pro with, I think, eight or 16 terabyte drives in it. You can buy network arrays and RAID arrays that store almost into the petabytes all by themselves. Data has just proliferated. You're not looking at just a hard drive anymore. It's all these other things, and all these other things store data for different periods of time, like mobile devices.
A lot of the information is very transient. The more you chat, the more that goes away; the more you do whatever actions, the more that's getting overwritten and the prior histories are getting lost. All of these things have a lot of transient data, and then there's the absolute sheer proliferation. Those are the two big changes in the forensics world for me, and what I see is just, I have to look in so many places now, and I have to look at so much more data now.
Justin Tolman (05:38):
That puts a lot of work on the investigator, but I also think it puts a lot of pressure on these companies that are housing your data, because it's not just one thing. We're not on-prem. We're not just storing a file here or there. It's our entire lives being uploaded to third-party service providers, like your cell phone provider, all that sort of stuff. They're housing pretty intense amounts of data.
John Wilson (06:08):
Intense is almost not even enough of a word for it. It's insane. We actually had a case this year where I met with a client, and they have a very serious litigation situation going on that involves some bad actors, and they generate a petabyte of data a day. I mean, how do you even get your arms around that? How would you disaster-recover from that?
And they're like, we can't. It's just grown to where we just can’t. We know we can't know that we're just going to lose 30 days of data if anything happens, and that's built into our risk profile. A petabyte today is just unfathomable, but literally the organization is generating a petabyte of data a day and they're like, well, so now you've got this preservation request. I can't preserve a petabyte a day for you in perpetuity. And not only that, what are you going to do with it?
(07:10):
How are you even going to look at a petabyte a day? Let's talk about that. That's such a frightening number, such a frightening thing. And then you talk about, well, what's in that petabyte? And it's like, well, we've got PCI information in there. We have transactions occurring, we have financial information in there from our customers, and all of this protected information. You may have HIPAA concerns, you may have PII and PHI, and I'm using a lot of acronyms and I don't know if anybody listening is going to know all of these things, but it's all protected information.
That's the P in all of those; pretty much all of it is protected. So it's just frightening, and I don't know where we're going to land. The data has just continued to accelerate. Now you talk about adding in AI generation of information or processing of information. How much information is getting into the AI engine, how much is getting out of the AI engine, and how do you understand what your risks are there? The challenges are just magnificent.
Justin Tolman (08:19):
It's insane how much data, and we'll bounce around on that idea throughout the rest of this, but that amount of data has kind of facilitated this evolution of threat actors as well. You talked about, on a different call we were on, it used to be script kiddies or just people working in their basement, their mom's basement, who knows, right? It's not that way anymore. You mentioned it's almost these venture-backed organizations that are sophisticated threat actors. Can you share some more information on the evolution of that? Because it matches the growth of how much data we're storing.
John Wilson (08:57):
Yeah, no, it absolutely does. You start talking about four or five years ago, ransomware was really having its moment and generating a lot of revenue for these threat actors, and a lot of them were just little small groups, or groups of script kiddies or whatever, people out just buying, hey, here's a threat package you can buy, and then go deploy it wherever you can get it into. And so that all kind of happened, and all of a sudden it started getting to be meaningful money. So now all of a sudden you have these threat actors that are like, hey, I don't really need to go out and do the threats anymore. All I'm going to do is just go out and say, hey, buy my toolkit, and you go out and push it out wherever you can get it, and then we'll pay you 40 cents on the dollar for all the data that you get, or for each access that you get, or however they decide they want to calculate it.
(09:49):
There are lots of different formulas that they use, but now it's truly venture funded. You've got these groups of threat actors that made a bunch of money, and so they're out there just buying more access. Hey, anybody that can get me in, we'll pay you a bounty, and then we'll go do our thing. And it has just really, really blossomed and exploded. And then you've got to mix in nation-state actors that are getting involved. They've been involved, but they're starting to get involved in these larger data swaths because they realized, hey, if we can go get a huge data swath from a couple of organizations and start correlating that data, we really get where we wanted to be without having to exert as much effort. And then you've got these threat groups that have been successful, and they're making money, and they're providing venture funding, and then they have these master groups within them that are almost like super, super cyber gangs.
(10:45):
They're huge. They spend more working on figuring out exploits against your network than your entire company's revenue, so how on earth are you going to stop them? The machine has just gotten to that level and that scale, and the realization of the value of not the one bit of data but the conglomeration: hey, I can correlate this one bit that I got from company A, and this bit that I got from company B, and this bit from company C, to actually provide real intelligence that I can then turn into a valuable package for an attack, or to gain access to company D that I could never get into, because I've compromised three other people and got into the vendor ecosystems to get into that organization. It's a much more sophisticated machine in 2025 than I think most people have any guess or idea of--
Justin Tolman (11:49):
That synergy of data kind of means quantity is a quality of its own. You talk about nation states; I mean, it helps if they hack government targets, but they don't need to. If they want to know what the people are doing, buying, where they're at, their movements, you can go after the people, not the hardened targets.
John Wilson (12:09):
Well, yeah, exactly right. And even in the government instance, the nation states have figured out it may not be easy to go find an exploit directly at company A, B, or C, or government organization A, B, or C. But if I target the employees that are supporting that, I can start watching their patterns, I can start getting that intelligence, and they're starting to build a very smart profile of: hey, whenever they log into system X, they have to do MFA, we know that the MFA is Microsoft, and then they move on and they have to go through this gateway. So now they're starting to learn a lot more, and they can say, all right, now I'm going to target, I'm going to look for weaknesses in MFA, I'm going to look for weaknesses in the gateway, and see if I can put together a ride-along package to go straight through the entire attack vector that they have now very much learned all the specifics of. Whereas in the past they kind of had to go, hey, I'm going to attack. Okay, I found a firewall, now I've got to figure out the firewall. Now I can get to the next step, and I can get to the next step, and I have to keep digging through the layers, peeling the onion of the organization itself. Instead, now they're getting a lot of intelligence about, hey, I know that the third, fifth, seventh, ninth, 10th, 11th, 12th layers are this. All I've got to do is solve for layers one, four and seven, or one, four and six, to have a successful attack,
Justin Tolman (13:53):
Which really leads to, we had this breach recently in the news, AT&T got hit, lost a bunch of records, but the way you're describing it, the economics of a breach almost seem catastrophically in favor of the threat actor. How do organizations compensate for that or at least mitigate those types of risks?
John Wilson (14:19):
Yeah, I mean, so you do have to be very conscious of how you implement your data retention within your organization. Data ROT, redundant, obsolete, and trivial data, is real. People don't like to talk about it, but if data's just sitting on your network, that's a very, very big risk. And if there's not a viable business reason to have the data or there's not a regulatory requirement to have the data, you shouldn't have it.
People talk a lot about social security numbers, and social security numbers is a great one, because you look at credit cards and the PCI standards and how you protect credit card information. There's a lot of serialization, and you don't store the whole credit card number; you make a salt or a hash of it, so you store certain bits and pieces, and it correlates to other bits and pieces in order to be able to process those transactions without actually storing a naked raw credit card number. Which is great, but nobody's done that for social security numbers, for health record identifiers, for employee identifiers in an organization, and all of those things can have the same damage, especially when you start talking about how large the data pool has gotten, as this data gets larger and they can start correlating.
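The hash-and-keep-a-fragment pattern John describes for card numbers can be applied to social security numbers as well. A minimal sketch, assuming a keyed hash plus the last four digits is enough for your lookups; the key name and field layout here are illustrative, not from any standard:

```python
import hashlib
import hmac

# Illustrative only: a real deployment would pull this from a KMS or vault,
# never hard-code it alongside the data it protects.
SECRET_KEY = b"rotate-me-and-store-me-elsewhere"

def pseudonymize_ssn(ssn: str) -> dict:
    """Store a keyed hash plus the last four digits instead of the raw SSN."""
    digits = ssn.replace("-", "")
    token = hmac.new(SECRET_KEY, digits.encode(), hashlib.sha256).hexdigest()
    return {"ssn_token": token, "ssn_last4": digits[-4:]}

record = pseudonymize_ssn("078-05-1120")  # the famous Woolworth sample number
assert "078051120" not in str(record)     # the raw SSN never lands in storage
```

The token still lets you match the same person across records (the HMAC is deterministic), but a stolen copy of the table yields neither full SSNs nor anything correlatable without the key.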
(15:38):
If they can figure out your social security number on platform A, and go get your date of birth and birth city from platform B because you're posting it on social media all the time, which I don't recommend, but--
Justin Tolman (15:54):
We--
John Wilson (15:55):
Won't delve there. They're able to start saying, okay, now I've got the social, I got the date of birth, I've got the birth city and I've got the maiden name of the mother, and next thing you know, they've got your identity and they're performing transactions or worse, they set up a secondary identity using your information in so many different aspects. Social security numbers is a really big one.
Look at the AT&T breach. You started talking about it, and that was 110 million social security numbers, I think it was, or 109 million, but it wasn't just social security numbers. It was social security numbers, dates of birth, addresses and prior addresses, phone numbers, all of this compilation of information. And so that's a very rich resource.
Again, the AT&T Snowflake breach really started in 2022 and rolled into earlier this year, when more data was released from that original breach. That's a single source, but now with the leveraging of AI and just data-driven exercises, as these organizations have gotten much more organized and larger and sophisticated, they have people that are doing data management and data mining, and now they're going to look at, hey, we got four breaches from these companies.
Where do we have intersections? Where do we have additional data so we can start understanding a bigger picture about the target, the victims that are in that dataset.
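The intersection exercise John describes takes only a few lines once two dumps share a key. A toy sketch with entirely made-up records, keyed on email address:

```python
# Two hypothetical breach dumps from different companies. Neither is very
# useful alone, but they share a join key (the email address).
breach_a = {"alice@example.com": {"ssn_last4": "1120"}}
breach_b = {"alice@example.com": {"dob": "1990-03-14", "birth_city": "Akron"}}

# Merge field-by-field: every dump enriches the profile for a shared key.
merged: dict[str, dict] = {}
for dump in (breach_a, breach_b):
    for email, fields in dump.items():
        merged.setdefault(email, {}).update(fields)

# merged["alice@example.com"] now combines both sources into one profile
```

That is the whole threat model in miniature: each breach is a column, the victim is the row, and every additional dump fills in more of the row.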
Justin Tolman (17:30):
You mentioned that data ROT, and you very graphically described the risk and the problem. I'm an organization, I've gotten to this point, and I look at my data, and I'm listening to John and I'm like, oh crap, I've got a lot of stuff. What's the first step? If you were going in to work with a company, what would be your recommendation? How does somebody get started in fixing or minimizing that data ROT?
John Wilson (18:01):
Yeah, so I mean, you've got to understand your data. That's probably the biggest thing, and so many organizations think they understand their data, and every time I get involved in a consulting engagement, they rarely actually do. They're like, yeah, well, we collect this, this, and this here. And I'm like, well, let's do some data mapping. Let's go out and look at where all your data is stored and what kind of data is in those buckets.
And hey, by the way, did you realize that you're storing last-four socials over here and full socials over there? So now somebody, if they get access to your data, can start correlating all the rest of the information that you're storing with just the last four, because they're able to find the actual socials over here. So you really do have to understand your data. Understand where your PII is, your personally identifiable information, your personal health information, your various categorizations of protected information.
(19:02):
You've got to understand where it's all stored. Then you have to understand your business itself. What are your requirements to have that information? If you're a hospital, your requirement is probably seven years to maintain that information. If you're just a company, and it's your HR that's storing PHI because you offer health insurance, your requirement to store their health information, the things that they've submitted through the organization, may not be as long. So it's really understanding what your requirements are and what your obligations are, so that you can then do that data mapping exercise and say, hey, I have data here that's been there for 10 years.
I mean, we just did a consulting engagement with a client and found financial records from 2002--
(19:51):
in their data, and it's like, the employee's been gone for 12 years. Why do you still have this here? I don't know. That's a problem. That's serious risk, and that's a serious problem. There's no reason to have it.
Get the policies in place, make sure that you're meeting your regulatory requirements and your protective requirements and your business requirements, and then put in the policies and start the operation. You can start with the super sensitive: okay, can we get rid of social security numbers? Can we anonymize or pseudonymize the social security numbers, put in a serialization pattern so that we understand who the people are and it ties back to the information, but then that information is stored in a separately controlled and encrypted space?
Then I can start cleaning up a lot of that information, start eliminating a lot of that risk. And that's really what's key when you're talking about data ROT: understanding your data and getting rid of the data that you don't need. I mean, I can't tell you how many organizations we go into, and I'm sure it's the same for you: they have email forever and ever and ever. People are just making archives, if their email system's actually making them archive it. In a lot of systems nowadays, storage has become so cheap, people just don't pay attention to it. Yeah, okay, so his mailbox went from 10 gigs to 20 gigs. Who really cares? That costs us another penny. Okay, move on.
(21:21):
The actual cost is really in the risk. It's in having that data, and so it's really getting into understanding the data, mapping the data, and getting rid of the ROT.
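A first pass at the data-mapping step John outlines is often just an inventory of what has sat untouched past its retention window. A minimal sketch, assuming a single retention period applies; in practice the window comes from your regulatory and business requirements per data category, and anything on legal hold is excluded before deletion:

```python
import time
from pathlib import Path

RETENTION_YEARS = 7  # illustrative; derive from your actual obligations
cutoff = time.time() - RETENTION_YEARS * 365.25 * 24 * 3600

def stale_files(root: str):
    """Yield files whose last modification predates the retention window."""
    for path in Path(root).rglob("*"):
        if path.is_file() and path.stat().st_mtime < cutoff:
            yield path

# Review before deleting anything -- legal holds trump retention schedules:
# for p in stale_files("/srv/shares"):
#     print(p)
```

A report like this is what turns "we have financial records from 2002" from a surprise during an incident into a line item in a cleanup plan.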
Justin Tolman (21:31):
Yeah, because the less there is for them to take, the less there is to lose and the less risk you're incurring when that data does walk out.
John Wilson (21:42):
Yeah, absolutely. And I constantly preach that the cost of having the data isn't really that big of a deal anymore. It really isn't. It's the risk. It's the cost associated with that risk, because it's requiring you to do so much else. And info gov has always been, hey, yeah, if we can do a little bit of info gov, we'll do it, but there's no ROI, we don't get any return on investment. I beg to differ. I always explain to clients, and I'll walk them through it: there's a huge ROI to info gov, but you have to realize what the risk value is, because it's the risk value that really is the big return on investment in info gov that allows you to really limit that exposure.
Justin Tolman (22:38):
IBM released their Cost of a Data Breach study for 2024. I haven't seen the 2025 one yet, if they've released it, but in 2024, to your point, the average cost of a breach worldwide, including fines, payouts and remediation to victims, and what it took to get the organization back up and running, was $4.8 million. In the United States it was about $9.6 million. So to your point, housing the data, the gigabytes, probably costs a lot less than $10 million, but if that data leaves, suddenly its value just went sky high.
John Wilson (23:27):
Absolutely, absolutely. And when you talk about US breaches, when a breach occurs for an organization, again going back to 2024 estimates, since 2025's aren't necessarily solidified yet: the value of an individual identity, a person or an entity as we typically call it, is around $190 each. That's the cost to the organization if it's breached; each individual costs around $190. If it contains any medical information, that goes up to about $500.
So there's a great example in and of itself, but people aren't even realizing that's just the cost of the actual incident itself and the risk exposure that has occurred because of it. You also have to look at: now I've got to deal with that breach event. I've got to do breach response and notify all those people, and then there's the goodwill, the faith lost in the business.
There's plenty of examples of stocks dropping because an organization has a big breach event. They get a big fine from the FTC or the FCC or the SEC and all the three-letter agencies that are around the world, and you also start dealing with privacy. So now the cost of the breach itself is that $190, as a for instance, but now you have to deal with privacy notifications and privacy breach, and it's a whole set of additional requirements that are not in those calculations historically.
(25:08):
I'm hoping that we start seeing more of that where people are taking into account not just the cost of a breach itself and dealing with the breach itself, but dealing with the privacy risks and the privacy exposure that comes from those as well, and there is no number around that yet.
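John's per-record figures make the exposure arithmetic easy to run against your own data map. A toy calculation using the numbers he cites; the function name and the flat per-record model are illustrative, and as he notes, privacy obligations sit on top of this:

```python
# Rough exposure estimate from the 2024 per-record figures cited above
COST_PER_RECORD = 190      # USD per breached identity
COST_PER_MEDICAL = 500     # USD if the record includes medical information

def breach_exposure(records: int, medical: bool = False) -> int:
    """Back-of-the-envelope incident cost, before fines and privacy response."""
    return records * (COST_PER_MEDICAL if medical else COST_PER_RECORD)

# A 100,000-record breach with no medical data:
assert breach_exposure(100_000) == 19_000_000  # $19M
```

Even a modest dataset of retained-but-unneeded records carries an eight-figure price tag at these rates, which is the ROI argument for information governance in one line.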
Justin Tolman (25:29):
So we've been kind of talking about these big businesses. Do you find that, with the growth of the threat actor empire, they work as corporations themselves? Are they primarily targeting big businesses, or are they targeting anything they feel they can monetize data from? Should everybody be worried, and how much?
John Wilson (25:53):
Yeah, I mean, there's always, hey, I'm looking for the big opportunity, the big score, but there are tons of these groups that, like I said, are almost venture funded. So it's going out to all of these small groups, and the five guys that had success last month, they were able to prove success. So now the venture fund group says, hey, here's $2 million.
Go see who else you can get us. So there are lots of targets in the small to mid-size business space, especially any business that has information that may be valuable or leverageable, or that provides another piece of the puzzle to the big company's data that they're looking for. And the whole challenge, not challenge, the risk, is that now that they're getting much more sophisticated, these threat actors are going out and they're using AI and saying, hey, I want to breach big company A, how do I do it?
(26:55):
Give me a profile. And that big company, their data's all getting analyzed, and again, it's all this data that's been breached. They're going out and they're saying, oh, okay, hey, well, these small companies are third-party vendors to that company, so if you attack these vendors, that may be your way in. So there are a lot of attack vectors starting to occur in that small to mid-size market, because there's a lot of intelligence being put into, hey, how do we optimize and improve our attack abilities? And they're learning a lot, and they're figuring out that these attack vectors aren't always just banging at the front door or banging at the software back door. Sometimes it's coming in from the third-party avenues, coming in from lots of different directions.
Justin Tolman (27:48):
I believe the North Carolina pipeline was hacked through an IoT device that was provided by a third party, managed by a third party, and that provided the entry into their networks. So not only information governance, but also vendor onboarding is--
John Wilson (28:06):
Yeah, absolutely critical, vendor onboarding: understanding who you're getting involved with and doing a third-party screening. Do they at least meet a minimum security profile for your organization? Do they have protections in place? Because, and this goes back a few years, one of my cases was an FDIC bank seizure, where they were closing a bank. The bank was failing, and the FDIC went out to take it over, and we get out there and we're starting to explore and figure out what problems they have at the bank. We got the third-party pen tests: hey, yep, nothing was able to get through the firewall. Great. Looking through all that, I start actually data mapping the network and figuring out that the reason the pen test was unsuccessful in breaking through the firewall is that the firewall was the endpoint of the network,
Justin Tolman (29:13):
So no--
John Wilson (29:14):
Go. Everything was sitting in front of it, so of course they didn't break through. They passed the pen test because there was nowhere to go. They were already through everything, so that turned into a bit of a problem.
Justin Tolman (29:30):
Yeah, it's a bold move to use all your resources as the honeypot, I guess
John Wilson (29:36):
It is, it is bold. That didn't work out well for them. They lost like $8,000 in ATM fraud.
Justin Tolman (29:44):
But no, it's those types of things where I feel like, because you talked about this rapid evolution. Obviously we haven't just had threats in the last five years, but you've talked about how it's sped up in the last five. And when I say small businesses, I'm not talking solo-operated things. It can be a couple of million dollars and you're still considered small, and I feel like there's this mentality that these threat actors are only going to target the big guys.
Like we mentioned the AT&T breach, or rewind eons ago to the Sony hack, where terabytes and terabytes of data were exfiltrated. But as you pointed out, small guys can pay too, and I think the risk to smaller companies is even greater, because they may not be able to recover from the damage caused by a breach. They don't have that resiliency, or that customer base that can't go anywhere or won't go anywhere, or that sort of thing.
John Wilson (30:44):
Yeah, no, a hundred percent. In my mind, the small to mid-size businesses are at much bigger risk, because the larger organizations have funds set aside if something happens; they have insurance that's going to cover those things. The small business doesn't have an unlimited cyber insurance policy, and they're two to $5 million in business and getting hit with a $10 million lawsuit, game over. It really is that risk profile, that risk balance, that's much larger on those small to mid-size businesses, and the threat actors have really started figuring out that it's those small to mid-size businesses that don't have the breadth and depth of protections that the larger organizations have, and so they've become a much more viable source.
Again, a lot of them are third parties to the big companies, so they can breach the small guy. There's been a bunch of breaches in the news where a small software vendor was leveraged in the big attack: they got in through the small software vendor, were able to embed malware or a payload of some nature in that software package that then gets delivered into the big company, and that's how they finally broke into the big company. And so that risk is so large in balance to what they can actually support or sustain or defend against,
Justin Tolman (32:21):
Which makes it even more important for them, that information governance you talked about. Don't over-retain that data, because even if you can't afford all that other stuff, you should be able to afford not to hold onto something, right? I don't need that.
John Wilson (32:34):
That's exactly right. That's exactly right. It doesn't cost hundreds of thousands or millions to go out and figure this out, especially in a small to mid-size organization with small to mid-size data resources. So go figure it out, get it mapped, understand what your data is, get your policies in place. The bill is much more tolerable and in line with the business itself than having the event occur. In almost every circumstance, with almost every client that I've had a case with or gone in and consulted with, almost every single instance, it's much cheaper, much more affordable and sustainable for businesses of all sizes. Whatever size you are, the scale is there when you're talking about how do I protect myself from the risks versus how do I defend myself once it's happened.
Justin Tolman (33:29):
Absolutely. An example might be, for these small to mid-size businesses: you might need to use a social security number to authenticate a person, but once you've authenticated that person, just don't retain it unless you need it for your day-to-day, and rarely are you rescanning their social.
John Wilson (33:48):
Exactly. Get rid of it, or anonymize it, or pseudonymize it. There are ways to protect yourself in those scenarios.
Justin Tolman (33:58):
Yeah. So at the end of the day, we've talked about how threat actors are getting more sophisticated and the economics of a breach are often not in your favor, but to mitigate that, you need to manage your information governance. If you had to encapsulate that into a word of advice to a CISO, to an IT department, what would be your key takeaway for them to mitigate those risks? Maybe something they're not doing right now.
John Wilson (34:32):
And it sounds ubiquitous based on our conversation, but still, the data ROT. The secret to life in this world is kill the data ROT. If you get rid of the stuff that you don't need, that you don't have a requirement to have, or you at least anonymize it or pseudonymize it so that it's not usable in an attack vector, kill the data ROT. That's my lead-in with every call that we have. Kill the data ROT, understand your data, map it out, figure out where your PHI is, and let's get rid of it.
Justin Tolman (35:09):
That's proved out in nature. Forest fires often are the worst when there hasn't been one for a long time because of all the dead undergrowth and fallen detritus that lights up and makes it so much worse. Same with a business.
John Wilson (35:24):
Yeah, it's exactly the same. Controlled burns: clear the perimeter, get rid of the debris, and then the burn, if you do happen to have one, is manageable. If you just let everything go wild and it's all sitting there, the burn's not manageable.
Justin Tolman (35:41):
Yep. Well, John, I appreciate you taking this time to sit down with me. Anything you've got going on, any other presentations coming up you want to plug for our viewers? I know you're going to be at our user conference.
John Wilson (35:56):
Yeah, so I'm going to be at your user conference. I also have a webcast next month on deepfakes, and that's something that we didn't get too far into here today. Deepfakes, or synthetic media, are really another scary attack vector in this world, where people are getting attacked because they're getting convinced that something's real, because they saw it in a video or they interacted with it in a video, and it may not be real. Obviously, I'm passionate about it. I want to see businesses protect themselves, get rid of that data ROT, and be able to eliminate, or not necessarily eliminate, because again the economics are not in their favor, but at least limit or reduce their risk.
Justin Tolman (36:44):
Absolutely. Yeah. Make some controlled burns. So--
John Wilson (36:49):
That's it.
Justin Tolman (36:51):
Well, thank you.
John Wilson (36:52):
Thank you. Have a great day.