
The Year Everything Changes: Data Governance, AI, and Litigation Risk in 2026
Host: Fahad Diwan, Director of Product Marketing, Exterro
Co-Host: Jenny Hamilton, Chief Legal Officer, Exterro
Co-Host: Justin Tolman, Subject Matter Expert, Exterro
2026 will mark a turning point for organizations managing sensitive data across legal, privacy, security, and forensics. With AI creating new categories of discoverable data, regulators increasing demands for documentation, and courts penalizing manual, inconsistent processes, the old playbook is no longer defensible.
In this webcast, Fahad Diwan, Jenny Hamilton, and Justin Tolman deliver a fast, insightful breakdown of 2025’s biggest trends—and their predictions for what comes next. Expect clear guidance on how to modernize preservation, rationalize data sources, prepare for AI governance, and build a defensible, automated strategy that stands up to both regulators and opposing counsel.
Apple Podcasts | Spotify | YouTube
Fahad Diwan (00:08)
Welcome everyone to today's Data Xposure Podcast. We're discussing the year everything changes: data governance, AI, and litigation risk in 2026. I'm Fahad Diwan, and while I'm a co-host of this podcast, in this episode I'll be playing more the role of a moderator, teasing out the golden nuggets of wisdom from my other co-hosts, Jenny and Justin. If you've been watching the headlines over the last year, you already know: the way organizations collect, preserve, and govern data is shifting faster than ever. AI is creating new categories of discoverable information, regulators are escalating enforcement, and courts are losing patience with manual or inconsistent processes.
Today we're going to cut through the noise. I'm joined by two of my favorite people to talk about data risk and Data Xposure: our fearless legal officer in chief, Jenny Hamilton, and from our digital forensics team, Justin Tolman. Today we want to look back at the trends that defined 2025 and make some predictions about what legal, privacy, and compliance teams need to prepare for in 2026. So let's jump in.
Jenny, starting with you, before we dive into the trends, I really want to pick your brain on this. What surprised you most about how organizations handled data risk in 2025?
Jenny Hamilton (01:37)
What surprised me the most was how fast legal and compliance adopted AI. They got out of the stands and onto the playing field with the business, not only to help develop governance programs around it, put the guardrails in place, and educate the population on what the risks are and how they are counterbalanced by the value of adopting AI, but also to actually look at it from a business perspective: this technology does a lot of things that we have to do as well to support the business, like how to draft, how to message,
Jenny Hamilton (02:26)
how to identify risks and pressure test different strategies. And this is useful for us too. So we were on both sides of that coin very early on. I haven't seen legal and compliance adopt something so quickly for their own use while also managing the risk of it for the business.
Fahad Diwan (02:47)
That's really insightful, Jenny. I've heard a lot of talk about how to govern AI and the risks of AI, but there's not as much talk about how legal and risk professionals are using AI to actually mitigate the risk arising from AI. That was very insightful. What about you, Justin? What surprised you most about how organizations handled data risk in 2025?
Justin Tolman (03:12)
There's been a boiling point coming for a long time, but it's interesting, because I found this year that a lot more organizations are focusing not just on their own in-house data, but on what the third parties they partner with are doing with their data as well. We've seen a lot of breaches in 2025, and in late 2024 of course, where big companies get put in the news for information being breached or leaked out to the internet. And then you read about it and realize, wait, it wasn't technically that company; it was one of their third-party providers. It was software that they used, or sometimes even hardware they use that had software in it, that was breached. So I've seen a shift in thinking in 2025 of, okay, we don't just need to secure our own data. That's definitely very important, and we're not going to let off the gas on it. But we also need to look at, I'm using software X or Y or whatever, and I need to make sure that they also have good data posture, good data governance. How are they protecting my information that they're using to provide my services? I think it's always been in the back of people's minds, but it's starting to come now to the front, where
Fahad Diwan (04:16)
Mm-hmm.
Justin Tolman (04:35)
It needs to be a focus.
Fahad Diwan (04:37)
It definitely needs to be a focus. Many software companies, from my understanding, are using the major LLM providers, whether it's OpenAI or Meta, to build their own new software products. So more and more software companies are interdependent on other companies to process parts of their data pipeline. And to protect and manage data risk, one necessarily has to protect and manage the data risk of the third parties with whom they work. Now, 2025 was the year AI went mainstream inside the enterprise. Jenny, what are the biggest data governance and litigation risks organizations are now facing with AI-generated content, logs, and prompts?
Jenny Hamilton (05:23)
By far the biggest risk is the volume of transcripts being generated from meetings. These AI note-takers are becoming a default. People really enjoy the note-taking functionality, the ability to summarize and push tasks to the forefront, to be able to track and follow up on their work and make their communications and meetings more productive.
The downside is that if this becomes the default, this is the biggest volume of data. You can imagine every meeting being recorded and the volume of that: the recordings, the transcripts, the number of emails being circulated. And also, you know, AI makes mistakes. And sometimes they're pretty funny ones.
Fahad Diwan (06:16)
Ha
Jenny Hamilton (06:17)
It changes the words or misunderstands and misinterprets what's being said. I've seen it myself. It's like, this is the one thing I wouldn't want it to say, and it totally changed the meaning and intention. So people are actually going in and reviewing all of the documentation being created and fixing it. When you're a litigator, you see this in depositions.
Fahad Diwan (06:41)
Hmm.
Jenny Hamilton (06:45)
The transcript becomes such a powerful source of evidence. People believe what's written over what people say, even on the witness stand under oath. So when you do depositions, the transcripts have to be reviewed by the lawyers and corrected so that you're not creating incorrect, inaccurate evidence. This is something I think of with my litigator's hat on
Fahad Diwan (06:54)
Mm-hmm.
Jenny Hamilton (07:11)
as a huge risk to the organization, though maybe it's outweighed by the benefit of the tech and how it helps people feel more effective in their jobs.
Fahad Diwan (07:21)
That is a great point. I wonder about that too. Back when I was in law school, one of my professors told me, Fahad, don't write anything in an email that you don't want read aloud in a deposition or in a courtroom. But now everything, every interaction that happens in the digital sphere, and more and more interactions are happening there because of remote work, is being recorded and transcribed. So now it's almost, don't say anything in a meeting that you don't want read back in a courtroom. It creates a lot of risk for the organization as well as for the individual. Now, switching gears a bit, Justin, throwing it to you: are teams actually aware of which AI systems are producing data that must be preserved, or would you say this is still an invisible risk?
Justin Tolman (08:12)
I think organizations are aware of the options and aware of the services that are official. And what I mean by that is, let me back up a little bit. Technology has obviously exploded over our lifetimes, and every year there's something new, new tech coming out. But one of the interesting things about AI was that it was so approachable, right? Jenny talked about people using it to create transcripts, depositions, to-do lists. It's super easy, and that kind of lulls people into complacency. It's also affordable to use outside of official channels. Think about some of the legal software that we use, or that our professionals use at work; they're not going to have that on their home computer, so using it and spreading that information around is a little bit harder. It's not impossible, but it's harder. Whereas it's not expensive for an employee to have their own ChatGPT or Gemini Pro or whatever and put company data into that. So the trend I've seen is organizations trying to stay aware while also trying to enable their professionals
Fahad Diwan (09:24)
Mmm.
Justin Tolman (09:32)
to be able to do their job faster, more efficiently, maybe even more creatively, by providing these services to them at work, whether it's Gemini or an official ChatGPT instance, in the hope that people don't use their own solutions, and to bring that data into a more manageable situation for the organization and prevent those data leaks. And I think that's what we're seeing. I mean, most organizations now are trying to provide some sort of AI service, and so they are aware of those.
Fahad Diwan (09:40)
Mm-hmm.
Justin Tolman (10:01)
And they're just trying to get a hold of that shadow AI, as we could call it, analogous to shadow IT, to try to curb that data leakage.
Fahad Diwan (10:10)
I love that point, Justin. It reminds me of something I always tell my son when he's acting out. I say, I can't let you do that; I name the limit. And then I say, here's a reasonable alternative that lets you act it out. So you can't hit, but if you're feeling angry, you can punch a pillow. And I think it's more effective to say to one's employees, look, you can't just send all of your company data to ChatGPT or whatever, but we're going to provide you with an appropriate, work-sanctioned AI app like Gemini, or a work-sponsored ChatGPT account, that employees can leverage, because the genie's out of the bottle, right? These tools provide everyone with so much value that it would be foolish for companies to say, don't use it. The right approach, one more in line with how human beings now work and function, is to provide them that tool in a way that is safe and secure. But it does raise this question: more and more data sharing and more and more data creation also lead to more and more data sprawl, right? Now we have more and more applications that we're using, work-sanctioned or not. And Justin, we keep hearing that mobile, chat, and cloud data are overwhelming investigations. What would you say is driving this explosion?
Justin Tolman (11:34)
So I like your term explosion. I kind of liken it to the Big Bang: the explosion happened a long time ago. We've been using mobile phones for work for a long time. We've been using cloud computing for a long time, but we're just now finally detecting it, right? It's taken all these years to finally decide it's a priority. And a lot of what is driving that, unfortunately, is breaches, is attacks, is the scaling rate of attacks. AI is fueling that even more; it still takes a skilled person, but it's a lot easier now to generate breaches from inside and from outside. So the explosion in tech and in this data actually happened a long time ago, especially with the decreasing prices of storage. Storage is insanely cheap now, so it's easier to hold on to it.
Fahad Diwan (12:24)
Mm-hmm.
Justin Tolman (12:28)
But the more data you have, the more risk you have, and the more risk you have, the higher the cost when it goes out. So it is an explosion, but it happened a long time ago, and now we're finally tracking it and deciding to do something about it.
Fahad Diwan (12:43)
Interesting. Right, so from your perspective, this proliferation of data already happened and we're just now coming to terms with it and understanding what to do with it. Would you say organizations are equipped to handle the growth of these new data sources, or would you say they're reaching more of a breaking point?
Justin Tolman (13:07)
In my interviews with people, it's about 50-50. Some organizations have focused on it, and a lot of that has to do with whether they're in high-risk environments, medical, financial, that are constantly seeing litigation in some shape or form. But others are kicking the can down the road because, like you said, it's a lot of data. And so they look at this monument, this mountain of information. And I know we talk a lot about, and I think we'll talk a little more later on, defensible deletion. But where do you start? How do you start? And so they say, well, maybe we just won't deal with it. And there's a quote, and I won't get it exactly right, but it's an engineering quote: one of the stupidest things you can do is make efficient that which shouldn't exist in the first place.
Fahad Diwan (13:45)
Yeah.
Right.
Justin Tolman (14:03)
And I feel like that's what we're doing a lot as a whole industry, tech, legal, whatever, with our data. We're doing a really good job, or trying to do a really good job, at protecting that which we shouldn't have. Like, no, get rid of it.
Fahad Diwan (14:19)
Right. There's no point in making something unnecessary more efficient, or there's no point in protecting data that you don't need if you could just dispose of it and essentially make the risk from that data zero.
Jenny, I'm sure one thing that made you a great litigator and I know makes you a great general counsel now is that you have this knack for being able to see around corners. And so I want you to put that hat on and help us make some predictions for the year ahead. If you look ahead to 2026, do you believe courts will start treating manual preservation as inherently risky?
Jenny Hamilton (14:59)
I think they already do. I think they have for some time put out warning signals through their opinions when they see manual preservation. And of course, if they're looking at it, it means something went wrong: the manual preservation did not comply with what they believe the rules are, or at least your opposing party argued that there's a weakness in your preservation process. And manual preservation is tough when you're facing new sources of data and growing volumes. To really explore how to pull the data takes time. You have to talk to people. You have to understand it. Not all these sources are as accessible and easy to use, as Justin mentioned about AI. So I think this has always been a concern for practitioners in this area. I think there's just this balance of what's practical versus what's overly time consuming, to Justin's earlier point about making efficient processes out of things that don't deserve it. And anytime there's a new tech like AI, the industry is definitely going to double down on understanding, where is this data stored? Where is Copilot putting meeting transcripts and prompts and output? Is it being stored like other custodial data, or is it being stored in an archive somewhere that's less accessible, and how do you pull it out? But the reality is that a lot of data preservation is manual, outside of tools like the ones we offer for legal hold and preservation in place. When you're actually looking at things outside of the norm, it always starts off as manual, and it always feels very risky. And it is.
Fahad Diwan (17:05)
And so then to move away from this manual process, what parts of preservation and collection do you think should be automated first to have the biggest impact?
Jenny Hamilton (17:16)
My favorite. So this is an idea AI triggered for me a while back: that we could use it to draft legal hold notices, not just help with the mechanics of preservation, but draft legal hold notices based on the claims in the complaint and the answer, in parallel with those court filings, so you're covering the right scope. You don't have to sit down and re-engineer the whole thing every single time, and you can identify in-scope custodians who may not pop up on your radar when you're thinking it through case by case. And you usually have, hopefully, a team of people, a paralegal and an attorney, working together so nothing falls through the cracks. I also love the idea of using AI and agents to go out and do scans and identify files, and prompt the case team: hey, is this something that's relevant? Is this something you care about? Why do you care about it? What issue does it check the box for? And then to follow up. There's so much manual follow-up today in preservation, because it is an ongoing, continuous, substantive legal duty. You have to switch gears at some point and start thinking about other parts of your case, and you've got to keep going back and following up with your custodians. And then if the case drags on, what's come online? What new sources of data should you be preserving and collecting from?
Fahad Diwan (19:02)
Hmm.
Jenny Hamilton (19:06)
And then the case themes change, the complaints get amended, new claims are added, and you have to start all over again. So there's so much potential for AI to make this less of a stressful event and less of a manual event.
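To make the kind of ongoing follow-up Jenny describes a little more concrete, here is a minimal sketch in Python. The Custodian and DataSource records, the 90-day reminder cadence, and the task wording are all hypothetical; this illustrates the idea, not any particular product's API or workflow.

```python
from dataclasses import dataclass, field
from datetime import date, timedelta
from typing import Optional

# Hypothetical record types; real matter and custodian data would come
# from a legal hold or e-discovery system.
@dataclass
class DataSource:
    name: str
    preserved: bool = False

@dataclass
class Custodian:
    name: str
    last_acknowledged: Optional[date] = None
    sources: list = field(default_factory=list)

REMINDER_INTERVAL = timedelta(days=90)  # assumed re-notice cadence

def preservation_followups(custodians, today):
    """Flag custodians who need a hold reminder and data sources that
    are in scope but not yet preserved."""
    tasks = []
    for c in custodians:
        if c.last_acknowledged is None or today - c.last_acknowledged > REMINDER_INTERVAL:
            tasks.append(f"Send hold reminder to {c.name}")
        for s in c.sources:
            if not s.preserved:
                tasks.append(f"Preserve new source '{s.name}' for {c.name}")
    return tasks

if __name__ == "__main__":
    custodians = [
        Custodian("A. Sample", date(2025, 6, 1),
                  [DataSource("Copilot meeting transcripts")]),
        Custodian("B. Example"),
    ]
    for task in preservation_followups(custodians, date(2026, 1, 15)):
        print(task)
```

In practice the interesting part is what generates and updates these records as the case evolves; the loop above only shows how codified rules turn an ongoing duty into a repeatable check rather than a memory exercise.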
Fahad Diwan (19:22)
Yeah.
I would love for it to be less stressful and less manual. Now, keeping on with the predictions, Jenny: do you think AI governance will become as common and as expected as something that's very commonplace in privacy and compliance now, like, let's say, data mapping? And what do you think good AI governance looks like from your perspective?
Jenny Hamilton (19:27)
Hahaha
So data mapping to me is like bell-bottom jeans, you know; it just keeps coming back. It became fashionable in the early 2000s when all of a sudden the evidence shifted from paper to primarily electronic, and it was like, okay, now we have a duty to know what the sources are and how to preserve them. Super, super fashionable,
Fahad Diwan (19:59)
Yeah.
Jenny Hamilton (20:17)
but not always practical. So eventually people try to find other ways to accomplish the same thing with an easier lift. And when it comes to AI governance, it's different. It's a different animal from data mapping, but I do think that in 2025 we have probably already eclipsed the ubiquitous nature of data mapping as a tool.
Fahad Diwan (20:19)
Hmm.
Jenny Hamilton (20:47)
AI governance has been, I think, more popular and more approachable than data mapping exercises. It affects every organization, and not just the privacy team or the discovery team; everybody has a general understanding of the importance of AI governance beyond those niche areas. So I think it's taken on a life of its own as a priority in every organization, no matter how small or how big you are. I listen to my colleagues at bigger companies talk about it, and it's just one of those things that's now become table stakes. This is what good governance is at its core. And we've learned a lot from privacy compliance and e-discovery compliance in terms of what good governance is going to look like
Fahad Diwan (21:22)
Mm.
Jenny Hamilton (21:41)
when it comes to AI. And it's the same principles; you'll recognize them, Fahad, from the privacy world: it's about transparency, it's about, call it visibility, and it's about accountability. Do we know what AI tools we're using as an org? Do we know what the boundaries are, and do employees understand what the right boundaries are for compliant use? And what is the business going to do, when it comes to accountability, when a boundary is crossed? So we've drawn from all of these different areas of practice and compliance to come to a place of pretty foundational understanding about how to build an AI governance program and what good governance is going to look like.
Fahad Diwan (22:33)
Right. It's almost as if we're building on the shoulders of giants, right? We've been doing the work and developing best practices for governance, and those best practices can be applied to AI governance. And I particularly loved your point about how AI governance will become table stakes; I agree with it wholeheartedly. Bezos said something that I found very insightful: that in the future, AI will be kind of like electricity is today. It'll be embedded everywhere. And I think that is true. I think it's going to provide so much value to so many people in so many different ways that it literally will be like electricity; it'll be everywhere. And so we have to use it responsibly and judiciously. Now, Justin, I haven't forgotten about you, our forensics luminary.
Jenny Hamilton (23:04)
Hmm.
Fahad Diwan (23:29)
Now, one thing people are saying is that forensics and IT are going to merge, or continue to merge. Do you agree? How do you see the relationship between digital forensics and IT evolving over the next year?
Justin Tolman (23:29)
That kind of depends on the size of your company, because in smaller companies that's already the case: they've merged, or have always been the same team. One thing we see coming together no matter the company size, though, is the need for defensible collection. Forensics has been that; that's its whole existence. But as forensic practices and tenets make their way more into mainstream, everyday IT departments, and I'm not putting them down, I just don't know what other word to use there, you're seeing a need for the ability to collect endpoint data, user data, that sort of thing from computers rapidly for long-term preservation: whether someone is on a legal hold, whether they've just left and there's a policy to hold their data in case anything comes up afterwards, or whether they were dismissed
Fahad Diwan (24:12)
Mm-hmm.
Justin Tolman (24:35)
and you need to collect that data. There's always this need to defensibly collect and preserve forensic data, which is a little bit different than in other areas of the company, because it typically goes deeper and involves a bit more data. But that's where you're seeing the main integration: in collection and preservation. Analysis is still a pretty much separate field. IT really doesn't have much need to do a lot of deep digging; if they need to, they'll hire a forensic service provider. But collection and preservation from the various sources, whether it's an endpoint, a cloud source, or a mobile device, those situations where they need to retain data for future litigation, that's where you're going to see the merging at all levels of the industry.
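One small, concrete piece of what makes a collection "defensible" in the forensic sense is recording a cryptographic hash of every file at the moment it is collected, so the copy can later be verified against the original. The sketch below assumes local files and SHA-256; it is an illustration of that general practice, not any specific forensic tool's workflow.

```python
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

def collect_with_hashes(paths):
    """Build a manifest recording a SHA-256 hash and a UTC timestamp for each
    collected file, so copies can later be verified against the originals."""
    manifest = []
    for p in paths:
        digest = hashlib.sha256(p.read_bytes()).hexdigest()
        manifest.append({
            "file": str(p),
            "sha256": digest,
            "collected_at": datetime.now(timezone.utc).isoformat(),
        })
    return manifest

if __name__ == "__main__":
    files = sorted(Path(".").glob("*.txt"))  # hypothetical collection scope
    print(json.dumps(collect_with_hashes(files), indent=2))
```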
Fahad Diwan (25:22)
Fascinating. Well, the next prediction is one that makes me very happy, and of course I wholeheartedly agree with it. Anyone who knows me or has attended any of my talks in the past couple of years will know I'm all about data deletion. And so one prediction we have is that data deletion will become more central to an organization's risk management strategy. Now, it is challenging to delete data. So Jenny, my question to you is two-part. First, do you agree with that prediction? Is 2026 really the year of data deletion, or is it just something that I tell myself to help me sleep at night? And second, what makes defensible deletion so hard for large enterprises, and what needs to change?
Jenny Hamilton (26:01)
I hope 2026 is the year that companies prioritize the need to delete redundant, obsolete, and trivial data, or data rot, and make a run at it, at least from a programmatic point of view. Organizations have been testing the waters with different manual processes to delete data, particularly in the heavily regulated industries where there are cybersecurity rules that require a data deletion program. Maybe that's not widely understood, but they're at least taking those steps toward making it more of a programmatic, operational process. But the reason it's so hard to do unless you absolutely have to, like you have a regulator breathing down your neck, is that it's still a very manual process. And in many organizations it requires someone to sign off. So everybody agrees we need to delete data, but nobody wants to push the button. Companies need a way, they're looking for a way, to remove that element of fear
so that it can proceed. In my book, that's gonna require clear rules, automation of the rules, and scalability, right? And AI to me is the vehicle that's gonna get us there, but only after the humans can define the rules. That's always the tricky part.
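As a rough illustration of what "clear rules, automation of the rules, and scalability" could look like once humans have defined the rules, here is a minimal Python sketch. The record fields, the retention periods, and the legal hold flag are all assumed for illustration; this is not a recommended retention schedule or any product's API.

```python
from dataclasses import dataclass
from datetime import date, timedelta

# Illustrative record metadata; a real program would pull this from
# data inventory and records management systems.
@dataclass
class Record:
    path: str
    category: str          # e.g. "meeting_transcript", "contract"
    last_modified: date
    on_legal_hold: bool

# Human-defined retention rules (assumed values, not legal advice).
RETENTION = {
    "meeting_transcript": timedelta(days=365),
    "chat_log": timedelta(days=730),
    "contract": timedelta(days=365 * 7),
}

def eligible_for_deletion(record, today):
    """A record is deletable only if a rule exists for its category,
    its retention period has lapsed, and it is not under legal hold."""
    rule = RETENTION.get(record.category)
    if rule is None or record.on_legal_hold:
        return False
    return today - record.last_modified > rule

if __name__ == "__main__":
    r = Record("share/meetings/2024-q1.txt", "meeting_transcript",
               date(2024, 3, 1), on_legal_hold=False)
    print(eligible_for_deletion(r, date(2026, 1, 1)))  # True: past retention, no hold
```

The point of the sketch is the order of the checks: the legal hold exclusion and the human-approved retention table come first, and only then does automation push the button.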
Fahad Diwan (27:49)
Fascinating. So AI can get us there, but humans need to define the rules. You know, one thing you would often say to me in our meetings, Jenny, is that if we can just surface the risky, unnecessary data to people in organizations, they'll take action, because it'll make the invisible visible. But surfacing those insights is already hard. It requires a lot of software solutions and integrations, which of course we've automated here at Exterro. I think AI will make it even easier, right? With AI and the advent of agents, companies will be able to simply ask, what unnecessary data do I have? And once that becomes visible, it'll be hard to ignore, because you'll have that risk staring you in the face. So I love your points here, and I love that point you would often make in meetings about making the invisible visible. Now I want to ask both you and Justin one last question for our listeners. If you're advising, you know, a general counsel, a chief privacy officer, or a head of forensics, what is the single highest-value move they should make before 2026? Jenny, first with you, and then Justin.
Jenny Hamilton (29:07)
So I don't know if we've talked about this before, about how to prioritize work in a legal function, but the way I talk about it is that there are three columns. There's a must column; there's a should column, as in, if we're going to be a world-class legal department, then we should do this; and then there's the could column, we could do this, which tends to be the more strategic, higher-value work. But in this case, for the single highest-value move, I think organizations have got to look at what AI does well and how to leverage it. And that's going to start not in the strategic column necessarily, but in the must-do column. What are the must-do things of the legal and compliance function in any organization? And I'm going to tell you what my answer is: it's preservation. If that's the case, we've already talked about how you can use AI to increase the compliance, automate it, and scale it. The other reason I focus on preservation is that this is something every org must do to some extent, and it's a huge burden to any organization if you have to do it routinely, in terms of time, money, and risk, and it's almost impossible to outsource. There are a lot of things you could do yourself or outsource; you should do them, but you could also outsource them. Preservation isn't like that. If you have a litigation portfolio, this is the single greatest thing you can do to enable and empower your employees in that department to move to more high-value work and reduce cost, burden, and risk. Everybody wins.
Fahad Diwan (31:02)
Everybody wins. I love that. Justin, what about you?
Justin Tolman (31:05)
For everyone I talk to: kill the data rot. It's similar to forest fires or brush fires. The longer it's been between fires, the worse the next fire is going to be, because there's a bunch of dead junk piled up that's going to burn like crazy. And it's the same with the data at organizations. I agree with Jenny: as organizations embrace AI to speed up workflows,
Fahad Diwan (31:17)
Hmm.
Yeah.
Justin Tolman (31:31)
that simultaneously generates new data, and you don't want to be piling that data on old, dead data. You want to be using clean data, the data you need, the data that's fresh, the data that helps with the business day to day. And if it's not that, don't keep it. This is also cost saving. We talked about that fear of getting rid of it, but think about when you have to hire an investigative team or a legal team to come in and figure out what went wrong and you've got petabytes upon petabytes of years of data. That's a big bill, because you're paying hourly rates for some group of people to go through it. The less data you have, the less you have to protect, the less you have to investigate, et cetera. So kill the data rot.
Fahad Diwan (32:06)
Mmm.
Kill the data rot; less is more. I love that, Justin. Great insights as always, Jenny and Justin. Thank you both. And thanks so much to our listeners. We hope we gave you some great insights about the key trends of 2025 and how to prepare for 2026. If there's one theme that came through clearly in our discussion today, it's this: 2026 will reward organizations that embrace automation, strengthen governance, and build tighter partnerships across legal, privacy, forensics, and IT. I'm Fahad Diwan, and my co-hosts are Jenny Hamilton and Justin Tolman. Thanks everyone so much for spending time with us, and we'll catch you at our next Data Xposure episode.