Data Xposure Podcast

California’s New Data Rules: Automated Decisions, Risk Assessments, and Cybersecurity Audits

Learn about California's new data rules in this episode of Data Xposure, with host Fahad Diwan and Dr. Stephen Fusco, Data Privacy Officer and Senior Counsel at Danone North America.


Host: Fahad Diwan

Guest: Stephen Fusco, Data Privacy Officer & Senior Counsel at Danone North America

California has just finalized sweeping new privacy regulations—and they cut straight into the daily operations of legal, privacy, and security teams. From automated decision-making tools that now require notice, opt-outs, and appeal rights, to mandatory risk assessments and independent cybersecurity audits, these rules are reshaping how organizations must govern data and technology.

In this episode of Data Xposure – The Podcast for Data Risk Leaders, host Fahad Diwan sits down with Stephen F. Fusco, Data Privacy Officer & Senior Counsel at Danone North America, to unpack what these rules really mean in practice. They explore where businesses are most likely to underestimate their impact, how to stand up “right-sized” compliance programs without stalling innovation, and the concrete steps teams should take this quarter to stay ahead of California’s mandate.

For leaders navigating the intersection of value and vulnerability in data, this conversation offers both urgent clarity and practical next moves.

Subscribe on Your Preferred Podcast Platform

Apple Podcasts | Spotify | YouTube

Episode Transcript

Fahad Diwan (00:04):

Welcome to Data Xposure, our podcast for data risk leaders like you. In each episode, we explore how to navigate the line between extracting value from your data and effectively mitigating data risk. Today we're going to be talking about how to navigate AI risk effectively. I'm Fahad Diwan, and I'm joined by Dr. Stephen Fusco, Data Privacy Officer and Senior Counsel at Danone. Let's get into it. So can I call you Stephen? Should I call you Dr. Fusco? What do you prefer?

Stephen Fusco (00:34):

Oh, Stephen is just fine.

Fahad Diwan (00:37):

So AI means a lot of different things to a lot of different people. In your context, within the context of your role, how do you define AI?

Stephen Fusco (00:45):

It really depends on what you're talking about. And by the way, just so you know, I'm speaking on my own behalf today, not on behalf of Danone, but

Fahad Diwan (00:52):

Classic lawyer.

Stephen Fusco (00:53):

Exactly. So it really depends on what you're talking about. I think what's most important is to ask the vendor, or whoever you're working with, what do you mean by AI? Because a lot of folks will tell you they're using AI and that it's doing things like making predictions with algorithms, when it's really just aggregating data and putting out reports for you. And so I think it's important to understand what exactly the technology is.

And I think that's one of the challenges people face: when someone says, oh, this is an AI tool, they don't necessarily understand, well, what exactly is it doing? It's kind of a black box to them.

So I think it's important for people to understand that particular issue before they even begin to look at what the risks are and identify them. You could be talking about large language models, or you could be talking about things that are much more sophisticated, that are algorithmic and are making predictions for you, or, as we get to the place where AI becomes cognitive in some ways, sort of its own being. And so you need to understand, well, what exactly do we mean by AI? Because when you think about it, the risk profile starts to look very different for a company.

Fahad Diwan (02:05):

And so knowing that there are these different definitions, some more sophisticated than others, where do you encounter AI the most in your role? What does it look like, and what are the most common risks?

Stephen Fusco (02:19):

Yeah, probably the two most common right now are, first, the sort of aggregating of data and producing results for you. I think of that as ChatGPT, Gemini, things like that. They are really just taking large amounts of data, aggregating them, and giving you some outputs, summarizing, simplifying, maybe doing a little bit of predicting for you. So that's one area where I often see it.

And then the second area is the predictive models. And you can see those across industries. You can see it in the employment sector, helping both with recruiting and with making employment decisions. You can see it with vendors in terms of sales numbers, things like that. You can see it in marketing. So some of it is predictive, and some of it is just that bigger job of taking all of the information and aggregating it together.

Fahad Diwan (03:13):

And so with respect to predictive ones, is there an added level of risk, for example, if they're making automated decisions based on the predictions?

Stephen Fusco (03:23):

Oh yeah, absolutely. In certain states you're seeing laws come into effect related to that. But I also think it's important to know that, out of the box, even if there weren't laws on the books, you're going to want to understand what that technology is doing so that you can do a risk analysis. So let's use an example: you're using a sales vendor who is going to help you do predictive modeling in terms of pricing. There are tons of antitrust implications involved in that.

Because think about it: first of all, what data are they feeding in to train their model? If they're using your competitors' data to do that, that could raise some antitrust implications, because you're relying on confidential information that a competitor has, and it's helping you make predictions in terms of pricing to maybe give you a market advantage. If you're looking at making decisions in an employment context, you have to worry about discrimination. And so with that then comes, okay, well, what are the guardrails?

(04:29):

And I think for me, one of the biggest challenges is that a lot of these AI vendors are unwilling to disclose their technology. They're not willing to show you how the sauce is made. And so you can't effectively understand, well, what's going into their algorithm to make that decision? And then, what risks do I have to mitigate against?

And what I'm seeing both globally, with the GDPR, and in the US is a heightened concern about the ethics of AI. So if you can't test the algorithm to see whether it's being discriminatory, whether it's doing things that many people would say skirt the line of ethics, then you're in a difficult position. And so you really, really need to work closely with that vendor to make sure you understand how that technology works, and to keep pushing them to provide that information. Because if you don't understand the technology and you just buy it and use it, you're never going to understand the risks.

Fahad Diwan (05:32):

That's a very insightful point. I think having a transparent view of how the model works is important both for the company in managing its own risk and because there are now requirements for companies to be able to disclose to individuals how the AI model works. So let's ground it in reality. Let's say you have a challenging vendor that's not providing sufficient transparency, and you know you need that transparency not only for your organization's sake, but also to meet your organization's obligations to others. What practical tips could you offer our listeners to get the information they need to meet their obligations?

Stephen Fusco (06:10):

Yeah, that's a great question. One thing I would always suggest is that you have multiple vendors at your disposal. When you're sourcing a vendor, don't just look at one; look at several, so that if you do find a particular vendor is being difficult and not giving you what you need, you have other vendors you can go to who might be more cooperative and willing to share more.

Second, think about your NDAs. If there's a way they can disclose that information to you and you can guarantee its confidentiality, they can open that black box up for you. Third, fortunately, and some may debate this with me, some states are now moving toward requiring that vendors provide this information. California has considered legislation related to actually requiring that disclosure. In Europe under the GDPR there is a transparency requirement; in the US, not so much, but some states are choosing to enforce a transparency requirement for AI technology. So that's another practical point.

And then finally, the joy of my job is that I provide a risk assessment to the organization, and there's always a risk-benefit analysis. You may never know exactly what that technology does at its core, so you have to ask yourself: do I know enough that I can mitigate risk sufficiently to move forward with that technology, or do I feel I just don't know enough about it to adequately mitigate risk for the organization to move forward?

And depending on the size of the organization, if you're a young startup and you don't have the FTE and the headcount to be able to do those things and you need the technology, you might say, I'm willing to take on more risk because of the financial burdens that we're facing. Whereas if you're a larger company, you may say, no, that risk is too big for us. We don't want to take on a risk like that until we know more about the technology.

Fahad Diwan (08:17):

Fascinating. So it's more of a holistic picture you take of the business and its operating environment when you assess risk. And are there some risks that really raise your flags, where the technology is doing certain things and you say, hey, I need to be even more careful about this, I need a greater level of transparency because these elements exist?

Stephen Fusco (08:38):

Yeah, I think there are two things. One is, what is the data that's being fed into the model?

Am I talking about confidential information? Am I talking about sensitive health information, personally identifiable information? If so, I'm going to have a very different way of approaching it, because I'm going to want to know a lot about what they're doing with it. Ultimately, I'm the controller of that information, and I'm passing it on to somebody else to process on my behalf.

The second thing I always want to think about, the one that raises a red flag for me, is when they're not willing to tell me anything about their technology, and I know they're going to be using that information to train their model. So you should always be asking: are they using the information to train their model? And some might simply say, yep, we're going to use it. We have the right to use it.

(09:28):

We're going to train our model with it. Then you need to ask yourself: you're basically paying a vendor for a service, and they're benefiting from the use of your data to improve their own service. That's something companies need to think about. Again, the return on investment may be so big that you say, I'm cool with that, or the company may say, I'm not willing to give you that information for that purpose. So those are probably the two red flags that I often think about.

Fahad Diwan (09:56):

Insightful. Two key red flags. And you've touched upon this a few times already, but what are some of the regulations that data risk leaders like you should keep in mind when building an AI governance program, or a program to manage AI risk?

Stephen Fusco (10:14):

That depends a lot on the size of your company, so I'll give you two different answers. If you are a company that has a footprint outside the US, it's going to look very different, especially if you're governed by the GDPR, because the GDPR is very far ahead in terms of regulation. For folks who might not be familiar with it, it's the set of data privacy regulations that apply in the EU, and there are a lot of transparency, ethics, and other types of requirements. So if you know you're going to be processing data that exists in Europe, you're going to need to look at things very differently.

In the United States, there are some states, so Colorado, where we are right now, for example, has some AI regulations on the books. California has some AI regulations, and some other states have some related to employment issues. So you need to understand: are there specific states with AI regulations that require a heightened level of data impact assessment? If the technology is involved in making substantive decisions that could adversely affect an individual's life, you might have to do a higher-level data impact assessment than you would otherwise.

Fahad Diwan (11:29):

And so in addition to data impact assessments, are there other requirements that are across the board in these different US-based AI regulations?

Stephen Fusco (11:38):

Yeah, so some states require that you do an independent analysis of discriminatory effects in the employment context. It really is still a very young area, so I would say there's not a lot that we know yet. There are some specifics in the weeds that you should look at state by state and see if they apply.

Fahad Diwan (11:58):

Okay. Okay, great. Thank you, Stephen. So people are listening to this podcast, and we know that California has enacted new rules to govern automated decision-making technologies, which is a specific type of AI that uses predictive models to make decisions about customers, vendors, individuals, and so on. To the best of my understanding, this may come into effect October 1st. But that's just one regulation; you've touched upon many others.

What are two to three high-impact things that data risk professionals can do over the next 90 days to address some of the major requirements, mitigate some of the major risks, or build some of the most impactful people, processes, and technologies, let's say?

Stephen Fusco (12:48):

Yeah, that's a great question. I'm probably going to start foundationally, which doesn't necessarily get you to the 90-day point you're talking about, but as a data privacy officer, or someone in charge of data privacy within an organization, you need to have an inventory of your vendors and who's using AI. As a threshold item, you need to understand: how are we using AI to make decisions on behalf of the company? If you don't even have an inventory of your vendors and who's using AI, you can't go on to the step of assessing whether the California regulations that are coming into effect will apply.

The second thing folks need to do is actually look at the specific requirements and the specific areas that are covered, and reach out to their various departments. Reach out to your HR department and say, are we using a vendor that uses AI technology to screen applicants and make decisions? If so, that's going to raise a flag for you. Are we using AI to help us make decisions about compensation? That might raise a red flag as well. So I think you just need to go through those specific requirements and then reach out to your partners and ask them the specific questions.

Because I think where a lot of data professionals sometimes get stuck is they have it in their mind, well, I've done this inventory and folks have told me X, Y, and Z, and that's good enough. I always reach out to my cross-functional partners, because I will invariably learn during one of those question sessions that they're doing something different, whether through a change of leadership or a change within the organization, somebody new was brought on and they changed the way things are done. You always learn something new, and you should always verify those things. So I would say that, at a minimum, you should go through the specific requirements and ask your partners: are we doing those things within the organization, using technology that does that?

Fahad Diwan (14:47):

Right. And like you said, where are we using AI? Let's identify that and learn how we're using it so we can go from there. And I love the point about cross-functional stakeholders and working closely with them. My understanding is we can't take a siloed approach to managing AI risk. AI is being used across the organization, and we need to work in an integrated way across our different stakeholders and build people, processes, and technologies that are integrated as well.

Thank you so much, Stephen. That was great; you had so many wonderful insights. I hope it was as engaging and insightful for our audience as it was for me. I'm sure it was. Thank you so much.

To our listeners: managing AI risk requires a comprehensive, integrated strategy. Exterro is the only platform on the market that provides an integrated and comprehensive solution for privacy, security, and legal professionals.

If you enjoyed this podcast, please subscribe and share with your network. Again, I'm Fahad Diwan. Thank you so much for listening, and thank you so much, Stephen, for your time today.