
Artificial intelligence is everywhere in business today—but so are the risks. In a recent Data Xposure episode, host Fahad Diwan spoke with Dr. Stephen Fusco, Data Protection Officer and Senior Counsel at Danone, to cut through the hype and focus on what data risk leaders actually need to know.

Here are five key lessons from their conversation:

“AI” can mean very different things: large language models like ChatGPT, predictive hiring tools, or algorithmic decision-making systems. Each has a distinct risk profile. Before assessing risks, you need to clarify what kind of AI you’re dealing with—and push vendors to explain what their tools actually do.

Predictive models raise particularly thorny issues. Without transparency into how they work, organizations can’t verify whether these systems are operating ethically or legally.

Vendors often resist disclosing how their models work, but you have options. The bottom line: don’t settle for “trust us.”

You can’t govern what you don’t know. Start by mapping which vendors and departments are using AI today. Then, go beyond assumptions—ask HR, marketing, and operations leaders directly. You’ll often discover new uses of AI that weren’t on your radar.

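As an illustration only (the episode doesn’t prescribe any tooling), the inventory described above could start as a simple structured record per AI use, with a flag for whether the owning department has confirmed it directly. All names below are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class AIUseRecord:
    tool: str        # hypothetical tool name
    vendor: str      # hypothetical vendor name
    department: str
    ai_type: str     # e.g. "LLM", "predictive model", "automated decision-making"
    confirmed_with_owner: bool  # has the department leader confirmed this use?

inventory = [
    AIUseRecord("resume screener", "ExampleVendor A", "HR", "predictive model", False),
    AIUseRecord("chat assistant", "ExampleVendor B", "Marketing", "LLM", True),
]

# Unconfirmed entries become the follow-up list for direct conversations
# with HR, marketing, and operations leaders.
to_confirm = [r for r in inventory if not r.confirmed_with_owner]
```

Even a spreadsheet with these columns serves the same purpose; the point is to record each AI use explicitly rather than rely on assumptions.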
AI governance isn’t just a legal or compliance issue—it touches HR, marketing, IT, and beyond. Emerging rules, like California’s automated decision-making regulations, require collaboration across functions. Strong programs combine expertise from all of these areas.

As Fusco put it, risk leaders need to weigh both business needs and regulatory realities. Smaller startups may tolerate more uncertainty; larger enterprises often cannot. But across the board, one principle applies: AI risk management must be transparent, collaborative, and integrated.

Want to hear the whole conversation? Check out the podcast here.