
5 Practical Lessons for Navigating AI Risk from the Data Xposure Podcast

October 1, 2025

Artificial intelligence is everywhere in business today—but so are the risks. In a recent Data Xposure episode, host Fahad Diwan spoke with Dr. Stephen Fusco, Data Protection Officer and Senior Counsel at Danone, to cut through the hype and focus on what data risk leaders actually need to know about assessing and governing AI.

Data Xposure host Fahad Diwan speaks with guest Stephen Fusco.

Here are five key lessons from their conversation:

1. Don’t Treat AI as a Black Box

“AI” can mean very different things: large language models like ChatGPT, predictive hiring tools, or algorithmic decision-making systems. Each carries a distinct risk profile. Before you can assess risk, you need to clarify which kind of AI you’re dealing with—and push vendors to explain what their tools actually do under the hood.

2. Watch Out for Predictive Pitfalls

Predictive models raise particularly thorny issues:

  • Antitrust risks when pricing models rely on competitor data.
  • Discrimination risks when algorithms influence hiring or compensation.

Without transparency, organizations can’t verify whether these systems are operating ethically or legally.

3. Push Vendors for Transparency

Vendors often resist disclosing how their models work—but you have options:

  • Evaluate multiple vendors so you’re not locked in.
  • Use NDAs to enable deeper disclosures.
  • Point to regulations—GDPR in Europe, or new state laws in the U.S.—that mandate transparency.

The bottom line: don’t settle for “trust us.”

4. Build Your AI Inventory

You can’t govern what you don’t know exists. Start by mapping which vendors and departments are using AI today. Then go beyond assumptions—ask HR, marketing, and operations leaders directly. You’ll often discover uses of AI that weren’t on your radar.

5. Treat AI Risk as a Cross-Functional Challenge

AI governance isn’t just a legal or compliance issue—it touches HR, marketing, IT, and beyond. Emerging rules, like California’s automated decision-making regulations, require collaboration across functions. Strong programs combine:

  • People who ask the hard questions.
  • Processes for impact assessments and bias testing.
  • Technology that enforces guardrails.

Final Takeaway

As Fusco put it, risk leaders need to weigh both business needs and regulatory realities. Smaller startups may tolerate more uncertainty; larger enterprises often cannot. But across the board, one principle applies: AI risk management must be transparent, collaborative, and integrated.

Want to hear the whole conversation? Check out the podcast here.
