Are you treating AI like a helpful assistant—or a ticking time bomb?
AI is making our lives easier, no question. However, when introduced into the workplace without clear oversight, it can pose serious risks to your business—from regulatory exposure to cybersecurity threats.
AI Risk Management Starts with Clear Policies and Communication
Drawbridge’s Simon Eyre and Louis Cordone recently spoke with Marshall Terry of Haidar Capital Management LLC about this very issue. Their core insight? Managing AI risk requires clear internal policies and open lines of communication throughout your business ecosystem, including with third-party vendors.
Without a structured approach, AI usage can quickly outpace your organization’s ability to control it.
Why AI Is on the Regulatory Radar in 2025
AI has caught the eye of regulators across industries. The SEC’s 2025 Examination Priorities explicitly identify artificial intelligence as an emerging technology and alternative data source that must be assessed and understood.
But what makes AI so complex from a compliance standpoint?
Like cybersecurity risk, AI risk is multi-dimensional. It cuts across numerous areas of a business and involves external vendors, service providers, and technologies outside your direct control. Regulators are grappling with how to oversee the full lifecycle of AI: its data inputs, its decision-making logic, and its use in real-time business operations.
To stay compliant, firms will need to develop and document custom AI policies and procedures aligned with their unique use cases. Being able to articulate your AI governance strategy clearly can help you avoid deficiency findings during an exam or audit.
Why Co-Sourcing Is Key to AI Governance
At Haidar Capital, Marshall Terry takes a co-sourcing approach to AI and cybersecurity governance. Unlike traditional outsourcing, co-sourcing enables internal teams to collaborate directly with trusted external partners. The result? Better visibility into risks and a stronger culture of shared responsibility.
Co-sourcing also supports education. As Marshall explains, building a “stable of experts”—a network of trusted advisors with deep expertise in AI risk and cybersecurity—can help firms:
- Strengthen internal knowledge
- Build AI literacy across the organization
- Make more informed decisions about AI usage and policy development
- Demonstrate accountability to regulators
AI Risk Management Must Be an Ongoing Effort
AI evolves faster than any other business tool today. So can your firm afford to review AI risks and policies only once a year?
The answer is no.
Quarterly AI policy reviews and ongoing training are quickly becoming best practices. As regulators continue to evolve their stance on AI, firms must adopt a dynamic approach to AI governance—one that adapts to new technologies, emerging threats, and regulatory updates.
AI governance isn’t a “set it and forget it” exercise.
As AI capabilities evolve, so too must your approach to managing them. Firms that build agility into their governance, through frequent policy reviews, cross-functional collaboration, and continuous education, will be best positioned to harness AI's benefits while staying ahead of its risks. In a landscape where regulators are watching and technology is shifting by the minute, proactive AI risk management is no longer optional; it is a core business responsibility.
How are you operationalizing AI at your firm?