AuditBoard, the AI-powered global platform for connected risk that is transforming audit, risk, and compliance, has published the findings of its latest study, From blueprint to reality: Execute effective AI governance in a volatile landscape, which examines how global risk teams are navigating the growing prevalence of artificial intelligence (AI).
The study's central finding is that, while many companies have drafted AI policies, few have embedded AI governance into their organizations' operational fabric.
That gap matters because the rapid rise of AI has left modern organizations increasingly exposed to its underlying risks, and the distance between policy and practice breeds executional uncertainty, cultural fragmentation, and misaligned ownership.
“This report validates the critical need for a more integrated, operational approach to AI risk,” said Michael Rasmussen, CEO of GRC Report. “AuditBoard’s expertise in aligning audit, risk, and compliance functions makes them well-equipped to provide the framework and tools necessary for companies to move from policy creation to impactful AI governance.”
To document this reality, AuditBoard's study gathered the views of more than 400 GRC and audit professionals across the United States, Canada, Germany, and the United Kingdom.
The first finding is that overconfidence in security can itself become a risk: 92 percent of respondents said they are confident in their visibility into third-party AI use, yet only about two-thirds of organizations report conducting formal, AI-specific risk assessments for third-party models or vendors.
Such a gap leaves roughly one in three firms relying on external AI systems without a clear understanding of the risks they may pose.
The second finding is that current AI governance policies fall well short of their purpose: while 86 percent of respondents said their organization is aware of upcoming AI regulations, only 25 percent said they have a fully implemented AI governance program.
In other words, many organizations have policies in place or in development, but few have moved from generic documentation to disciplined execution.
A third finding is that the barriers to AI governance are more cultural than technical. Respondents identified the leading obstacles as lack of clear ownership (44 percent), insufficient internal expertise (39 percent), and resource constraints (34 percent); fewer than 15 percent said the main problem was a lack of tools.
AuditBoard also used the report to highlight a lineup of Chief Information Security Officers who are leading the way on AI governance and security practices, including Comcast's Noopur Davis, RiskImmune's Dr. Magda Chelly, Amazon's CJ Moses, and others.
Founded in 2014, AuditBoard has risen through the ranks by empowering customers to transform their audit, risk, and compliance management, and it is now trusted by more than 50 percent of the Fortune 500.
The company is also top-rated by customers on G2, Capterra, and Gartner Peer Insights, and was recently ranked by Deloitte, for the sixth year in a row, as one of the fastest-growing technology companies in North America.
“AI governance today is a test of execution, not awareness,” said Rich Marcus, Chief Information Security Officer at AuditBoard. “This report confirms that the most persistent AI governance challenges are clarity, ownership, and alignment. Organizations that treat governance as a core capability, not a compliance box-checking exercise, will be better positioned to manage risk, build trust, and respond to a rapidly evolving regulatory landscape.”