The Conference Board has published the results of its latest report, which reveals that 72% of S&P 500 companies, roughly three-quarters, now flag AI as a material risk in their public disclosures, a staggering rise from just 12% in 2023.
Going by the available data, reputational risk emerged as the most prevalent category over the course of this survey, cited by around 38% of companies. Cybersecurity risks took the second spot, reported by over 20% of all surveyed companies.
On the latter front, companies at large pointed to the growing footprint of AI-related security risks.
“We’re seeing a clear theme emerging across disclosures: Companies are worried about AI’s impact on reputation, security, and compliance. The task for business leaders is to integrate AI into governance with the same rigor as finance and operations, while communicating clearly to maintain stakeholder confidence,” said Andrew Jones, author of the report and Principal Researcher at The Conference Board.
Taking the study a level deeper: we noted that reputational risks make up the most common category, but what we haven't touched upon so far is that they are often triggered by AI projects failing to deliver promised outcomes. The risk worsens when those projects are poorly integrated or perceived as ineffective. In total, 45 of the surveyed companies reported this as their primary concern.
A contingent of 42 companies also claimed that missteps like errors, inappropriate responses, or service breakdowns can become highly damaging, particularly for consumer-oriented brands.
Another 24 companies deemed the mishandling of sensitive information a reputational hazard, their concern being largely that any breach here can very well escalate into regulatory action or public backlash.
“Reputational risk is proving to be the most immediate and visible threat from AI adoption. One lapse—an unsafe output, a biased decision, or a failed rollout—can spread rapidly, driving customer backlash, investor skepticism, and regulatory scrutiny in ways that traditional failures rarely do,” said Brian Campbell, Leader of The Conference Board Governance & Sustainability Center.
Next up, we must expand upon cybersecurity risks. Here, a substantial chunk of companies revealed that AI widens their risk surface through new data flows, tools, and systems. Around 40 companies described AI as a force multiplier, given its ability to increase the scale, sophistication, and unpredictability of cyberattacks.
A separate group of 18 companies blamed over-reliance on cloud providers, SaaS platforms, and external partners.
In a related vein, 17 companies revealed that data breaches remain among their biggest concerns, noting how AI-driven attacks can expose sensitive customer and business data.
The Conference Board's report also discovered a strong presence of legal and regulatory risks. For better understanding, a total of 41 companies cited difficulty in planning AI deployments amid fragmented and shifting rules. Alongside that, 12 of the surveyed companies revealed a concern that new AI-specific rules will bring heightened compliance obligations and potential enforcement actions.
In case the situation wasn't bad enough, court filings continue to highlight uncertainty over how courts will treat IP claims tied to AI training data, and over who bears liability when autonomous AI systems cause harm.
Among other things, the study sheds light upon a set of emerging risks. For instance, over 24 companies highlighted risks spanning copyright disputes, trade-secret theft, and contested use of third-party data for model training.
A separate group of 13 companies deemed privacy their biggest concern, focusing in particular on the exposure of sensitive data under the General Data Protection Regulation, the Health Insurance Portability and Accountability Act, and California privacy laws (CCPA/CPRA).
Rounding out the highlights is the risk attached to the technology's adoption. Around 8 companies put their finger on execution risks, covering the costs of new platforms, uncertain scalability, and the possibility of under-delivering on promised returns.