
Embracing the Unstoppable: Navigating Generative AI in Application Security and GRC

By David Orban, Managing Advisor, Beyond Enterprises

Introduction

Generative Artificial Intelligence (GenAI) is taking the world of software by storm. ChatGPT, launched a year and a half ago, introduced hundreds of millions of people to its power. The companies developing the Large Language Models that underpin the latest generation of AI applications and development tools are achieving valuations in the tens of billions of dollars. The capabilities of these tools are increasing at an unprecedented speed, fueled by the triad of ever more powerful hardware, increasingly large amounts of data, and constantly improving algorithms. We have grown accustomed to the exponential increase of Moore’s law, doubling the power of our computers and smartphones every couple of years. What we are seeing now is a jolting super-exponential rate, where the doubling time of AI’s power is itself shortening!

There are calls to slow down the pace of development, and regulations around powerful AI systems are being deployed: by the US through an executive order, by the European Union through its recently approved AI Act, and even by China. Regardless of the additional compliance requirements these regulations impose, GenAI is here to stay.

As often happens with new technologies, GenAI represents both a set of new challenges for application security and an innovative way to address them!

The Challenges of Generative AI

On one hand, there are several reasons why the current generation of GenAI platforms can be a source of concern for application security.

The statistical nature of their output means that they cannot be exhaustively tested: the same interaction, repeated at a different moment, can generate a different outcome. This lack of predictability and consistency in generative AI outputs raises concerns about the reliability and safety of applications that incorporate these models. From a GRC perspective, this creates new challenges in ensuring compliance, managing risk, and maintaining the integrity and security of AI-powered applications.
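
To make this concrete, here is a minimal sketch, assuming access to an LLM API such as OpenAI’s chat completions endpoint (the model name and prompt are illustrative), showing how the very same request can produce different outputs on different runs:

```python
# Minimal sketch: the same prompt, sampled twice, can yield different outputs.
# Assumes the official `openai` Python package and an OPENAI_API_KEY set in
# the environment; the model name and prompt are illustrative.
from openai import OpenAI

client = OpenAI()

prompt = "Summarize the main risks of SQL injection in two sentences."

for attempt in range(2):
    response = client.chat.completions.create(
        model="gpt-4o",   # illustrative model name
        messages=[{"role": "user", "content": prompt}],
        temperature=1.0,  # sampling enabled: outputs vary from run to run
    )
    print(f"Run {attempt + 1}:\n{response.choices[0].message.content}\n")
```

Even at a temperature of 0, providers generally do not guarantee bit-identical outputs across runs or model updates, which is exactly why exhaustive testing of these systems is impractical.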

GenAI platforms lack the ability to recognize their own limitations, and will provide outputs that go beyond the boundaries of their applicability, what the industry calls “hallucinations”. These hallucinations can range from minor inaccuracies to completely fabricated information, which can have serious consequences when generative AI is used in critical applications. From a GRC standpoint, this highlights the need for robust governance frameworks, stringent testing and validation processes, and clear guidelines for the use of generative AI in application development and deployment.

The architectures employed make it hard, if not impossible, to explain how and why the systems arrived at a given output. It is challenging to trace the decision-making process of generative AI models and to understand the factors that influence their outputs. This lack of explainability raises concerns about compliance, liability, and the ability to identify and mitigate potential security vulnerabilities.

In an even more directed manner, GenAI can be used by bad actors to exploit human-centric vulnerabilities through deepfakes or social engineering, at an unprecedented scale and level of sophistication.

Generative AI to the rescue!

On the other hand, GenAI can become a powerful tool to improve application security, and the entire GRC process.

Models can be trained to analyze application code, identify potential vulnerabilities, and suggest, or ultimately implement automatically, secure coding practices, significantly reducing the risk of security breaches.
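
As an illustration, here is a minimal sketch of LLM-assisted code review, assuming the OpenAI chat completions API (the model name, the prompts, and the deliberately vulnerable snippet are all illustrative):

```python
# Minimal sketch of LLM-assisted code review: send a code snippet to a model
# with a security-focused system prompt and print its findings. Assumes the
# official `openai` package; model name and prompts are illustrative.
from openai import OpenAI

client = OpenAI()

SYSTEM_PROMPT = (
    "You are an application security reviewer. Identify potential "
    "vulnerabilities (e.g., injection, hardcoded secrets, unsafe "
    "deserialization) and suggest secure alternatives."
)

snippet = '''
def get_user(db, username):
    cur = db.cursor()
    # String formatting in SQL -- classic injection risk
    cur.execute("SELECT * FROM users WHERE name = '%s'" % username)
    return cur.fetchone()
'''

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model name
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": f"Review this code:\n{snippet}"},
    ],
)
print(response.choices[0].message.content)
```

In practice, such output should feed into, not replace, existing static analysis tooling and human review.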

GenAI can help organizations create comprehensive and dynamic threat models by analyzing vast amounts of data from various sources, identifying emerging threats, and predicting potential attack vectors, enabling proactive risk management.
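
A sketch of what this could look like in practice, again assuming the OpenAI API (the system description and prompt wording are illustrative):

```python
# Minimal sketch of LLM-assisted threat modeling: describe a system and ask
# the model for a STRIDE-style enumeration of threats. Assumes the `openai`
# package; model name, architecture description, and prompt are illustrative.
from openai import OpenAI

client = OpenAI()

architecture = (
    "A public REST API behind an NGINX reverse proxy, backed by a Postgres "
    "database; JWT-based auth; nightly batch jobs pull data from an S3 bucket."
)

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model name
    messages=[
        {
            "role": "system",
            "content": "You are a threat-modeling assistant. For the described "
                       "system, list plausible threats per STRIDE category, "
                       "each with an attack vector and a mitigation.",
        },
        {"role": "user", "content": architecture},
    ],
)
print(response.choices[0].message.content)
```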

Tools can be deployed to continuously monitor applications and infrastructure for compliance with relevant regulations and standards, generating real-time reports and alerts, and reducing the burden of manual compliance checks in areas that were previously not amenable to automation, given their preponderance of natural language.
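
For example, a compliance check over natural-language policy text could be sketched as follows, assuming the OpenAI API and its JSON output mode (the control, the policy excerpt, and the field names are illustrative):

```python
# Minimal sketch of automating a natural-language compliance check: ask the
# model whether a policy excerpt satisfies a control, returning structured
# JSON that downstream tooling can alert on. Assumes the `openai` package and
# its JSON mode; control text and field names are illustrative.
import json
from openai import OpenAI

client = OpenAI()

control = "Access to production systems must require multi-factor authentication."
policy_excerpt = (
    "Engineers may access production servers via SSH with their personal "
    "key pairs; a password is required for sudo."
)

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model name
    response_format={"type": "json_object"},  # ask for machine-readable output
    messages=[
        {
            "role": "system",
            "content": 'Return JSON: {"compliant": bool, "gap": str, "evidence": str}',
        },
        {"role": "user", "content": f"Control: {control}\nPolicy: {policy_excerpt}"},
    ],
)

result = json.loads(response.choices[0].message.content)
if not result["compliant"]:
    print("ALERT:", result["gap"])  # hook for real-time alerting
```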

What can you do?

The action items for the responsible adoption and deployment of GenAI become clear.

Adopt models specifically designed for security and compliance purposes, leveraging relevant data sources and domain expertise. This cannot mean developing and training new models from scratch, which is beyond the reach of most corporations. But it can include fine-tuning existing models, and certainly prompting them carefully as they are integrated into existing or novel applications.
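
As a sketch of the fine-tuning route, assuming OpenAI’s fine-tuning API (the file name, base model, and the single training example are illustrative placeholders; a real dataset would need many curated, organization-specific examples):

```python
# Minimal sketch of preparing fine-tuning data for a security-review model:
# write chat-formatted examples to a JSONL file, upload it, and submit a
# fine-tuning job. Assumes the `openai` package; file name, base model, and
# the example content are illustrative.
import json
from openai import OpenAI

examples = [
    {
        "messages": [
            {"role": "system", "content": "You are an application security reviewer."},
            {"role": "user", "content": "eval(request.args['expr'])"},
            {"role": "assistant", "content": "Dangerous: eval on user input enables "
                                             "arbitrary code execution. Parse and "
                                             "validate the expression instead."},
        ]
    },
    # ...more curated, organization-specific examples...
]

with open("security_reviews.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")

client = OpenAI()
training_file = client.files.create(file=open("security_reviews.jsonl", "rb"),
                                    purpose="fine-tune")
job = client.fine_tuning.jobs.create(training_file=training_file.id,
                                     model="gpt-3.5-turbo")  # illustrative base model
print(job.id)
```

Careful prompting, by contrast, requires no training at all: the security and compliance context is embedded in the system prompt, as in the review sketch above.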

It is crucial to create a collaborative environment in which AI experts, security professionals, and GRC teams can work together to develop and implement generative AI solutions aligned with the organization’s specific needs and requirements. Just this past March 28, 2024, the White House issued a memorandum to the heads of all executive departments and agencies, calling for the appointment of Chief AI Officers.

The governance frameworks and guidelines for the ethical and responsible use of technology must be updated to include GenAI in security and compliance contexts, ensuring transparency, accountability, and human oversight. Given the rapid development of the field, it is essential to continuously monitor and assess the performance and effectiveness of the tools, making necessary adjustments and improvements based on feedback and evolving security and compliance landscapes.

This can only happen in a culture of innovation and experimentation! Successful adoption of advanced technologies in general, and of GenAI in particular, requires encouraging exploration and experimentation while prioritizing security and compliance considerations.

As you read this, you can be sure of one thing: today is the day with the least AI in your organization and your applications. Day after day, it will only increase! And it is easy to check: run an anonymous survey asking your teams whether they use ChatGPT or similar tools. I am sure the results will be striking in documenting how pervasive GenAI already is.

Did I use GenAI in creating this article? Of course I did, but responsibly! What does that mean? Doing research, brainstorming, and refining questions and topics, but always double-checking, and making sure that I can stand behind each sentence. Working together with AI is part of our future, and part of responsible, secure applications and GRC processes, too.
