
Entrusting AI to Lead the Way on Your Governance Efforts

SAS, a leader in the AI and data space, has officially announced the launch of new trustworthy AI products that will improve AI governance and support model trust and transparency. As part of the update, SAS will bring to the fore model cards and new AI Governance Advisory services, which can be expected to help organizations navigate the turbulent AI landscape, mitigate risk, and pursue their individual AI goals more confidently.

To unpack the value proposition a little further, we begin with the model cards. These model cards, set to become available as a feature on the SAS® Viya® platform, will empower organizations to understand the new regulations sprouting up all over the AI landscape. Serving everyone from developers to board directors, the stated model cards will deliver on their promise by highlighting indicators like accuracy, fairness, and model drift. Alongside these indicators, SAS will provide comprehensive governance details, such as when the model was last modified, who contributed to it, and who is responsible for it, thus allowing organizations to address abnormal model performance internally. Furthermore, the product’s model usage section also addresses intended use, out-of-scope use cases, and limitations.
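
To make that structure concrete, here is a minimal sketch in Python of the kind of fields such a model card might gather in one place. The names here (ModelCard, fairness_score, needs_review, and so on) are illustrative assumptions for this article, not SAS's actual Viya schema.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import List

@dataclass
class ModelCard:
    # Governance details: who owns the model and when it last changed
    model_name: str
    responsible_owner: str
    contributors: List[str]
    last_modified: date

    # Health indicators surfaced to readers, from developers to board directors
    accuracy: float            # e.g. holdout accuracy at the last evaluation
    fairness_score: float      # e.g. a group-fairness metric in [0, 1]
    drift_detected: bool       # whether recent data diverges from training data

    # Model usage section
    intended_use: str
    out_of_scope_uses: List[str] = field(default_factory=list)
    limitations: List[str] = field(default_factory=list)

    def needs_review(self) -> bool:
        """Flag the card for internal review on abnormal model performance."""
        return self.drift_detected or self.accuracy < 0.8 or self.fairness_score < 0.8


card = ModelCard(
    model_name="credit-risk-scoring-v3",
    responsible_owner="risk-analytics-team",
    contributors=["a.analyst", "b.engineer"],
    last_modified=date(2024, 3, 1),
    accuracy=0.91,
    fairness_score=0.95,
    drift_detected=False,
    intended_use="Prioritize manual review of loan applications",
    out_of_scope_uses=["Fully automated credit denial"],
    limitations=["Trained on data from a single region"],
)
print(card.needs_review())  # False
```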

“SAS is taking a thoughtful approach to how it helps customers embrace AI, focusing on the practical realities and challenges of deploying AI in real industry settings,” said Eric Gao, Research Director at analyst firm IDC. “Model cards will be valuable for monitoring AI projects and promoting transparency.”

Next up is SAS’ new AI Governance Advisory service, which will enable customers to use their data in ways that aren’t just productive but also safe. Designed to help customers configure AI governance to the needs of their organization, the service has already been piloted with a select few customers. The pilot revealed improved productivity from trusted and distributed decision making, along with enhanced trust from better accountability in data usage. On top of that, the pilot helped the participating companies gain a competitive advantage and market agility by making them “forward compliant.” Owing to these upgrades, the advisory service also generated greater brand value for the business, while giving it a better chance of retaining top talent, who increasingly demand responsible innovation practices.

“Our AI governance conversations with SAS helped us consider potential unseen factors that could cause problems for customers and our business,” said Marek Wilczewski, Managing Director of Information, Data and Analytics Management (Chief Data Officer/Chief Analytics Officer) at PZU. “We better understand the importance of having more perspectives as we embark on AI projects.”

Rounding out the highlights is SAS’ brand-new Trustworthy AI Life Cycle workflow. This workflow works alongside the US National Institute of Standards and Technology’s (NIST) AI Risk Management Framework. In case you weren’t aware, the framework was introduced last year to help organizations design and manage trustworthy and responsible AI in the absence of official regulations. From a practical standpoint, the new workflow specifies individual roles and expectations to simplify adoption of the framework. Such a setup, in turn, can help organizations gather required documentation, outline factors for consideration, and leverage automation to make the NIST framework more universally applicable. Notably, the documentation of considerations related to an AI system’s impact on human lives includes steps to ensure the model is not causing disparate impact or harm to specific groups.
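
To illustrate one such documented check, here is a minimal sketch, under this article's own assumptions rather than SAS's actual workflow, of computing a disparate impact ratio and flagging it against the common four-fifths (0.8) threshold.

```python
from typing import Sequence

def favorable_rate(outcomes: Sequence[int]) -> float:
    """Share of favorable (1) outcomes in a group."""
    return sum(outcomes) / len(outcomes)

def disparate_impact_ratio(protected: Sequence[int], reference: Sequence[int]) -> float:
    """Ratio of the protected group's favorable-outcome rate to the reference group's."""
    return favorable_rate(protected) / favorable_rate(reference)

# Example: model approvals (1) and denials (0) for two groups
protected_group = [1, 0, 1, 0, 0, 1, 0, 0]   # 37.5% approved
reference_group = [1, 1, 0, 1, 1, 0, 1, 1]   # 75% approved

ratio = disparate_impact_ratio(protected_group, reference_group)
print(f"Disparate impact ratio: {ratio:.2f}")  # 0.50
if ratio < 0.8:
    print("Potential disparate impact: document and escalate per the workflow.")
```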
