AI Metrics: Proving Value and Ensuring Resilient Implementation

In a recent episode of the Govern by Design podcast, I was joined by Angela Jenkins, SVP of Operational Risk Management at the American National Bank of Texas, a Resultant client. Angela offered valuable insights into managing generative artificial intelligence (AI) from an operational risk perspective.

From a Govern by Design perspective, when identifying and evaluating AI metrics, it’s important to consider who within your organization is asking for them. AI performance measurement typically focuses on the efficiencies gained relative to the funding or investment behind them, and ultimately on the outcomes those efficiencies produce.

Questions from the Board

These questions typically center on ROI and efficiency.

  • How much have we spent on AI?
  • What has been the return on that investment and the return on value?
  • What impacts or outcomes have we achieved that would not have been accomplished without AI?
  • What types of profiles, customers, or stakeholders have we impacted the most in a way that would not have been possible without AI?
  • How quickly have we realized operational efficiency gains?
  • How have our people gained skills along the way?

Questions from the Project Management Perspective

Project management brings a different set of questions, tied to key performance indicators (KPIs).

  • How long does a standard AI deployment take, and at what point do we see diminishing returns?
  • When is a project considered too long?
  • At what point does a project consume so much time, effort, and funding that it becomes a drag and is no longer worth pursuing?
  • Do we have metrics for efficiency and effectiveness even at the project management level?

More Questions from the Board

The board will also want to know about compliance, ethics, and accountability.

  • Can you demonstrate that we are using AI safely?
  • Can you demonstrate that we are fully compliant?
  • As we use and roll out more AI use cases, can you tell us exactly where AI resides throughout the enterprise at any given point?
  • Where is our AI deployed across the organization, and do our people know about it and pay attention to it?
  • Are we able to prove that AI is fully transparent and not biased in any way?
  • To what extent can we prove that AI is not biased?
  • Lastly, has AI provided us with a competitive advantage, in the marketplace and internally, beyond the obvious benefits of acceleration and streamlining?

The bottom line must reflect what we have gained in function, in effectiveness and efficiency, in outcomes and impact, and in revenue as a result of implementing and using AI. From metrics to resiliency, bottom-line impact paired with robust risk management ensures that AI is implemented properly for sustained success.

For a fresh perspective on risk management in the world of AI, I encourage you to watch my full interview with Angela Jenkins of American National Bank of Texas on the Govern by Design podcast.
