How to make AI more effective and avoid bias

By Melanie Batley

The power and promise of artificial intelligence (AI) can only be fully unlocked when we gain greater trust in, transparency into, and control over the technology, algorithms, and data. That is why, according to Controlling AI, a new report from KPMG, companies need to institute effective governance for AI, with a focus on responsibly unleashing the power of these technologies.

In our research, we found that most leaders are unclear on what an AI governance approach should look like, and that existing governance constructs are poorly rated for their capability and readiness to support AI. We aim to work with industry leaders to solve that challenge and enable greater confidence in, and adoption of, AI at scale.

“We believe that in order for leaders to assume responsibility and accountability over the results of their AI, they will need to have confidence in the technology and a framework that facilitates transparency and explainability. This new confidence and transparency will drive greater adoption and scale of AI across the organization and industry overall,” said Martin Sokalski, Partner, Emerging Technology Risk Services, and co-author of the report.

“The cost of getting AI wrong extends beyond the financials such as lost revenue and fines from compliance failures, to reputational, brand, and ethical concerns, and customer trust,” Sokalski said.

KPMG’s framework is guided by four key objectives, the AI trust imperatives: integrity, explainability, fairness, and resilience.

The paper introduces KPMG’s new AI in Control solution, a framework for helping clients govern, evaluate, and monitor their algorithms to achieve desired outcomes while mitigating the inherent risks that arise from biased models, untraceable data lineage, and a lack of explainability. The offering combines methods and technology to deploy effective governance, management evaluation, and continuous monitoring of AI and machine learning algorithms for integrity, bias, resilience, and other key performance and risk indicators.

To set up time to discuss AI with Martin Sokalski, contact Melanie Batley.

Download the report

Controlling AI: The imperative for transparency and explainability

For AI to advance the common good, and for leaders to assume responsibility and accountability for its results, it is essential to establish a framework that facilitates responsible adoption and scale of AI. Controlling AI is for leaders focused on ensuring transparent and explainable AI in their organizations. The paper also unveils KPMG’s AI in Control offering, our framework for helping clients understand how to govern algorithms to achieve desired outcomes while mitigating or eliminating the risks that come from biased models.

Melanie Malluk Batley


Associate Director, Corporate Communications, KPMG (US)

+1 201-307-8217

Biography

Martin Sokalski


Principal, Emerging Technology Risk, KPMG (US)

+1 312-665-4937