By Melanie Batley
The power and promise of artificial intelligence (AI) can be fully unlocked only when we gain greater trust in, transparency into, and control over the technology, algorithms, and data. According to a new KPMG report, Controlling AI, this is why companies need to institute effective AI governance focused on responsibly unleashing the power of these technologies.
In our research, we found that most leaders aren’t clear on what an AI governance approach should look like, and that existing governance constructs are poorly rated for their capability and readiness to support AI. We aim to work with industry leaders to solve that challenge and enable greater confidence in, and adoption of, AI at scale.
“We believe that in order for leaders to assume responsibility and accountability over the results of their AI, they will need to have confidence in the technology and a framework that facilitates transparency and explainability. This new confidence and transparency will drive greater adoption and scale of AI across the organization and industry overall,” said Martin Sokalski, Partner, Emerging Technology Risk Services, and co-author of the report.
“The cost of getting AI wrong extends beyond the financials such as lost revenue and fines from compliance failures, to reputational, brand, and ethical concerns, and customer trust,” Sokalski said.
KPMG’s original framework is guided by four key objectives and AI trust imperatives: integrity, explainability, fairness, and resilience.
The paper introduces KPMG’s new AI in Control solution, our framework for helping clients govern, evaluate, and monitor their algorithms to achieve desired outcomes while mitigating the inherent risks of biased models, untraceable data lineage, and lack of explainability. The offering is underpinned by methods and technology for deploying effective governance, management evaluation, and continuous monitoring of AI and machine-learning algorithms for integrity, bias, resilience, and other key performance and risk indicators.
To set up time to discuss AI with Martin Sokalski, contact Melanie Batley.