Getting control of artificial intelligence

By Martin Sokalski

AI has become the latest buzzword, encompassing nearly anything a company may be doing, from automated intelligence to augmented intelligence to true artificial intelligence (AI). While only about 5 percent of major companies have adopted true AI at wide scale, more players are dipping their toes in the AI waters, and it’s important they see what lies below the surface, as well as the ripple effect, as they embark on the journey.

It’s essential that discussions around AI move from a technology conversation to a strategy, governance, and risk conversation in order to drive greater confidence and transparency across the internal and external stakeholder community. When leaders and decision makers are more confident about their AI, adoption will accelerate and scale.

As organizations make key decisions that will affect the future of their business, confidence in AI will become paramount. To build that confidence, business leaders need to be certain that AI can be effectively governed, managed, and monitored, and that inherent risks can be mitigated.


Implementing governance around the full AI lifecycle is as integral to the process as developing and managing the AI itself. It helps address capabilities, or the lack thereof, and introduces trust and transparency into any AI process.

To help address these risks and integrate a governance model into the AI lifecycle, we have developed a comprehensive framework, methodology, and tooling that we call AI-in-Control.

The intent of this solution is to address gaps that exist today and accelerate responsible adoption of AI. The solution rests on four foundational pillars of trust: Integrity, Explainability, Fairness, and Resilience. To improve adoption at scale and begin realizing the promise of AI, boards and the C-suite need to consider these four pillars as they build and grow their AI programs and models.

KPMG is committed to helping clients responsibly adopt emerging technologies across a broad spectrum of AI and related fields, including IoT, intelligent automation, RPA, and machine learning. We work with clients as they develop, manage, and monitor AI processes and as they engage across the stakeholder ecosystem, including regulators, data scientists, legal, ethics and compliance, internal audit, and IT.

To set up time to talk with Martin in more depth, please contact Melanie Batley.

 

Resources

Artificial intelligence in control

Martin Sokalski

Principal, Emerging Technology Risk, KPMG US

+1 312-665-4937



Media contact

Melanie Malluk Batley

Associate Director, Corporate Communications, KPMG US

+1 201-307-8217