Trust and responsibility in the age of AI

By Stephanie Trefcer

The speed of adoption and the convergence of artificial intelligence technologies pose a significant governance and oversight challenge for boards. The spotlight for managing public perception and regulatory scrutiny falls on corporate directors. In fact, 96% of corporate directors report that the risk management and oversight demands of their roles present a significant challenge.

As organizations transform themselves and integrate AI, trust in and transparency around the technology are imperative. Many of these emerging technologies are taught rather than programmed, learning from historical data or being tutored by subject matter experts. As a result, when they are deployed they may carry inherent bias, and their filters or outputs can have unintended security consequences.

Consider, for example, screening resumes for a role that has historically been held by men: if the AI algorithm is trained on that historical data, it will, unfortunately, perpetuate the bias, and fewer women may surface as candidates for the position.
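The dynamic above can be sketched in a few lines. This is a hypothetical toy model for illustration only (it is not KPMG's AI in Control framework): a naive screener that scores candidates by historical hiring frequency simply reproduces the historical skew instead of assessing job fitness.

```python
# Hypothetical illustration of bias learned from historical data.
# Not a real hiring model -- a deliberately naive sketch.
from collections import Counter

# Assumed historical hires for a role that has skewed male: 9 men, 1 woman.
historical_hires = ["M"] * 9 + ["F"] * 1

# The "model" scores candidates by how often their group appears among
# past hires -- it has learned the historical bias, nothing else.
hire_counts = Counter(historical_hires)

def biased_score(candidate_gender: str) -> float:
    """Frequency-based score derived purely from historical hiring data."""
    return hire_counts[candidate_gender] / len(historical_hires)

print(biased_score("M"))  # 0.9 -- male candidates ranked far higher
print(biased_score("F"))  # 0.1 -- women effectively filtered out
```

The point of the sketch is that the skewed scores come entirely from the training data: no one programmed the bias in, which is why frameworks emphasizing fairness and explainability inspect the data and outputs, not just the code.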

So when we talk about AI processes, why should boards or management care? Why is it so important? Two words answer that: trust and responsibility. For companies to truly realize the potential of AI, business leaders need to be confident that these technologies can be monitored, controlled, and managed to mitigate security and reputational risks early. To do this, leaders must treat AI as a business conversation and be able to stand confidently before their stakeholders, customers, and employees when making decisions across the entire AI lifecycle.

“Leaders need to look at their stakeholder community, including clients and employees, and know that they deployed AI technologies in a responsible way. The confidence required by the C-suite and the board to do this makes this a business conversation, not a technology conversation,” said Vinodh Swaminathan, Principal, Innovation and Enterprise Solutions.

What KPMG’s newly launched AI in Control framework can do for leaders

KPMG’s AI in Control provides an end-to-end method, framework, and set of tools to help ensure that AI solutions have integrity, are explainable, and are free from bias. AI in Control also focuses on model agility, so that models can be used effectively across the enterprise to drive confidence in decision making.

“The true art of the possible for artificial intelligence will be unlocked as soon as there is more trust and transparency. This can be achieved by incorporating foundational AI program imperatives like integrity, explainability, security, fairness, and agility, which are the premise behind our offering,” said Martin Sokalski, Global Emerging Technology Risk network leader and US Principal.

At IBM THINK 2019, Vinodh Swaminathan, Martin Sokalski and their team will be talking about how KPMG is helping organizations unlock the value of AI while ensuring they have a view into the entire AI lifecycle, using the AI in Control framework.

Can’t be at IBM Think live? Follow @KPMG_US News for the latest information on AI explainability and the launch of AI in Control.

To learn more about AI in Control and to speak with Vinodh Swaminathan, please contact Christine Curtin.

Stephanie Trefcer

Senior Associate, Communications, KPMG US

+1 201-505-6844

Vinodh Swaminathan

Global Lead Partner & Principal, KPMG US

+1 203-940-1284
Martin Sokalski

Principal, Advisory, Digital Lighthouse, KPMG US

+1 312-665-4937