Five steps toward ethical AI adoption
By Melanie Batley
Today’s business leaders are faced with a true conundrum: how can their enterprises benefit from new opportunities created through artificial intelligence (AI) while still safeguarding the well-being of employees, customers and society? Fairness, integrity, transparency, and trust are not new objectives, but they have become more complicated to achieve as machine learning assumes a larger role in how work gets done.
“At stake are business outcomes—and ultimately, the trust and confidence of your employees, your customers, regulators, and society at large,” said Todd Lohr, Principal, Advisory at KPMG and co-author of the new report, “Ethical AI: Five Guiding Pillars.”
To help leaders navigate this increasingly complex and relevant challenge, KPMG has identified five actions for the effective governance of AI that organizational leaders can take to create and sustain more ethical enterprises:
- Prepare for structural changes and ethical workplace transformation now by helping employees adjust to the role of machines in their jobs. The rise of powerful analytics and automated decision-making will ultimately drive a massive shift in roles and tasks that will redefine work. Leaders need to prepare for wide-scale change management now. The workforce of the future demands a new approach to business as usual: one that is employee-centric and transparent.
- Establish clear enterprise-wide policies around the deployment of AI, including the use of data and privacy standards. Through the European Union’s General Data Protection Regulation, and the American AI Initiative, we’ve seen that the weight of educating, training, and managing an AI-enabled workforce rests with business. The sooner leaders set forth on this journey, the more influence they will have on coming initiatives and regulations.
- Build algorithms that are secure and have a strong “ethical compass.” When creating algorithms to deploy AI responsibly, security and governance of the data are crucial to the overall integrity of the model, as is the need to establish clear lines of ownership to generate accountability.
- Ensure the goal and purpose of critical algorithms are clearly defined and documented to mitigate bias. Every leader should embrace the moral imperative to mitigate bias by governing AI across its entire lifecycle, from ideation and build through its continuing evolution, and by taking new steps to manage and guide an increasingly diverse workforce as the nature of work changes.
- Create “contracts of trust.” The power and promise of AI can be fully realized only when we understand and control why we are using it and how it is being deployed. This is why companies need to establish an overall management policy for AI, with a focus on responsibly harnessing the power of these technologies.
“AI-driven enterprises know where and when to use AI. They have an AI compass that helps point them in the right direction for governance, explainability and value,” Lohr said.
For more information or to arrange an interview, please contact Melanie Batley.