

By Martin Sokalski, Advisory Principal and Emerging Technologies Leader at KPMG U.S., and Kelly Combs, Advisory Director and AI in Control Solution Leader
With the acceleration of digital transformation and the more widespread adoption of artificial intelligence (AI), ethics and bias are leading many conversations about the future of the technology. There is an inherent fear of the unknown surrounding AI, and greater public awareness that algorithms make personalization decisions is enabling consumers to ask more educated questions about how and why those decisions are made.
It is critical that organizations carefully think through how they plan to use AI, how they will test it, and how they will identify and rectify errors and unintended outcomes. Here are three focus areas for organizations to consider as they facilitate responsible adoption and scale:
1. Data: creating a trusted data set. How can organizations say with certainty that the data used to train and test the model is not itself biased? When defining relevant data for a given model, organizations need to remember that a single variable can introduce bias even if it leads to better outcomes from a business perspective; a ZIP code field, for example, can act as a proxy for race or income. Identifying and addressing biases, such as which group may be at risk of experiencing unfair outcomes, should be top of mind throughout the creation of an AI model, and that starts with the data.
In a responsible AI model, data is complete and accurate; the frequency of data collection and input is consistent and meets business needs; and processes are in place to address anomalies and flag irregularities that could affect model outcomes. It is important that business users are educated on representative data and given upskilling opportunities to understand the sources of data bias and the implications of data drift, so they are better equipped to recognize data irregularities. Additionally, the data must have appropriate user consent, permissibility and ongoing governance to help ensure its appropriate use.
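To make these checks concrete, here is a minimal sketch of the kind of automated screening this implies. It is our illustration, not a KPMG tool: the column names, data and thresholds are hypothetical. It compares positive-outcome rates across groups in the training data and uses a two-sample Kolmogorov-Smirnov test to flag drift between training and live feature distributions.

```python
import pandas as pd
from scipy.stats import ks_2samp

# Hypothetical loan-style data; column names are illustrative assumptions.
train = pd.DataFrame({
    "group":    ["A", "A", "B", "B", "A", "B"],
    "approved": [1,   1,   0,   1,   1,   0],
    "income":   [55_000, 62_000, 48_000, 51_000, 70_000, 45_000],
})
live = pd.DataFrame({"income": [30_000, 28_000, 35_000, 33_000, 31_000, 29_000]})

# 1) Representation check: a large gap in positive-outcome rates between
#    groups is a signal the training data may encode unfair treatment.
print(train.groupby("group")["approved"].mean())

# 2) Drift check: a two-sample Kolmogorov-Smirnov test flags when the live
#    feature distribution no longer matches what the model was trained on.
stat, p_value = ks_2samp(train["income"], live["income"])
if p_value < 0.05:  # threshold is an assumption; tune to business needs
    print("Flag for review: 'income' distribution has drifted.")
```

In practice, checks like these would run on every retraining cycle and on incoming production data, with flagged results routed to the business users described above.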
2. Model: maintaining a healthy model. Are the intended design and the AI solution aligned with the core business, company values and corporate responsibility to consumers? Pre-defined KPIs, goals and objectives can help assess whether the model is accomplishing what it is intended to. These business KPIs must also consider impacts on consumers. For example, if the model is intended to increase profit margins by x%, the objective may carry a caveat: "without negatively impacting the consumer."
Business users must be educated on how to determine whether certain data is irrelevant to the business problem the AI is trying to solve, and whether that data nonetheless affects the model's outcomes. They should be empowered to flag circumstances in which outcomes deviate from company values and corporate responsibility to consumers.
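One way to make such a caveat operational is to pair the business KPI with a consumer-impact guardrail in the model release gate. The sketch below is a hypothetical illustration, not KPMG methodology: the metric names and thresholds are assumptions.

```python
from dataclasses import dataclass

@dataclass
class ModelMetrics:
    margin_lift_pct: float     # business KPI: profit-margin improvement
    adverse_rate_delta: float  # consumer guardrail: change in adverse outcomes

def approve_for_release(m: ModelMetrics,
                        min_lift: float = 2.0,
                        max_adverse_delta: float = 0.0) -> bool:
    """Promote the model only if the business KPI is met AND the consumer
    guardrail ("without negatively impacting the consumer") holds."""
    meets_kpi = m.margin_lift_pct >= min_lift
    protects_consumers = m.adverse_rate_delta <= max_adverse_delta
    return meets_kpi and protects_consumers

# A candidate that hits the margin target but raises adverse outcomes fails.
candidate = ModelMetrics(margin_lift_pct=3.1, adverse_rate_delta=0.4)
print(approve_for_release(candidate))  # False: consumer guardrail violated
```

Encoding the caveat as a hard gate, rather than a footnote in a dashboard, means a model cannot ship on business performance alone.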
3. People: explaining the how and why. As consumers demand more information about what drives the decisions AI models make, how can organizations help ensure explainability? Business users must have access to relevant information, including information that explains the data and variables in the model and information that describes the model's actions and decisions. Visibility into how the model reached a decision can help users understand why consumers may receive different promotions when purchasing identical products, or why they see different advertisements while watching the same program.
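For a linear model, the "why" behind a single decision can be read directly off the coefficients: each feature's contribution to the log-odds is its coefficient times its value. The sketch below uses a hypothetical promotion model with made-up feature names; for non-linear models, attribution libraries such as SHAP play a similar role.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

features = ["prior_purchases", "loyalty_years", "cart_value"]
# Hypothetical history: 1 = consumer was shown the promotion.
X = np.array([[1, 0, 20.0], [8, 5, 90.0], [3, 2, 40.0], [12, 9, 150.0]])
y = np.array([0, 1, 0, 1])

model = LogisticRegression().fit(X, y)

def explain_decision(x: np.ndarray) -> dict:
    """Per-decision breakdown: each feature's contribution to the log-odds
    is coefficient * value (on top of a shared intercept)."""
    contributions = model.coef_[0] * x
    return dict(zip(features, contributions.round(3)))

# Why did this consumer get the promotion while another did not?
consumer = np.array([10, 7, 120.0])
print(explain_decision(consumer))
print("shown promotion:", bool(model.predict(consumer.reshape(1, -1))[0]))
```

Surfacing a breakdown like this alongside each decision gives business users a concrete answer when a consumer asks why they saw a particular offer.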
AI sits at the intersection of business value and societal change, and organizations have a responsibility to safeguard the design of decision-making technologies and to consider the well-being of consumers as part of their overall strategy.