KPMG Trusted AI and the Regulatory Landscape

This page is frequently updated with KPMG's latest perspectives on the evolving AI regulatory environment, including:

  • Information and quotes from KPMG U.S. leaders on the EU A.I. Act
  • Information and quotes from KPMG U.S. leaders on the White House (U.S.) Executive Order on A.I.
  • Additional resources for more perspectives from KPMG U.S. leaders on A.I. regulation

The EU A.I. Act

March 2024 Update

On March 13, the European Parliament approved the EU Artificial Intelligence Act in a plenary vote. The vote represents a crucial step toward passing comprehensive AI regulation.

The EU AI Act takes a risk-based approach to regulating AI, with a focus on high-risk applications, while also including transparency requirements for general-purpose AI models and for AI systems such as chatbots and deepfake-generating tools.

U.S. multinational corporations with operations in the EU that meet the criteria of the regulation will be required to comply. For many organizations, this will likely require building compliant AI governance programs across their enterprise and across geographic borders, given the shared infrastructure common at global corporations. Non-compliance with the Act is expected to lead to significant fines, particularly for companies deploying prohibited AI technologies.

The EU AI Act will also set a global standard, and we expect other jurisdictions to consider similar regulation focused on protecting fundamental rights, promoting transparency, and requiring impact assessments for high-risk applications. The first provisions of the legislation come into effect in late 2024, with the majority not taking effect until at least 2025.

KPMG recommends that U.S. businesses take action now: understand the regulation and how it applies to them, assess their risks and potential impacts, and implement the necessary governance and compliance measures within their operations to meet the emerging EU requirements and position themselves for future U.S. regulatory requirements.

In response to the vote, KPMG leaders issued the following statements:

“The EU AI Act will have far-reaching implications not only for the European market, but also for the global AI landscape and for U.S. businesses. It will set a standard for trust, accountability and innovation in AI, and policymakers across the U.S. are watching. U.S. companies must ensure they have the right guardrails in place to comply with the EU AI Act and forthcoming regulation, without hitting the brakes on the path to value with generative AI.”

-- Steve Chase, Vice Chair of AI & Digital Innovation, KPMG U.S.

~~~~~

"This vote on the EU AI Act marks a significant milestone in shaping the global landscape of Trusted AI regulation, which, once mature, will foster public confidence and ensure ethical practices in AI development. The role of rule-makers in driving trust and responsible innovation is essential and aligned with KPMG’s ongoing commitment to designing, building and deploying AI systems in a reliable, ethical and collaborative manner."

-- Tonya Robinson, Vice Chair and General Counsel - Legal, Regulatory and Compliance, KPMG U.S.

~~~~~

“The introduction of the EU A.I. Act serves as a clear signal that any US multinational operating in the European market should be proactively considering and preparing for these new regulations. The Act underscores the importance of fundamental rights impact assessments and transparency, which will undoubtedly impact how businesses operate and deploy AI technologies. By embracing these regulations and incorporating responsible AI practices, US multinationals can not only ensure compliance but also foster trust and maintain a competitive edge in the European market.

“Now is the opportune moment for U.S. businesses to adopt Trusted AI programs in response to ongoing global and U.S. regulations. These programs will enable companies to swiftly evaluate and account for the risks and vulnerabilities. It is imperative for organizations to progress from mere planning to the practical implementation of ethical principles, establishing responsible governance, policies, and controls that align with leading frameworks and emerging regulations. The integration of responsible and ethical AI should be ingrained throughout the entire AI lifecycle.”

--Bryan McGowan, Trusted AI Leader, KPMG U.S.

~~~~~

“The EU AI Act is a landmark regulation, not just in the EU but globally, similar to the adoption of GDPR. Whether at the state or federal level, U.S. policymakers will surely consider a range of foundational principles contained within the EU AI Act – fairness, explainability, data integrity, security and resiliency, accountability, privacy, and risk management – under existing rules and authorities as well as in future legislation and regulation. The time for simply establishing sound risk governance and risk management AI programs is quickly passing; the time for implementing, operationalizing, demonstrating and sustaining effective risk practices is now.”

-- Amy Matsuo, Principal & National Leader, Regulatory Insights, KPMG U.S.

_____________________________

The EU A.I. Act

December 2023 Update

In December 2023, the Council of the EU and the European Parliament reached a historic agreement on the world's first set of rules for artificial intelligence (AI). The Artificial Intelligence Act aims to establish a regulatory framework for the development and use of AI that prioritizes safety, transparency, and accountability. The act will apply to a wide range of AI applications, including those used in healthcare, transportation, and public safety.

The act sets out strict requirements for AI developers and users, including mandatory risk assessments and transparency obligations. It also establishes a European Artificial Intelligence Board to oversee the implementation of the act and provide guidance on AI-related issues. The agreement represents a significant step forward in the regulation of AI and the promotion of responsible AI development and use, and is expected to have a major impact on the global AI industry.

The EU AI Act is expected to have a significant impact on U.S.-based companies, particularly in terms of compliance costs, strategic business shifts, and balancing transparency with intellectual property protection. The act mandates a risk-based approach to regulation, requiring businesses to navigate a range of requirements and obligations depending on an AI system's risk classification. Companies using AI systems in prohibited areas may need to reassess their product or business strategies and pivot, adjust, or discontinue products or services accordingly. Additionally, the act imposes a significant administrative burden, requiring thorough documentation of the measures taken for oversight and control of high-risk AI systems.

To navigate these challenges, KPMG recommends that US businesses closely monitor regulatory developments, assess their risks and potential impacts, and implement necessary governance and compliance measures within their operations.

Reflecting on the newly proposed law, KPMG leaders issued the following statements:

“The provisional EU A.I. Act will establish comprehensive guardrails on the AI highway, and it will influence what’s to come. Organizations must implement Trusted AI and modern data strategies in order to go faster with confidence.

“The EU’s landmark Artificial Intelligence Act marks a pivotal moment for U.S. businesses navigating the evolving AI landscape. We anticipate it will be extremely influential on the AI regulatory environment, reaching far beyond the tech sector, akin to the impact of GDPR on data privacy.

“Companies should be monitoring what is happening in the EU closely to assess potential impacts and implement the necessary governance and compliance measures. Rapid assessment of emerging regulatory challenges will enable organizations to reduce disruption, minimize risks and ensure a smoother adoption of AI. While it will require up-front investment in Trusted AI programs and sophisticated data governance, this preparation will accelerate greater, sustained returns.

“KPMG’s global Trusted AI commitment guides our firm’s aspirations and investments in AI, and its ethical pillars align closely to emerging requirements in the EU A.I. Act. However, KPMG’s Trusted AI approach is not limited to addressing regulatory compliance. It was purpose-built to accelerate the value of AI for our clients and our firm while serving the public interest and honoring the public trust.”

-- Steve Chase, Vice Chair of AI & Digital Innovation, KPMG U.S.

~~~~~

“While the trust of our clients and our people is always the north star for our AI aspirations, the provisional EU A.I. Act underscores just how pivotal public confidence will be to any successful AI investment. KPMG’s Trusted AI ethical pillars prioritize Accountability, Safety, Data Privacy, Transparency and Fairness, all well aligned to the EU A.I. Act’s stated intentions.”

-- Tonya Robinson, Vice Chair and General Counsel - Legal, Regulatory and Compliance, KPMG U.S.

~~~~~

“With continued global and US regulation on the horizon, now is the time for U.S. businesses to implement Trusted AI programs to quickly assess and understand risks and exposures. Organizations must move from planning to operationalizing ethical principles into practice and institute responsible governance, policies and controls aligned to leading frameworks and emerging regulations. Responsible and ethical AI must be embedded by design across the AI lifecycle.

“The provisional EU A.I. Act, as agreed on December 8, underscores the importance for companies to invest in fundamental rights impact assessments to better understand the potential impact of their actions, policies, and technology use on the basic rights of their customers, employees and broader society. Companies using AI systems in high-risk or prohibited areas may need to reassess product or business strategies.

“As with any new regulation, there will be complex issues that businesses must navigate. The EU A.I. Act is on track to set a high bar for transparency and disclosure. This transparency is critical to maintaining trust in AI, but it will also require businesses to find the right balance between disclosure and protecting trade secrets.”

-- Bryan McGowan, US Trusted AI Leader, KPMG U.S.

_____________________________

The White House (U.S.) Executive Order on A.I.

On October 30, 2023, the White House announced an Executive Order on AI, aimed at establishing a framework for the development and use of AI that prioritizes safety, security, and privacy. It also calls for the creation of a National AI Advisory Committee to provide recommendations on AI-related policies and research priorities.

The executive order emphasizes the importance of transparency and accountability in AI development and deployment, particularly in areas such as healthcare, transportation, and national security. It also directs federal agencies to prioritize the development of AI technologies that promote economic growth and job creation while minimizing potential negative impacts on workers and communities. Overall, the order aims to take a significant step forward in the regulation of AI and the promotion of responsible AI development and use.

In response to the news, KPMG released a number of statements:

“The announcement of an Executive Order to promote safe, secure and trustworthy AI only furthers the need for organizations to balance innovation, efficiency, and value with appropriate considerations and safeguards to govern, secure and operate AI.”

-- Bryan McGowan, US Trusted AI Leader, KPMG U.S.

~~~~~

“Organizations will without a doubt be looking to this Executive Order for broader signals on where the U.S. regulatory landscape is headed. To adhere to current requirements and prepare for future regulations, leading companies are instituting Trusted AI programs that embed clear guardrails across the organization and continually adapt to address new, evolving risks. With the right governance, policies, and controls, organizations can strike the right balance between being bold, fast and responsible to accelerate the value of AI with confidence.”

-- Steve Chase, A.I. and Digital Innovation Vice Chair, KPMG U.S.

~~~~~

“As AI – and generative AI – continue to gain momentum, the concerns around ethics, data, and privacy related to these technologies are clearly priorities of the Administration. While efforts to regulate AI both domestically and internationally progress, tech companies will need to be vigilant about continuously assessing their approach to R&D and deployment of their products and services that both meet regulatory expectations and maintain their competitive edge.”

-- Mark Gibson, Global and U.S. Technology, Media & Telecommunications Leader, KPMG

~~~~~

“As the Executive Order reaffirms, AI, including GenAI, cuts across principles of safety and security, privacy, civil rights, consumer and worker protections and innovation and competition. Assessing both rewards and risks will be critical to innovation while maintaining trust. Legislative and regulatory debates both domestically and internationally need to be monitored closely as these evolve. And companies must now set appropriate risk and compliance guardrails, recognize ‘speedbumps,’ and leverage industry-based and other best practices, following robust risk standards around such areas as testing, authentication and outcomes. Some regulators have already made it clear that existing authorities and regulations apply to ‘automated systems,’ including algorithms, AI, predictive analytics and innovative technologies – the time for sound risk governance and risk management is now.”

-- Amy Matsuo, Principal & National Leader, Regulatory Insights, KPMG U.S.


Additional resources

KPMG Trusted AI

At KPMG, we are committed to upholding ethical standards for AI solutions that align with our Values and professional standards, and that foster the trust of our clients, people, communities, and regulators.

Landmark Actions Coming: The AI Act and Growing US Regulations

“Whole-of-government” actions increasing as agencies intensify their focus on safe, secure, and trustworthy AI/GenAI

Trusted AI services

Unlock the vast potential of artificial intelligence with a trusted approach.

Decoding the EU AI Act

A guide for business leaders
