
AI Regulations: A Comedy of Errors on the Global Stage

10/25/24

Editorial team at Bits with Brains

Different regions are taking varied approaches to regulating artificial intelligence

Key Takeaways

  • Generative AI Risks: Salesforce’s Kathy Baxter highlights that generative AI poses amplified risks like bias and misinformation, with new challenges such as "hallucinations" and environmental concerns.

  • Mitigation Strategies: Techniques like Retrieval-Augmented Generation (RAG) can help, but the success of these methods depends heavily on clean, well-curated data.

  • Global AI Regulations: Both Baxter and PwC emphasize the fragmented nature of global AI regulations, with the EU taking a stringent approach while the U.S. remains decentralized.

  • Corporate Responsibility: Many companies struggle to implement responsible AI practices, with only 10% fully adopting key safeguards despite regulatory pressures.

  • Board-Level Engagement: Baxter advises corporate boards to take a proactive role in AI governance by involving diverse expert input and focusing on high-risk applications.

Salesforce’s Kathy Baxter on AI Regulation and Ethical AI

Generative AI Risks

Kathy Baxter, Salesforce’s Principal Architect for Ethical AI, sheds light on the growing concerns surrounding generative AI. According to her, these models not only exacerbate existing issues like bias and misinformation but also introduce new risks.


One such challenge is AI hallucinations, where the system fabricates information that appears credible but is entirely false. Another critical concern is environmental impact: these large models consume vast amounts of energy and water, raising sustainability issues. These risks are more pronounced than those posed by traditional predictive models and require immediate attention.


Mitigation Techniques

To tackle hallucinations and other risks, Baxter advocates for approaches like Retrieval-Augmented Generation (RAG). This technique helps reduce hallucinations by grounding AI outputs in real-world data sources. However, she stresses that even the best mitigation strategies will fail if companies don’t maintain clean, well-organized data. Poor data hygiene can undermine the effectiveness of advanced techniques like RAG.
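
To make this concrete, here is a minimal, illustrative sketch of the RAG pattern in Python. It is not Salesforce's implementation: the tiny document store, the keyword-overlap retriever, and the prompt format are all simplified assumptions, and a production system would typically use vector embeddings and a hosted generative model.

```python
# Minimal RAG sketch (illustrative only; not a production implementation).
# A real system would use vector embeddings and an LLM client instead of
# the toy keyword-overlap retriever below.

# A tiny, curated "knowledge base". RAG only reduces hallucinations to the
# extent that this underlying data is clean and well organized.
DOCUMENTS = [
    "The EU AI Act takes a risk-based approach, sorting AI systems into tiers.",
    "California's SB 1047 targeted high-risk AI applications.",
    "Retrieval-Augmented Generation grounds model outputs in source documents.",
]

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Return the k documents that share the most words with the query."""
    q_words = set(query.lower().split())
    ranked = sorted(
        docs,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return ranked[:k]

def build_prompt(query: str, context: list[str]) -> str:
    """Assemble a grounded prompt. Constraining the model to the retrieved
    sources is the step that reduces hallucinations."""
    sources = "\n".join(f"- {c}" for c in context)
    return (
        "Answer using ONLY the sources below. If they do not contain "
        "the answer, say so.\n"
        f"Sources:\n{sources}\n\nQuestion: {query}"
    )

if __name__ == "__main__":
    question = "How does the EU regulate AI?"
    grounded_prompt = build_prompt(question, retrieve(question, DOCUMENTS))
    print(grounded_prompt)  # This prompt would then be sent to a generative model.
```

Even in this toy version, Baxter's point is visible in the design: the retriever can only ground answers in what the document store actually contains, so poorly curated data undermines the whole pattern.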


Government Engagement

Baxter is actively involved in shaping AI regulation through collaborations with government bodies such as the U.S. National Institute of Standards and Technology (NIST) and Singapore's Advisory Council on the Ethical Use of AI and Data. She emphasizes the importance of global harmonization in AI laws: inconsistent regulations across regions make compliance difficult for multinational companies, and a unified approach would streamline efforts to manage AI risks.


State vs. Federal Regulation

While states like California have taken the lead with bills such as SB 1047, which targeted high-risk AI applications, Baxter argues that a comprehensive federal data privacy law is essential. She believes ethical AI can only be achieved when strong data ethics are in place, which requires federal oversight rather than a patchwork of state laws.


Board Involvement in AI Governance

Baxter urges corporate boards to play an active role in managing AI risks. She recommends that boards seek input from diverse experts across fields such as security, privacy, and user research. Boards should also focus on identifying high-risk applications within their organizations and ensure robust data governance practices are in place.


PwC Report on Responsible AI Safeguards

Corporate Readiness

A recent PwC survey also paints a concerning picture of corporate readiness for responsible AI deployment. While 58% of U.S. executives have conducted preliminary risk assessments for their AI systems, only 10% have fully implemented essential responsible AI capabilities. This gap reveals a significant disconnect between awareness and action.


Challenges to Adoption

The report identifies several barriers that prevent companies from adopting responsible AI practices:

  • Quantifying risk mitigation remains difficult (cited by 29% of respondents).

  • Budgetary constraints hinder investment in responsible safeguards (15%).

  • Leadership often lacks clarity on the importance of responsible AI (15%).

These challenges suggest that many organizations are not fully committed to implementing responsible practices despite increasing regulatory pressure.


Regulatory Pressure

With new regulations such as the EU's AI Act and President Biden's executive order on safe, secure, and trustworthy AI, businesses face mounting pressure to adopt stronger safeguards. The EU's approach is particularly aggressive: non-compliance with the Act's strictest provisions can draw fines of up to 7% of global annual turnover.


Awareness Drivers

Historically, awareness of responsible AI practices was driven by public failures, such as biased algorithms, and the negative media coverage they attracted. More recently, recognition has begun to come from within organizations themselves. This growing internal focus is crucial as businesses prepare for stricter regulatory environments.


Bringing It All Together

Global Regulation

Both Baxter and PwC highlight the fragmented nature of global regulations governing AI technologies:

  • In Europe, the European Union (EU) has taken a prescriptive stance through its risk-based AI Act. The framework categorizes AI systems into tiers based on their potential risks to safety and fundamental rights. The EU's approach is stringent, emphasizing transparency and accountability, and it prohibits unacceptable-risk practices such as real-time remote biometric identification in public spaces (subject to narrow exceptions).

  • In contrast, the United States has adopted a more decentralized approach. While individual states like California have advanced targeted legislation, such as SB 1047 and its focus on high-risk AI applications, federal guidelines remain largely voluntary and industry-driven. This patchwork of state-level initiatives creates a less cohesive regulatory environment than the EU's unified framework.

  • Meanwhile, in Asia, countries like China have implemented targeted regulations that focus on specific technologies such as generative AI and algorithmic recommendations. China's approach is more specialized, addressing the unique challenges posed by these technologies while maintaining a focus on innovation.

Harmonization between these differing regulatory approaches is seen as crucial for businesses operating across borders.


Corporate Responsibility and Ethical Considerations

There’s growing recognition that ethical AI isn’t just about meeting regulatory requirements—it’s about managing long-term risks and maintaining public trust. However, many companies struggle with prioritizing these safeguards due to internal challenges such as unclear leadership priorities or lack of budget allocation. Moreover, solid data governance practices are essential since ethical AI fundamentally relies on high-quality data.


Some Additional Context on Global Regulatory Efforts

International Collaboration

International organizations such as the OECD and the G7 are working toward multilateral frameworks for harmonizing global AI governance. For instance, the G7's Hiroshima AI Process aims to align national approaches so that regulatory fragmentation does not stifle innovation or create competitive disadvantages across regions.


Regional Differences

Different regions are taking varied approaches to regulating artificial intelligence:

  • In Europe, the EU's stringent framework emphasizes transparency and accountability while prohibiting unacceptable-risk practices such as real-time remote biometric identification in public spaces.

  • In Asia, China has focused its regulatory efforts on specific technologies like generative models and algorithmic recommendations.

  • The U.S., meanwhile, continues its decentralized approach but faces growing pressure from state-level initiatives and federal executive orders aimed at setting national standards without stifling innovation.


FAQs


Q: What are "AI hallucinations"?

A: AI hallucinations occur when generative models produce information that seems plausible but is completely fabricated or inaccurate.


Q: Why is clean data so important for ethical AI?

A: Clean data ensures that mitigation techniques like Retrieval-Augmented Generation (RAG) work effectively by providing accurate information for the model to reference.


Q: How does the EU's approach to regulating AI differ from the U.S. approach?

A: The EU has adopted a strict framework through its risk-based AI Act, while the U.S. remains more decentralized, with state-led initiatives and voluntary federal guidelines.


Q: What role should corporate boards play in managing AI risks?

A: Corporate boards should actively oversee AI governance by involving diverse experts from fields like security and privacy while focusing on high-risk applications within their organizations.


Q: Why are companies struggling to adopt responsible AI practices?

A: Many companies face barriers such as difficulty quantifying risk mitigation, budgetary constraints, and unclear leadership priorities regarding the value of responsible safeguards.


Sources:

[1] https://www.emergingtechbrew.com/stories/2024/07/18/kathy-baxter-salesforce-principal-ai-ethicist-regulation

[2] https://www.salesforce.com/blog/6-lessons-learned-creating-the-role-of-ai-ethicist/

[3] https://www.emergingtechbrew.com/stories/2024/08/20/responsible-ai-safeguards-pwc-report

[4] https://profiletree.com/ai-regulations-around-the-world-comparative-study/

[5] https://keymakr.com/blog/regional-and-international-ai-regulations-and-laws-in-2024/

[6] https://www.infosecurity-magazine.com/opinions/global-ai-regulatory-cisos/

[7] https://www.csis.org/blogs/strategic-technologies-blog/ai-regulation-coming-what-likely-outcome

