
Bits With Brains
Curated AI News for Decision-Makers
What Every Senior Decision-Maker Needs to Know About AI and its Impact
Teaching AI Some Manners: A KYC Approach to Keeping Bots in Check
5/18/24
Editorial team at Bits with Brains
The rapid advancement and adoption of artificial intelligence (AI) technologies across industries have raised concerns about the potential risks and negative impacts of AI systems. A recent survey by Stanford University revealed that 52% of Americans are concerned about the negative impact of AI.

These concerns include exploitation by bad actors and unintended consequences, as well as issues such as algorithmic bias, privacy violations, and a lack of transparency and explainability. To address these challenges and ensure the responsible development and use of AI, some experts have proposed applying Know Your Customer (KYC) principles, traditionally used in the financial sector for anti-money laundering compliance, to AI governance.
KYC refers to the process of verifying the identity of clients and assessing their suitability, along with the potential risks of illegal intentions towards the business relationship. In the context of AI, a KYC-inspired approach would involve implementing mechanisms to verify and monitor the entities developing and deploying AI systems, as well as assessing the potential risks and impacts of these systems.
One key proposal is for governments to require compute providers, who supply the computational resources needed to train and run large AI models, to implement KYC schemes. This would enable greater public oversight of frontier AI development by identifying potentially problematic projects and high-risk entities. Compute providers would be required to verify the identity of their clients, keep records, and report high-risk AI development activities to authorities. This could also help close loopholes in existing export controls on sensitive AI technologies.
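To make the compute-provider proposal concrete, the screening logic might be sketched as follows. This is a minimal illustration, not any regulator's actual scheme: the FLOP reporting threshold, record fields, and disposition strings are all assumptions invented for this example.

```python
from dataclasses import dataclass, field

# Hypothetical compute threshold above which a training run must be
# reported to authorities; a real threshold would be set by regulation.
REPORT_THRESHOLD_FLOP = 1e26

@dataclass
class ClientRecord:
    """KYC record a compute provider keeps for each client."""
    client_id: str
    identity_verified: bool
    jurisdiction: str
    training_runs: list = field(default_factory=list)  # record-keeping duty

def screen_training_run(client: ClientRecord, compute_flop: float,
                        sanctioned_jurisdictions: set) -> str:
    """Return a disposition for a requested training run."""
    if not client.identity_verified:
        return "reject: identity not verified"
    if client.jurisdiction in sanctioned_jurisdictions:
        return "reject: export-control restriction"
    client.training_runs.append(compute_flop)  # retain a record of the run
    if compute_flop >= REPORT_THRESHOLD_FLOP:
        return "allow: report to authorities"
    return "allow"

client = ClientRecord("acme-ai", identity_verified=True, jurisdiction="US")
print(screen_training_run(client, 5e26, sanctioned_jurisdictions={"XX"}))
# → allow: report to authorities
```

The point of the sketch is that identity verification, record-keeping, and threshold-based reporting are separable checks, which is what lets a KYC scheme close export-control loopholes without blocking ordinary workloads.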
For individual organizations looking to implement AI, a KYC-based governance framework could involve several components:
Verification and risk assessment of AI developers and vendors to ensure they adhere to responsible AI principles and practices. This may include background checks, assessments of technical capabilities and processes, and reviews of past projects.
Implementing ongoing monitoring and auditing mechanisms to track the performance and impacts of deployed AI systems over time. This enables early detection and mitigation of issues like performance drift, unexpected behaviors, or fairness concerns. Continuous monitoring is key as AI systems can evolve and adapt based on new data.
Conducting thorough risk assessments and impact evaluations prior to deploying high-stakes AI systems. This includes defining acceptable risk thresholds, identifying affected stakeholders, and putting in place controls and contingency plans. Risk assessments should cover technical robustness, safety, privacy, transparency, fairness, and societal impacts.
Establishing clear governance structures, policies, and accountability measures around AI development and use. This may involve appointing AI ethics boards, defining roles and responsibilities, setting standards and guidelines, and implementing grievance and redress mechanisms. Governance measures should enable appropriate human oversight.
Ensuring transparency and clear communication to both internal and external stakeholders about how AI systems operate, make decisions, and what their limitations are. This is critical for building organizational and public trust in AI. Explainability measures are also important for troubleshooting and auditing.
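The ongoing-monitoring component above can be made concrete with a minimal drift check: compare a deployed model's rolling accuracy against the baseline established at deployment, and raise a flag when the gap exceeds a tolerance. The class name, window size, and tolerance here are illustrative assumptions; real deployments would calibrate them per system as part of the risk assessment.

```python
from collections import deque

class ModelMonitor:
    """Rolling monitor that flags performance drift against a baseline.

    Window size and tolerance are hypothetical defaults for illustration,
    not recommended values.
    """
    def __init__(self, baseline_accuracy: float, window: int = 100,
                 tolerance: float = 0.05):
        self.baseline = baseline_accuracy
        self.tolerance = tolerance
        self.outcomes = deque(maxlen=window)  # most recent prediction results

    def record(self, correct: bool) -> None:
        """Log whether the latest prediction was correct."""
        self.outcomes.append(1 if correct else 0)

    def drifted(self) -> bool:
        """True when live accuracy has fallen below baseline - tolerance."""
        if not self.outcomes:
            return False
        live = sum(self.outcomes) / len(self.outcomes)
        return (self.baseline - live) > self.tolerance

monitor = ModelMonitor(baseline_accuracy=0.95)
for outcome in [True] * 85 + [False] * 15:  # live accuracy drops to 0.85
    monitor.record(outcome)
print(monitor.drifted())  # → True
```

A threshold alert like this is the simplest trigger for the "early detection and mitigation" step; a production system would typically add fairness-metric checks per subgroup and an escalation path to the human owners defined in the governance structure.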
While a KYC-based approach to AI governance is promising, it is not without challenges.
Verification and ongoing monitoring can be complex and resource-intensive, especially given the rapid pace of AI development. There may be resistance from some AI developers towards perceived restrictions on innovation. Harmonizing KYC requirements across different geographies and computing platforms could also prove difficult.
However, despite these hurdles, implementing KYC principles in AI governance is a step towards more responsible and trustworthy AI. For organizations, it demonstrates a commitment to ethics and social responsibility. It can enhance public trust, mitigate brand and legal risks, and drive more sustainable value from AI investments. Regulators are also increasingly expecting enterprises to have robust AI governance and risk management practices in place.
In conclusion, C-level executives looking to leverage AI in their organizations should seriously consider adopting a KYC-inspired governance framework. This involves thoroughly vetting AI developers and solutions, implementing risk assessment and monitoring processes, establishing clear accountability and oversight measures, and maintaining transparency. By proactively governing AI through a KYC lens, enterprises can better harness the benefits of AI while navigating its challenges and risks.
Sources:
[1] https://www.holisticai.com/blog/need-for-risk-management-in-ai
[2] https://www.ibm.com/topics/ai-governance
[3] https://advertisingweek.com/how-ai-is-used-in-kyc-processes/
[4] https://arxiv.org/abs/2112.01237
[5] https://transcend.io/blog/ai-governance-framework
[6] https://fintech.global/2024/03/27/how-ai-transforms-kyc-into-a-continuous-compliance-powerhouse/
[7] https://www.robustintelligence.com/ai-risk-management
[8] https://www.techtarget.com/searchenterpriseai/definition/AI-governance
[9] https://www.energy.gov/ai/doe-ai-risk-management-playbook-airmp
[10] https://securityintelligence.com/articles/ai-governance-framework-ethics/
[11] https://airc.nist.gov/AI_RMF_Knowledge_Base/AI_RMF
[12] https://www.splunk.com/en_us/blog/learn/ai-governance.html
[13] https://c3.ai/glossary/artificial-intelligence/know-your-customer-kyc/
[14] https://kpmg.com/ae/en/home/insights/2021/09/artificial-intelligence-in-risk-management.html
[15] https://www.nist.gov/itl/ai-risk-management-framework
[16] Stanford University, HAI Artificial Intelligence Index Report 2024