
Financial institutions and their AI partners must jointly uphold safety measures, says Hsu of the Office of the Comptroller of the Currency (OCC).

Acting Comptroller Michael Hsu said AI technology makes it easier than other modern technologies to evade accountability for negative outcomes.


The world of banking and finance is exploring new territory as AI technology becomes increasingly prevalent. This shift, however, comes with its own set of challenges and risks, as highlighted by Acting Comptroller of the Currency, Michael Hsu.

Hsu warns that an increase in AI-powered fraud could lead to a broader distrust in payments and banking. He notes the proliferation of deepfakes and their potential for more significant financial impact as criminals advance in using AI. To address these concerns, the Treasury Department is seeking public comments on the use of AI in the financial sector.

Sens. Mark Warner and John Kennedy introduced legislation in December aimed at protecting markets from the potential disruptive impacts of deepfakes, trading algorithms, and other AI tools. Hsu suggested that the U.S. Artificial Intelligence Safety Institute, housed within the National Institute of Standards and Technology and supported by a consortium of more than 280 stakeholder organizations, could devise a shared responsibility framework for AI.

Hsu suggests that a shared responsibility model for AI safety could be developed along the lines of the one used in cloud computing, where providers secure the underlying infrastructure and customers secure what they build on top of it. He believes AI holds both promise and peril for financial stability, and that banks adopting AI must set up "clear and effective gates" between development phases to ensure safety.
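The "gates between phases" idea can be sketched in code. The following is a minimal illustration only, not anything prescribed by the OCC: the phase names and control names are hypothetical, and the point is simply that an AI system advances to the next deployment phase only after every named control for its current phase has been verified.

```python
# Illustrative phase-gate sketch. Phase names and controls below are
# hypothetical examples, not drawn from any regulatory guidance.

PHASES = ["pilot", "limited_rollout", "full_deployment"]

# Controls that must be verified before leaving each phase;
# the final phase has no further gate.
GATE_CONTROLS = {
    "pilot": ["model_validated", "fraud_monitoring_enabled"],
    "limited_rollout": ["audit_trail_live", "accountability_owner_named"],
}

def may_advance(current_phase: str, verified_controls: set[str]) -> bool:
    """Return True only if every gate control for the current phase is verified."""
    required = GATE_CONTROLS.get(current_phase, [])
    return all(control in verified_controls for control in required)

# A pilot with only one of its two controls in place cannot advance.
print(may_advance("pilot", {"model_validated"}))                              # False
print(may_advance("pilot", {"model_validated", "fraud_monitoring_enabled"}))  # True
```

The design choice mirrors the cloud shared-responsibility analogy: the gate definition (who must verify what) is explicit data, so a bank and its AI provider can each be assigned specific controls rather than leaving accountability diffuse.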

One example of the unexpected consequences of over-reliance on AI is the case in which Air Canada's chatbot promised a bereavement fare refund that the airline does not actually offer. Companies may also struggle to assign liability and fix problems with chatbots and other AI systems, as seen in credit underwriting decisions that are hard to explain.

Regulatory bodies like the Financial Stability Oversight Council (FSOC) and the Treasury Department are focusing on risk mitigation frameworks that promote transparency, auditability, accountability, and resilience in AI use within financial institutions. Industry-led frameworks, voluntarily adopted standards, and evolving regulatory guidance currently form the basis of managing AI risks in the sector.

Key frameworks and initiatives include the RBI’s FREE-AI Framework, Martini.ai’s Financial Autonomy Ladder, Bank of England AI Governance, and Industry Best Practices in Responsible AI. These frameworks emphasize multi-stakeholder collaboration, continuous risk management, and ethical principles.

Treasury Secretary Janet Yellen has backed this legislative push, warning that using AI in finance carries significant risks. She, like Hsu, believes that trust is crucial in banking, and AI adoption may be limited due to trust factors.

In summary, shared responsibility frameworks in banking and finance AI governance are increasingly formalized, focusing on multi-stakeholder collaboration, continuous risk management, and ethical principles. Institutions are encouraged to adopt comprehensive governance frameworks ahead of formal regulation to ensure safe, ethical, and effective AI deployment. Before pursuing the next phase of AI development, banks should ensure that proper controls are in place and accountability is established.

[1] RBI’s FREE-AI Framework: https://www.rbi.org.in/scripts/AnnouncementDisplay.aspx?Id=13232
[2] Martini.ai’s Financial Autonomy Ladder: https://www.martini.ai/resources/financial-autonomy-ladder
[3] Bank of England AI Governance: https://www.bankofengland.co.uk/prudential-regulation/ai-and-machine-learning
[4] Industry Best Practices in Responsible AI: https://aws.amazon.com/ai/responsible-ai/
[5] Operationalizing AI Risk Governance: https://www.mckinsey.com/business-functions/risk/our-insights/operationalizing-ai-risk-governance
