Why GAO Did This Study
AI generally entails machines performing tasks previously thought to require human intelligence. Its use in financial services has increased in recent years, driven by more advanced algorithms, increased data availability, and other factors. Federal financial regulators have also begun using AI tools to oversee regulated entities and financial markets.
The Dodd-Frank Wall Street Reform and Consumer Protection Act includes a provision for GAO to annually report on financial services regulations. This report reviews (1) the benefits and risks of AI use in financial services, (2) federal financial regulators’ oversight of AI use in financial services, and (3) the regulators’ AI use in their supervisory and market oversight activities. GAO reviewed studies by federal agencies, academics, industry, and other groups; examined documentation and guidance from federal financial regulators; and interviewed regulators, consumer and industry groups, researchers, financial institutions, and technology providers.
What GAO Found
Financial institutions’ use of artificial intelligence (AI) presents both benefits and risks. AI is being applied in areas such as automated trading, credit decisions, and customer service. Benefits can include improved efficiency, reduced costs, and enhanced customer experience—such as more affordable personalized investment advice. However, AI also poses risks, including potentially biased lending decisions, data quality issues, privacy concerns, and new cybersecurity threats.
Federal financial regulators primarily oversee AI using existing laws, regulations, guidance, and risk-based examinations. However, some regulators have issued AI-specific guidance, such as on AI use in lending, or conducted AI-focused examinations. Regulators told GAO they continue to assess AI risks and may refine guidance and update regulations to address emerging vulnerabilities.
Unlike the other federal banking regulators, the National Credit Union Administration (NCUA) lacks two key tools that could aid its oversight of credit unions’ AI use. First, its model risk management guidance is limited in scope and does not give NCUA examiners or credit unions sufficient detail on how to manage model risks, including risks from AI models. Developing guidance that is more detailed and covers a broader range of models would strengthen NCUA’s ability to address credit unions’ AI-related risks.
Second, NCUA lacks the authority to examine technology service providers, despite credit unions’ increasing reliance on them for AI-driven services. GAO previously recommended that Congress consider granting NCUA this authority (GAO-15-509), but as of February 2025, Congress had not done so. Such authority would enhance NCUA’s ability to monitor and mitigate third-party risks, including those posed by AI service providers.
The federal financial regulators are increasingly integrating AI into their general agency operations and their supervisory and market oversight activities, although usage varies across agencies. The regulators use AI to identify risks, support research, and detect potential legal violations, reporting errors, or outliers. Most regulators told GAO that AI outputs inform staff decisions but do not serve as the sole basis for those decisions.
Recommendations
GAO reiterates its 2015 recommendation that Congress consider granting NCUA authority to examine technology service providers for credit unions. GAO also recommends that NCUA update its model risk management guidance to encompass a broader variety of models used by credit unions. NCUA generally agreed with the recommendation.