FIS and Anthropic Bring Agentic AI to AML Investigations

FIS and Anthropic are teaming up to develop a Financial Crimes AI Agent designed to help banks accelerate anti-money-laundering investigations and prioritize higher-risk cases.
The new agent will use Anthropic’s Claude models as the reasoning layer while operating within FIS’s banking technology environment. According to the announcement, the system is intended to reduce AML investigation timelines from hours to minutes by automatically gathering evidence across a bank’s core systems, assessing activity against known financial-crime typologies, and escalating the most relevant cases for investigator review.
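The gather-assess-escalate flow described above can be sketched as a simple triage pipeline. Everything below is an illustrative assumption: the function names, the typology rules, and the escalation threshold are invented for this sketch and are not FIS or Anthropic APIs; a real deployment would query core banking systems and use an LLM-based reasoning layer rather than hard-coded rules.

```python
from dataclasses import dataclass

@dataclass
class Alert:
    alert_id: str
    customer_id: str

def gather_evidence(alert: Alert) -> dict:
    """Stand-in for pulling records from a bank's core systems.

    A production agent would query transaction, KYC, and sanctions
    systems; here we return fixed sample data for illustration.
    """
    return {
        "transactions": [9500, 9700, 9800],  # amounts just under a 10k reporting line
        "accounts_opened_last_30d": 3,
    }

# Each rule maps a known financial-crime typology to a simple check
# over the gathered evidence (hypothetical rules, for illustration).
TYPOLOGY_RULES = {
    "structuring": lambda ev: all(9000 <= t < 10000 for t in ev["transactions"]),
    "rapid_account_opening": lambda ev: ev["accounts_opened_last_30d"] >= 3,
}

def assess(evidence: dict) -> list[str]:
    """Return the names of the typologies the evidence matches."""
    return [name for name, rule in TYPOLOGY_RULES.items() if rule(evidence)]

def triage(alert: Alert, escalation_threshold: int = 1) -> dict:
    """Gather evidence, assess it, and flag the case for review if needed."""
    evidence = gather_evidence(alert)
    matched = assess(evidence)
    return {
        "alert_id": alert.alert_id,
        "matched_typologies": matched,
        "escalate": len(matched) >= escalation_threshold,
    }

result = triage(Alert("A-001", "C-123"))
print(result)
```

Note that even in this toy version the final step only flags the case; consistent with the article's framing, the escalation surfaces the case for an investigator rather than closing it automatically.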
BMO and Amalgamated Bank are expected to be among the first institutions to deploy the technology, with broader availability planned for the second half of 2026.
The partnership reflects a broader shift in financial services from AI tools that simply assist employees toward agentic systems that can take on structured workflows under governance. FIS will provide the data platform, governance layer, deployment infrastructure, and client relationships, while Anthropic’s Claude models will support the agent’s reasoning capabilities. Anthropic’s Applied AI team and forward-deployed engineers are also working with FIS to co-design the agent and help transfer knowledge so FIS can develop additional agents independently over time.
Stephanie Ferris, CEO and president of FIS, described the move as a step toward “AI that acts, not just assists,” positioning the Financial Crimes AI Agent as an early example of how banks may adopt governed, enterprise-grade AI agents for high-stakes operations.
The announcement comes as banks face growing pressure to modernize financial-crime operations, reduce manual review burdens, and improve detection quality. Industry analysis has highlighted AML and KYC as areas where agentic AI could help address fragmented data, low automation rates, and time-consuming investigative workflows.
However, the success of such systems will depend not only on speed, but also on explainability, auditability, model governance, and the role of human investigators in final decision-making. In regulated banking environments, AI agents will need to demonstrate that they can improve operational efficiency without weakening accountability or oversight.
Reference: Finextra
