Sign up to our newsletter

Data-driven insights and news
on how banks are adopting AI

How to hook your banker on AI

Source: Adobe Firefly

10 July 2025

TODAY’S BRIEF

Many companies are touting expanded access to AI tools for staff these days. Of course, what matters is if they use them. Today, we dive into different approaches banks have taken to get their employees hooked.

Also, what’s all the hullabaloo about “context engineering,” banks’ research flex at ICML and a deep dive into the latest fraud detection use cases.

People mentioned in this edition: Chris Patterson, Jeff McMillan, José Manuel de la Chica, Anabel Almagro, Anthony Miller, Saloni Sharma and others.

Plus these banks: CIBC, Morgan Stanley, NatWest, Santander, Westpac, BBVA, CommBank and others.

The Brief is 2,611 words, a 7-minute read. Check it out online. If you were forwarded the Brief, you can subscribe here. Write us at [email protected].

– Alexandra Mousavizadeh & Annabel Ayles

TREND LINES

AI IN THE VEIN

INVITATION TO DANCE

Access to AI assistants, as reported in bank announcements, varies at select banks.


Source: Annual reports, bank announcements

Banks are rolling out AI tools, and aren’t shy about saying so. Last month, Goldman Sachs gave all its employees access to a Gen AI assistant. CIBC’s Gen AI platform went wide in May. In April, NatWest gave Gen AI tools to 99% of its employees.

The hard part, alas, is uptake. If you can’t get your workers to adopt the tools you’re paying for – well, the tree might as well not have fallen.

In these early days of scaling AI, banks are taking different approaches to achieve the same end goal of greater use.

Call one the wide approach and the other narrow.

Going wide is believing that value comes from letting everyone experiment – training all employees on how to use AI and democratizing general-use platforms so everyone has a chance to reap the productivity benefits. It seems to be working for Bank of America, which in April reported that 90% of employees were using Erica. But it has its challenges: Not everyone learns at the same pace or feels the same urgency to do so. “There is a two-to-three-month learning curve for people to kind of figure out how to get value from it,” Chris Patterson, head of enterprise AI platforms and solutions at CIBC, told us.

The Toronto-based bank’s enterprise-wide platform currently has 6,500 daily users – roughly 14% of employees. To hit the goal of 10,000 by year’s end, Patterson is incentivizing users to build custom workflows and publish them to an AI marketplace. “Others can subscribe to them, just like I’m browsing an app store,” he said, noting that these features will let others quickly tailor the platform to their own needs and slash the learning curve. “This is how I plan to scale adoption.”

Going narrow means betting that phased rollouts of task-specific AI tools will deliver the best engagement. BBVA initially handed out 3,300 ChatGPT licenses and got its most advanced users to lead peer-to-peer workshops to encourage adoption. In November, it said 80% of license holders were using the tool at least once a week. By May, 80% were using it daily, and the bank upped its license count to 11,000.

At Morgan Stanley, the focus was getting tools into the right hands. You “have to give it to people who understand what good looks like,” Jeff McMillan, head of firmwide AI at Morgan Stanley, said on a podcast last month. After getting some 20,000 pieces of feedback about the assistant and using it to improve the tool, the bank says 98% of its advisor teams in wealth management now use it daily. “It can sometimes take weeks, months, and sometimes even six to nine months to deploy these things,” McMillan said. “Even in a world where it maybe only takes a week to build it.”

Bottom Line: There’s more than one way to get enterprise-wide uptake, and finding the right balance of education, incentives and organizational structure to encourage AI use is just as important as actually building the tools.

NOTABLY QUOTABLE

"In the case of autonomous agents, hallucinations don’t just lead to bad answers, they can trigger incorrect or even dangerous actions. Until we are confident these tools will not act irrationally, we must keep humans in the loop."

- Marco Argenti, CIO at Goldman Sachs, writing in Fortune, July 3

FROM THE EVIDENT AI INDEX

BANKS PUSH LLMS TO THE LIMIT

Our analysis of new research from banks shows Wall Street has one big thing on its mind when it comes to AI: How to get LLMs to take on more ambitious tasks.

As AI’s top minds descend on Vancouver next week for ICML – one of the world’s top AI research conferences – the banks there are focusing on privacy, benchmarking and efficiency of LLMs as they ready the tech to handle more challenging workloads.

Capital One, RBC and Morgan Stanley are all sponsors of the event, and join JPMorganChase and UBS in publishing papers there.

JPMC leads banks on accepted papers with nine, three of which have to do with privacy and security. One shows how the bank experiments with ways to keep interactions with LLMs fully encrypted – meaning that even leaked data is near impossible to crack. Five papers from financial institutions in this year’s research pool touch on new ways to judge how models perform – an enabler of agentic AI. And efficiency was the subject of four.

Conferences aren’t just about papers though. Capital One is hosting two of the event’s three Expo Workshops – coveted slots used to show off AI chops (and attract top talent). In one, the bank is demoing “MACAW,” the multi-agentic workflow it uses to power its Chat Concierge tool (see: “Why agentic is so hard,” The Brief, June 26), and “Grembe,” its method of analyzing transaction data to predict fraud and model customer behavior. In the other, it’s showcasing how it evaluates LLMs and deals with uncertainty in AI.

Top line: Banks submitted more papers than ever to ICML this year. As they continue to use these events to recruit the best minds into their research departments, that’s a trend that’s likely to continue.

MARK YOUR CALENDAR

The Insurance Use Case Tracker launches July 16

COMING NEXT WEEK: The Insurance Use Case Tracker is a comprehensive inventory of 100+ AI use cases announced by the world’s largest insurers in North America and Europe. It includes a detailed description of each use case, along with the firm’s reported ROI, the type of AI deployed and information on key partners or vendors.

Evident members can access the full database of AI use cases in insurance on Wednesday 16 July. Until then, check out our ranking of insurance companies and our full Key Findings Report here.

USE CASE CORNER

CAUGHT RED HANDED

WHERE THE USE CASES ARE

In the last year, the number of customer analytics, fraud detection and process automation use cases at the 70-plus banks we track grew more than 80%.

A chart showing the growth of use case focus areas in the last year

Source: Evident Use Case Tracker

In those olden days BC – Before ChatGPT – banks widely deployed machine learning to combat fraud. Now, as the world moves from generative to agentic to maybe artificial general intelligence (about which even the model makers seem unsure, see more in the “In the News” section below), anti-fraud is hot again.

As the chart above shows, fraud detection was the second-fastest growing category of use cases rolled out in the last year by banks we track. One reason why is that certain jurisdictions, like the U.K. and Australia, turned up the regulatory heat on banks to protect consumers from fraud.

The larger reason is the AI arms race between fraudsters and banks. Swindlers are using the latest AI tools to attack banks and their customers, and banks have to match them with similar sophistication. It’s no longer enough to try to spot anomalies with basic AI. The three use cases we spotlight this week take novel approaches to fighting fraud.


#1: CRISIS COMMUNICATIONS

Use Case: Real-time scam detection on calls
Line of Business: Retail & personal banking
Vendor: Pega
Bank: Westpac

Why it’s interesting: The Australian bank’s real-time Gen AI tool listens in when customers call the bank worried they may have been defrauded, and prompts representatives with follow-up questions to help determine whether they have, along with suggestions for what to do next. Separately, if a customer calls in asking to take drastic action, the AI can recognize language patterns indicating that a fraudster may be actively coaching them, and alert the representative to what’s happening.

How it works: The tool creates a transcript as the conversation happens and flags to representatives anything it deems suspicious. Based on how the call progresses, it prompts the call center agent to ask certain questions to determine the extent to which a customer is still at risk from the scam.
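Pega and Westpac haven’t published implementation details, but the flagging step can be sketched as a pattern matcher running over live transcript chunks. The phrase lists and suggested prompts below are invented for illustration; a production system would use a trained classifier rather than hand-written rules.

```python
import re

# Illustrative scam-indicator phrases mapped to suggested follow-up
# prompts for the call-center representative (hypothetical examples).
SCAM_PATTERNS = {
    r"gift card|wire (it|the money) now|don't tell (the|your) bank":
        "Ask who requested the payment and how they made contact.",
    r"remote access|anydesk|teamviewer":
        "Ask whether anyone asked them to install software.",
    r"(he|she|they) (is|are) on the other line|told me to say":
        "Possible live coaching by a fraudster - escalate to the fraud team.",
}

def flag_transcript_chunk(chunk: str) -> list[str]:
    """Return suggested prompts for any suspicious phrases found
    in the latest slice of the live call transcript."""
    chunk = chunk.lower()
    return [prompt for pattern, prompt in SCAM_PATTERNS.items()
            if re.search(pattern, chunk)]

alerts = flag_transcript_chunk(
    "He said I should buy gift cards and not tell the bank")
```

Each transcript slice is checked as it arrives, so the representative sees prompts mid-call rather than after the fact.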

Impact: The tool is still in pilot phase, but it’s helping the bank deal with fraud more efficiently and even stop fraud as it happens, the bank says. Representatives using it are “stopping more scams – and doing so more quickly – than those not using it,” Westpac CEO Anthony Miller said in the bank’s release.


#2: HONEYPOTS

Use Case: Conversational AI against scammers
Line of Business: Retail & personal banking
Vendor: Apate.ai
Bank: CommBank

Why it’s interesting: CommBank, also Sydney-based, is going on the offensive against scammers. Instead of using AI to respond to customers who have already been scammed, the bank’s Gen AI tool ties up wannabe scammers in long phone calls and text conversations, gathering information that it passes on to the bank’s security team.

How it works: The bank has “thousands of AI-powered bot profiles” that try to attract scammers. Once they get one on the line, the bots – which have varied genders, tones of voice and ages and use Australian slang – aim to keep them occupied for as long as possible while they feed intel about their methods back to the bank.

Impact: The bank says the bots reduce the risk of scammers preying on vulnerable people, and that the data collected during calls and texts lets it identify scam trends and counter them in its customer communications and products.


#3: NO PEEKING

Use Case: Discreet Mode
Line of Business: Retail & personal banking
Vendor: n/a
Bank: BBVA

Why it’s interesting: The Spanish bank’s tool gives customers privacy even when they’re banking in public, automatically hiding sensitive information from anyone who might be looking over their shoulder.

How it works: Once a user allows the bank’s app to access the camera, it detects when there’s more than one pair of eyes looking at the screen. When that happens, the app automatically hides sensitive account information and card balances.
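BBVA hasn’t disclosed how the detection works under the hood, but the decision logic reduces to a face count toggling a masked view. A minimal sketch, assuming the app already gets a per-frame face count from the camera pipeline:

```python
def render_balance(balance: str, faces_detected: int) -> str:
    """Show the balance only when at most one face is looking at the
    screen; otherwise mask it character for character. The threshold
    of one face is illustrative, not BBVA's published behavior."""
    return balance if faces_detected <= 1 else "•" * len(balance)
```

The same check would run on every camera frame, so the figures reappear as soon as the second onlooker moves away.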

Impact: The bank has only just rolled the feature out, but says it should offer some peace of mind while customers bank on the go.

Want to know more about the specific ways banks are rolling out AI? Check out our Use Case Tracker – the inventory of all the AI use cases announced by the world’s largest banks available to members.

ABOUT EVIDENT

Evident is the intelligence platform for AI adoption in financial services. We help leaders stay ahead of change with trusted insights, benchmarking, and real-time data through our flagship Banking Index, our new Insurance Index, Insights across Talent, Innovation, Leadership, Transparency and Responsible AI pillars, a real-time Use Case Tracker, community and events. Watch our latest roundtable exploring the insights from our new Insurance Index, and get in touch to hear more about how Evident can help your business adopt AI faster.

WELCOME TO THE JARGON

CONTEXTUAL PROMPTS

In this new segment, we translate and rate the latest AI vernacular.

Out with “prompt engineering,” declared AI influencer Andrej Karpathy this past week, seconding the new consensus. In its place …

Definition of context engineering

Some further exegesis is required. Context engineering is the job of managing an LLM’s “context window” – the information it reads before replying to a query. Prompt engineering was like handing your LLM a map and directions. But the routes to the right answers change over time, and context engineering is like equipping your LLM with a GPS that reroutes as policies, customers or priorities shift. Teaching LLMs how and when to dip into different sources of information makes more sophisticated uses (read: agentic) possible.

Are banks going to hire context engineers? Unlikely. But the phrase’s recent ubiquity shows that practitioners are, at the very least, thinking about new ways to design AI systems. As Santander’s José Manuel de la Chica put it: “Are you still polishing prompts, or are you ready to engineer context and design the stage where AI will truly perform?”

Our verdict: proceed with caution.

TALENT MATTERS

EURO SUMMER

Anabel Almagro joined UniCredit as the bank’s chief data & AI officer. Among her responsibilities will be increasing the accessibility of data across the business and leveraging the bank’s partnership with Google Cloud to build and scale new AI tools, a bank spokesperson told us. Her time as chief data officer at both Deutsche Bank and ING gives her the right experience in transforming data quality and operations and fostering a data-driven culture, the bank said.


Deutsche Bank hired Saloni Sharma as its head of data strategy and governance. It’s a return to banking for Sharma, who most recently was COO of data, AI and cloud computing at BT after previous stints at Credit Suisse, RBS and NatWest.


JPMorganChase is hiring a head of its new AI Accelerator, a team that will be embedded in different business units and is tasked with scaling “the most impactful and high-priority AI programs across the CIB.” The role reports to Daniele Magazzeni, chief analytics officer for the commercial and investment bank.


Capital One is looking for a “distinguished AI engineer” (generative AI, agentic frameworks). The bank says a big part of the job is “standardizing and automating agentic workflows.” Capital One is one of the only banks to have a customer-facing agentic tool (see: “Why agentic is so hard,” The Brief, June 26).

IN THE NEWS

THREE STORIES TO DRIVE AI CONVERSATION

The foundational model makers used to talk in near-mystical terms about their search for AGI. Never mind all that. Now those same people seem more interested in solving niche business problems with AI. Chalk that up to the difficulties of AGI and the appeal of grubby lucre. OpenAI this week announced the launch of a consulting business, following in the footsteps of Palantir by creating engineering teams that will be embedded in outside businesses. Other AI companies are hot on enterprise sales too: OpenAI alum Mira Murati’s new $10 billion startup Thinking Machines Lab, for example, is focusing on building products for companies. The marketplace for AI products for business is bound to become more vibrant — that should be good news for customers of AI vendors.


Sakana AI is taking model agnosticism to a new level. The AI lab gets LLMs to work together by letting them test and decide on their own which part of a task each is best suited for. Toggling models for different AI tools is en vogue at banks (see: “Banks play model field,” The Brief, May 29). A way to swap models out in the middle of a task could let businesses increase performance even more.


JPMorganChase is linking up with Miami-based strategic advisory group Consulting IQ to provide a Gen AI consulting platform to small and mid-sized businesses that might otherwise not be able to afford such services.

CODA

CLOSING THAT TRUST DEFICIT

How can any business – especially those dealing with people's money – get its customers to trust AI more?

It’s not a hypothetical. Customers are down on this emerging tech. Fewer than half of people around the globe trust AI, according to a recent KPMG survey. It’s the same in banking: Fewer than half of Americans said they’d trust AI tools to manage their investments, a survey from TD Bank showed last month.

Government regulations and guardrails may be part of the answer to the trust deficit. But for better or worse, the mood has turned against interference on AI – in the U.S., and even in the EU, the world’s most regulatory trigger-happy environment.

Others, however, are coming up with potential solutions of their own. Anthropic this week unveiled the Frontier Model Transparency Framework, a set of safety and security requirements it wants model makers (and eventually governments) to buy into. The International Organization for Standardization (ISO) has been pushing for something similar with ISO 42001, its version of a gold star for AI governance for any business that makes or uses AI.

A year ago, banks biting on (and paying for) a voluntary disclosure scheme seemed far-fetched. Times are changing. “We’ll see more financial institutions adopting the standard,” said James Kavanagh, who led Amazon’s 12-month effort to become 42001-certified. “Because customers are expecting it.”

WHAT'S ON

COMING UP

Sun 13 July - Sat 19 July
ICML, Vancouver

Tues 15 - Weds 16 July
Momentum AI, San Jose

Mon 11 - Weds 13 August
AI4, Las Vegas

THE BRIEF TEAM

Alexandra Mousavizadeh | Co-founder & CEO | [email protected]

Annabel Ayles | Co-founder & co-CEO | [email protected]

Colin Gilbert | VP, Intelligence | [email protected]

Andrew Haynes | VP, Innovation | [email protected]

Alex Inch | Data Scientist | [email protected]

Gabriel Perez Jaen | Research Manager | [email protected]

Matthew Kaminski | Senior Advisor | [email protected]

Kevin McAllister | Senior Editor | [email protected]

Sam Meeson | AI Research Analyst | [email protected]