3 October 2024
Ethan Mollick, co-director of the Generative AI Labs at Wharton and most recently the author of Co-Intelligence: Living and Working with AI, spoke with Evident data scientist Alex Inch about all things AI: the pace of adoption, when experience matters, and how to foster experimentation. Read their conversation below.
INCH: Two years ago, you said generative AI was coming “faster than most people expect.” Looking back, how does that prediction hold up?
MOLLICK: Actual end-user adoption has been insanely high; certain surveys show 65% of marketers and 60% of coders using Gen AI. But businesses don’t necessarily capture all of that value – that requires rethinking processes and approaches. If I increase everyone’s productivity by 20%... that’s awesome, but how do I as a firm actually collect on that? One way is to fire people, but if you fire people, they won’t show you how they’re using AI.
How should banks get employees to experiment with AI?
You have to incentivize it. If employees are worried that colleagues will lose respect for them for using AI, or that the bank won’t reward them for using it, or that they’ll get let go because they’ve got AI doing 90% of their tasks, then they’ll never show you how they use it. AI access helps, sharing helps, some education and training helps, as does visible support at the highest levels of the organization.
You’ve recently said that senior leaders who use AI are more effective at driving AI adoption internally. What use cases make sense for senior leaders who want to dip their toes in?
The research shows that juniors don’t know anything special about AI: they use it first, but they don’t learn specialized knowledge from using it. Seniors are actually the best placed to use AI because they have the expertise, and they can tell very clearly when the AI knows something and when it’s making it up. So the reason to use it is to understand what this thing actually does. If you just outsource that, you’re not going to get the right answers. JPMorgan has talked about how they have AI whisperers assigned to senior-level executives, whose job is to see how much of their job they can automate. But it’s less about helping senior executives, and more about them being hands-on with what this thing does.
A couple of weeks ago, we saw OpenAI release o1, a new model that can pause and think. How important is it for business?
Nobody knows. We don’t know what o1’s good or bad at yet; no one knows anything about models on release. The question I always ask is: “How is your business figuring out what o1’s useful for?” o1’s also a good indicator of where things are going. We’ve shown that you can scale inference as well as training – thinking time as well as just training time. That means it’s going to solve a lot of problems you didn’t expect it to solve initially.
We’ve started to see tech budgets tick up. Does o1’s higher cost stand in the way of the benefits?
So I wonder how much of that cost is actually going to API calls, and how much is going into projects in the IT department to build another RAG-based document search solution. There’s a difference between these two. A lot of banks are turning to their IT department and saying: “do the AI thing”. But there’s no reason the IT groups should be experts on what the AI thing is. It doesn’t work like code; it works like a person. They build the same four or five solutions every time. I think that’s very different from going deep in your bank and trying to figure out: is this useful for this analyst role? Is this useful for this regulation role? And if it’s useful, it’s probably profitable on a one-on-one basis. So there has to be experimentation.
Some of the expense will be going into data engineering and building pipelines too.
That’s the mistake. That mattered when you were doing large-scale machine learning and building your own models. When you’re using a pre-trained model like GPT-4o, why do you need data?
One use might be for RAG approaches…
RAG is one narrow use case, and probably the worst use case for AI. With RAG you still have hallucination issues, and all kinds of other really complicated problems emerge on top of them. People go to RAG because it feels safe and it feels like how you’re supposed to use AI systems. That’s not how you gain high value from this right now. So what, you’re searching over documents. Why have a RAG retrieval system when you can build a normal retrieval system, or at least one that does more than chunking – something that builds knowledge graphs for you? A lot of the consulting and other dollars carry a huge overhang from the Big Data and AI/machine learning work of the last 10 years, which is valuable but very different from what you can do with a large language model.
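[Editor’s note: for readers less familiar with the term, here is a minimal sketch of the retrieval-augmented generation (RAG) pattern Mollick is describing – chunk documents, retrieve the most relevant chunks, and hand them to a model. The embed() and generate() helpers are illustrative placeholders, not any particular bank’s or vendor’s stack.]

# Illustrative RAG sketch; embed() and generate() are hypothetical placeholders.
from dataclasses import dataclass

@dataclass
class Chunk:
    text: str
    embedding: list[float]

def embed(text: str) -> list[float]:
    raise NotImplementedError("plug in an embedding model here")

def generate(prompt: str) -> str:
    raise NotImplementedError("plug in a large language model here")

def chunk_documents(docs: list[str], size: int = 500) -> list[Chunk]:
    # Naive fixed-size chunking: the step Mollick suggests is often too crude on its own.
    return [Chunk(doc[i:i + size], embed(doc[i:i + size]))
            for doc in docs for i in range(0, len(doc), size)]

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = (sum(x * x for x in a) ** 0.5) * (sum(y * y for y in b) ** 0.5)
    return dot / norm if norm else 0.0

def answer(question: str, chunks: list[Chunk], k: int = 3) -> str:
    # Retrieve the k most similar chunks, then ask the model to answer from that context only.
    q = embed(question)
    top = sorted(chunks, key=lambda c: cosine(q, c.embedding), reverse=True)[:k]
    context = "\n\n".join(c.text for c in top)
    return generate(f"Answer using only this context:\n{context}\n\nQuestion: {question}")

Even with retrieved context, the generate() step can still hallucinate, which is the failure mode Mollick flags above.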
Does that mean it won’t ever make sense for a bank to pretrain its own model? Look at what Bloomberg did with BloombergGPT, which didn’t pan out.
BloombergGPT was a great model. It’s a 200-zettaflop model, so it was trained on roughly 2×10²³ FLOPs of compute. It was good at what it did. The problem was GPT-4, which is a 20-yottaflop model, 100 times bigger, and just better at everything than a smaller model. So it wasn’t that BloombergGPT was terrible. They spent probably $10m on it. It’s just that the bigger model always wins.
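[Editor’s note: the unit conversion behind the “100 times bigger” comparison, taking Mollick’s figures at face value:]

\[
200~\text{zettaFLOPs} = 200 \times 10^{21} = 2 \times 10^{23}~\text{FLOPs},\qquad
20~\text{yottaFLOPs} = 20 \times 10^{24} = 2 \times 10^{25}~\text{FLOPs},\qquad
\frac{2 \times 10^{25}}{2 \times 10^{23}} = 100.
\]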
That sounds like we’re headed towards even bigger models. Is there a place for smaller, fine-tuned models?
Fine-tuned models are fine for some stuff, but not for intellectual work. I’m sure you can use smaller fine-tuned models for narrow cases, for customer service and other kinds of stuff. But the truth is, we have to decide: is it worth paying six cents an hour for a PhD student to do the work, or a tenth of a cent to have a high school student do it? Maybe in some cases we want the high school student, but most of the time, for high-end functions, the intelligence matters.
In your book you discuss GenAI devaluing written references. Do you have any other thoughts on subtle ways that GenAI will disrupt the social fabric at banks?
Banks are being hollowed out from the inside in two major ways right now. One is that their talent pipeline just broke, because it’s all an informal apprenticeship system, and now all your interns are using AI, and all the managers are using AI, and so nobody’s learning anything. The second big thing is that we judge the quality of someone by the quality of their writing, the amount of their writing, and the lack of errors in their writing. Now everyone can produce brilliant writing. What does that mean? What does it mean when all your internal documents are AI-created? We have a lot of questions to answer.