Wells Fargo CIO: AI and machine learning will drive financial services forward


It’s simple: In financial services, customer data is what makes it possible to offer the most relevant services and advice.

But often people use different financial institutions according to their needs – their mortgage with one; their credit card with another; their investments, savings and checking accounts with yet another.

And in the financial sector, more than in others, institutions are notoriously locked up. Largely because the industry is so competitive and highly regulated, there hasn’t been much incentive for institutions to share data or collaborate in an ecosystem.

Customer data is deterministic (i.e., relying on first-person sources), so with customers “living with multiple parties,” financial institutions can’t form a precise picture of their needs, said Chintan Mehta, CIO and head of digital technology and innovation at Wells Fargo.

“Fragmented data is actually harmful,” he said. “How do we solve that as an industry as a whole?”

While Mehta and his team advocate for ways to solve this customer data challenge, they are also steadily pursuing artificial intelligence (AI) and machine learning (ML) initiatives to accelerate operations, streamline services and improve customer experiences.

“It’s not rocket science here, but the hardest part is getting a good idea of a customer’s needs,” Mehta said. “How do we actually get a complete customer profile?”

A range of AI initiatives for financial services

While the 170-year-old multinational financial services company competes in an estimated $22.5 trillion industry that represents about a quarter of the global economy, Mehta’s team is advancing efforts in smart content management, robotics and intelligent automation, distributed ledger technology, advanced AI and quantum computing.

Mehta also leads Wells Fargo’s academic and industry research partnerships, including with the Stanford Institute for Human-Centered Artificial Intelligence (HAI), the Stanford Platform Lab and the MIT-IBM Watson Artificial Intelligence Lab.

In its work, Mehta’s team relies on a range of AI and ML tools: traditional statistical models, deep learning networks, and logistic regression testing (used for classification and predictive analytics). They apply a variety of cloud-native platforms, including Google and Azure, as well as homegrown systems (based on data location).
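To make the logistic regression piece concrete, here is a minimal sketch of binary classification by gradient descent in pure Python. The feature and data are hypothetical, invented purely for illustration; this is not Wells Fargo’s model, just the standard technique the article names.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_logistic_regression(X, y, lr=0.1, epochs=1000):
    """Fit weights for binary classification by stochastic gradient descent."""
    n_features = len(X[0])
    w = [0.0] * n_features
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            pred = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
            err = pred - yi  # gradient of the log-loss w.r.t. the logit
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

def predict(w, b, xi):
    """Classify a new observation: 1 if the predicted probability >= 0.5."""
    return 1 if sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b) >= 0.5 else 0

# Toy, hypothetical data: a single normalized feature and a binary label
X = [[0.1], [0.2], [0.8], [0.9]]
y = [0, 0, 1, 1]
w, b = train_logistic_regression(X, y)
```

In practice a bank would use a vetted library implementation rather than hand-rolled gradient descent, but the mechanics above are what “logistic regression for classification and predictive analytics” refers to.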

One technique they’re using, Mehta said, is long short-term memory (LSTM). This recurrent neural network architecture uses feedback connections and can process both individual data points and entire sequences of data. His team applies LSTM in natural language processing (NLP) and spoken language understanding to extract intent from phrasing. One example is complaint management: extracting “specific targeted summaries” from complaints to determine the best course of action and respond quickly, Mehta explained. NLP techniques are also applied to free-form requests on websites, which carry more context than drop-down menu selections.
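The core of an LSTM is a gated cell update. The sketch below implements a single scalar LSTM cell step in pure Python — a toy with fixed, untrained parameters, meant only to show how the forget, input and output gates maintain the cell state across a sequence.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def lstm_step(x, h_prev, c_prev, p):
    """One step of a scalar LSTM cell: gates decide what to forget,
    what to write into the cell state, and what to expose as output."""
    f = sigmoid(p["wf"] * x + p["uf"] * h_prev + p["bf"])    # forget gate
    i = sigmoid(p["wi"] * x + p["ui"] * h_prev + p["bi"])    # input gate
    o = sigmoid(p["wo"] * x + p["uo"] * h_prev + p["bo"])    # output gate
    g = math.tanh(p["wg"] * x + p["ug"] * h_prev + p["bg"])  # candidate value
    c = f * c_prev + i * g   # new cell state (long-term memory)
    h = o * math.tanh(c)     # new hidden state (what the next layer sees)
    return h, c

# Run a toy sequence through the cell with arbitrary fixed parameters
params = {k: 0.5 for k in
          ["wf", "uf", "bf", "wi", "ui", "bi", "wo", "uo", "bo", "wg", "ug", "bg"]}
h, c = 0.0, 0.0
for x in [1.0, -1.0, 0.5]:
    h, c = lstm_step(x, h, c, params)
```

Real intent-extraction systems stack many such cells over learned word embeddings; the feedback of `h` and `c` from step to step is what lets the network use earlier words in a sentence when interpreting later ones.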

Traditional deep learning techniques such as feedforward neural networks – where information moves in only one direction, from input to output – are applied for basic image and character recognition. Meanwhile, deep learning techniques such as convolutional neural networks — specifically designed to process pixel data — are being used to analyze documents, Mehta said.

The latter helps to verify certain aspects of submitted scanned documents and to analyze images within those documents to ensure that they are complete and contain the expected features, content and annotations. (For example, a specific type of document, such as an account statement, might be expected to have six attributes based on the input provided, but only four are detected, flagging the document for attention.) All in all, this helps streamline, simplify and accelerate various processes, Mehta said.
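The completeness check described in the parenthetical can be sketched as a simple post-processing step downstream of the vision model. The document type and attribute names below are hypothetical placeholders, not Wells Fargo’s actual schema.

```python
# Hypothetical post-processing after a CNN document model: compare the
# attributes the model detected against those expected for the document type.
EXPECTED_ATTRIBUTES = {
    "account_statement": {
        "account_number", "statement_period", "opening_balance",
        "closing_balance", "transaction_list", "bank_logo",
    },
}

def review_document(doc_type, detected_attributes):
    """Flag a scanned document for human attention when expected
    attributes are missing from the model's detections."""
    expected = EXPECTED_ATTRIBUTES[doc_type]
    missing = expected - set(detected_attributes)
    return {"complete": not missing, "missing": sorted(missing)}

# Six attributes expected, only four detected -> document gets flagged
result = review_document(
    "account_statement",
    ["account_number", "statement_period", "opening_balance", "transaction_list"],
)
```

The point of the design is that the neural network only has to answer “what is present?”; the business rule about what *should* be present stays in auditable, declarative code.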

For upcoming initiatives, the team is also leveraging the serverless computing service AWS Lambda and applying transformer neural network models — which are used to process sequential data, including natural language text, genome sequences, audio signals, and time series data. Mehta also plans to increasingly incorporate random forests into ML pipelines — a supervised learning technique that uses multiple decision trees for classification, regression, and other tasks.
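The defining operation inside a transformer is scaled dot-product attention: every position in a sequence computes a weighted average over all positions. A minimal pure-Python sketch, using tiny made-up token vectors rather than learned embeddings:

```python
import math

def softmax(xs):
    m = max(xs)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def scaled_dot_product_attention(Q, K, V):
    """Core transformer operation: each query row attends over all key
    rows and returns a softmax-weighted average of the value rows."""
    d_k = len(K[0])
    out = []
    for q in Q:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d_k)
                  for k in K]
        weights = softmax(scores)
        out.append([sum(w * v[j] for w, v in zip(weights, V))
                    for j in range(len(V[0]))])
    return out

# Toy self-attention: three 2-d token embeddings attend to each other
tokens = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
attended = scaled_dot_product_attention(tokens, tokens, tokens)
```

Because the attention weights are a softmax, each output row is a convex combination of the value rows — which is why transformers handle long-range dependencies in text, audio and time series without the step-by-step recurrence of an LSTM.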

“This is an area that will help most financial institutions move forward,” Mehta said.

Optimize, accelerate, in the midst of regulation

A key challenge facing Mehta and his team is accelerating the deployment of AI and ML in a highly regulated industry.

“If you work in an unregulated industry, the time it takes to assemble a dataset of features, build a model on it, and put it into production is relatively short,” Mehta said. In a regulated industry, by contrast, each stage requires external risk assessment and internal validation.

“We lean more on statistical models when we can,” Mehta said, “and when we develop large neural network-based solutions, this is thoroughly explored.”

He said three independent groups assess and challenge models — a first-line independent risk group, a model risk management group, and an audit group. These groups build separate models to create independent data sources; apply post hoc processes to analyze the results of experimental data; validate that datasets and models fall within “the correct range”; and apply techniques to challenge them.

Mehta’s team deploys an average of 50 to 60 models per year, always keeping in mind the champion/challenger framework. This involves continuously monitoring and comparing multiple competing models in a production environment and evaluating their performance over time. The technique helps determine which model produces the best results (the “champion”) and the runner-up (the “challenger”).
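A champion/challenger setup can be reduced to a simple rolling comparison of model outcomes on live traffic. The class below is a hypothetical sketch of that monitoring loop — the names and window size are invented, and real deployments would track richer metrics than raw accuracy.

```python
from collections import deque

class ChampionChallenger:
    """Hypothetical monitor: two models score the same production traffic,
    and rolling accuracy over a fixed window decides which one leads."""

    def __init__(self, window=100):
        self.history = {"champion": deque(maxlen=window),
                        "challenger": deque(maxlen=window)}

    def record(self, model, correct):
        """Log whether a model's prediction on a live case was correct."""
        self.history[model].append(1 if correct else 0)

    def accuracy(self, model):
        h = self.history[model]
        return sum(h) / len(h) if h else 0.0

    def leader(self):
        """Promote the challenger only if it strictly beats the champion."""
        if self.accuracy("challenger") > self.accuracy("champion"):
            return "challenger"
        return "champion"

monitor = ChampionChallenger()
for outcome in [1, 1, 0, 1]:
    monitor.record("champion", outcome)     # 3 of 4 correct
for outcome in [1, 1, 1, 1]:
    monitor.record("challenger", outcome)   # 4 of 4 correct
```

Keeping both models scoring the same traffic is what lets the comparison stay fair: the challenger is judged on the exact cases the champion saw, not on a held-out offline sample.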

The company always has something in production, Mehta said, but the goal is to continuously reduce production time. His department has already made progress in that regard, reducing the AI modeling process – discovery to market – from over 50 weeks to 20 weeks.

It’s a question of “How can you optimize that entire end-to-end flow and automate it as much as possible?” said Mehta. “It’s not about a specific AI model. It’s generally like, ‘How much muscle memory do we have to market these things and add value?’”

He added that “the value of ML will be specifically around use cases that we haven’t even thought of yet.”

Encouraging dialogue in financial services

As a whole, the industry will also greatly benefit from bridging the digital expanse between large and small players. Collaboration, Mehta said, can help advance “intelligent insights” and take the industry to the next level of interacting with customers.

This can be achieved, Mehta said, through capabilities like secure multiparty computation and zero-knowledge proof platforms — which don’t yet exist in the industry today.

Secure multiparty computation is a cryptographic process that distributes a computation across multiple parties while keeping each party’s input private, so that no individual party can see the others’ data. Similarly, a zero-knowledge proof is a cryptographic method that allows one party to prove to another that a particular claim is true without revealing additional (potentially sensitive) information.
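The simplest building block of secure multiparty computation is additive secret sharing, sketched below in pure Python. The scenario (institutions jointly summing private values) and the numbers are illustrative only; production MPC protocols add authenticated channels and malicious-party protections far beyond this toy.

```python
import random

MODULUS = 2**61 - 1  # share arithmetic is done modulo a large prime

def share(secret, n_parties):
    """Split a secret into n random shares that sum to it (mod MODULUS).
    Any subset of fewer than n shares reveals nothing about the secret."""
    shares = [random.randrange(MODULUS) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % MODULUS)
    return shares

def reconstruct(shares):
    """Recombine a complete set of shares into the original value."""
    return sum(shares) % MODULUS

# Three hypothetical institutions jointly compute a total without any of
# them revealing its own private value to the others.
private_values = [120, 340, 95]
all_shares = [share(v, 3) for v in private_values]
# Each party locally sums the one share it received from every institution...
partial_sums = [sum(col) % MODULUS for col in zip(*all_shares)]
# ...and only the combination of all three partial sums yields the total.
total = reconstruct(partial_sums)
```

Because addition commutes with the sharing, the parties compute the aggregate on shares alone — exactly the property that would let institutions collaborate on “intelligent insights” without exposing raw customer data.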

By building out such capabilities, institutions can securely collaborate and share information without privacy or data loss issues, while at the same time competing appropriately in an ecosystem, Mehta said.

He predicted that within five years the industry will have a stronger hypothesis about collaboration and the use of such advanced tools.

Likewise, Wells Fargo maintains an ongoing dialogue with regulators. As a positive sign, Mehta has recently received external requests from regulators about AI/ML processes and techniques – something that rarely, if ever, happened in the past. This could be critical, as institutions are “quite heterogeneous” in their use of model-building tools, and the process “could be more industrialized,” Mehta noted.

“I think the regulators have a lot more motivation, interest and willingness to understand this a little bit better so they can think about this and be more involved with it,” Mehta said. “This is evolving quickly and they need to evolve with it.”
