
The capability of AI is evolving at a blistering pace. People are finding creative and low-cost ways to accomplish tasks that seemed far-fetched for any computer system even a year ago. There’s no question AI can now do some amazing things. At its core, “AI” is a set of data-driven techniques for solving problems predictively. It can automate specific tasks and help answer questions, including some tough ones. The bigger and more difficult questions are, “What can it do for you?” and “How will you implement artificial intelligence at your financial institution?”

 

As the CEO of a company that deploys AI to augment and support community financial institutions, I believe there are seven questions that financial institution leaders should ask before jumping into the swirl of AI tech.

 

1. What problems do you want to solve?

 

Many people start by saying, “Let’s find a way to deploy AI.” But you should really start by identifying a problem that is suitable for an AI-based solution. Look for repetitive, mundane, and expensive problems. Rate them based on what fraction of the problem you can solve for the relevant domain and how much value an automated solution would bring your organization.

 

If a problem has lots of data associated with it, or is highly repetitive, and would deliver lots of value, it should move to the top of the list. Customer support requests are a typical example. Ideally, your institution has stored them in a database, and you’ve measured how much time your team spends addressing them each month. This gives you a baseline measurement so that you can monitor performance and set realistic expectations. 
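The prioritization described above can be sketched as a simple weighted score. The candidate problems, scores, and weights below are illustrative assumptions, not a prescribed rubric:

```python
# Illustrative sketch: rank candidate problems for AI automation.
# Each problem is rated 1-5 on data availability, repetitiveness, and value;
# all problems, ratings, and weights here are hypothetical examples.

candidates = [
    {"name": "customer support requests", "data": 5, "repetitive": 5, "value": 4},
    {"name": "loan document review",      "data": 3, "repetitive": 4, "value": 5},
    {"name": "branch staffing forecasts", "data": 2, "repetitive": 2, "value": 3},
]

weights = {"data": 0.35, "repetitive": 0.35, "value": 0.30}

def score(problem):
    """Weighted average of data availability, repetitiveness, and value."""
    return sum(weights[k] * problem[k] for k in weights)

# Highest score first: problems with lots of data, high repetition, and
# high value move to the top of the list.
for p in sorted(candidates, key=score, reverse=True):
    print(f"{p['name']}: {score(p):.2f}")
```

The exact weights matter less than forcing an explicit, comparable rating for every candidate before committing to one.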

 

2. Who would benefit from the solution?

 

Solving a problem for an account holder looks very different from solving it for your organization or a partner organization. Financial institutions that have already implemented AI tools mostly use them behind the scenes. Detecting fraud and defending against cyber threats are great examples. Most account holders will never see those systems in action. 

 

Beware of solving one problem and creating another. From the financial institution's perspective, less call center traffic is a win, but for consumers who can’t find a solution to their problem, it’s a frustrating loss. Regardless of who benefits from the solution, consider the user experience for everyone it touches.

 

3. Is your data ready for processing by AI?

 

Financial institutions are custodians of vast troves of data. But it isn’t always tidy and stored in an accessible format. Limited API accessibility could be a deal-breaker for integrating AI tools into your workflow. You also need to consider the costs of data access (from your core provider and others).

 

Your data will never be perfect, so you need to think through how to handle any discrepancies in the data. Data hygiene is an ongoing practice, not a one-time task. Ask your IT team to audit your data ecosystem and estimate API coverage of the underlying data.
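One small sketch of the kind of audit described above: measuring how complete a set of records actually is before handing it to an AI tool. The records and field names are made-up examples, not a real schema:

```python
# Hypothetical sketch: estimate data completeness for a few account records.
# In practice this would run over an export from your core system.
records = [
    {"account_id": "A1", "email": "a@example.com", "opened": "2019-03-01"},
    {"account_id": "A2", "email": None,            "opened": "2021-07-14"},
    {"account_id": "A3", "email": "c@example.com", "opened": None},
]
required = ["account_id", "email", "opened"]

def completeness(rows, fields):
    """Fraction of (row, field) cells that are actually populated."""
    filled = sum(1 for r in rows for f in fields if r.get(f))
    return filled / (len(rows) * len(fields))

print(f"Data completeness: {completeness(records, required):.0%}")
```

A number like this, tracked per field and per system, gives your IT team a concrete way to report where the gaps are and whether hygiene is improving over time.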

 

4. How will you ensure the consumer-facing aspects feel beneficial to account holders?

 

A great way to answer this question is to establish a small-scope test for the AI. This allows you to integrate with the vendor, get your staff up to speed, and monitor the tool's progress. You can test the user experience for yourself and troubleshoot problems while they’re small. It also lets you assist consumers with the new process because you’ve been in their shoes. This is especially important if you’re using AI to assist with account holder communications and engagement.

 

5. How will you monitor the AI and check for any issues?

 

It’s nice to think that computers and machines don’t make mistakes, but they do and will. You need a process to monitor the AI and log activity. Periodically you need to review and validate that the tool is operating according to the parameters you set. 

 

In the same way you would pair a new employee with a more experienced one, the AI must be trained and reinforced to follow the correct protocols. AI is a dynamic, changing system and should be treated as such, both before and during deployment. This is critical from a liability standpoint as well. The FTC expects financial institutions using AI to operate with the same level of transparency and disclosure as always.

 

6. How will you ensure that regulators will approve of your implementation?

 

This question should stimulate you to examine and document your decision process. If you’ve followed the same level of due diligence and care as your other technologies and processes, then your chances are far better. The framework you’re already using for compliance is a good starting point for working with AI. Just keep in mind that the regulations around AI will be a moving target for some time. The best thing you can do is follow the principles and methods that both you and the regulators are already familiar with.

 

Activity logging and leaving an audit trail are two must-haves if you’re going to be able to prove to regulators that your AI toolset is operating in a compliant way. It also gives your team an avenue to flag and fix errors before any issues arise.
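The activity-logging idea above can be sketched as an append-only log where each AI decision is recorded with a timestamp and the context in effect. The file name, event types, and fields here are illustrative assumptions, not a vendor’s API:

```python
import json
from datetime import datetime, timezone

def log_ai_event(log_file, event_type, details):
    """Append one AI decision to an audit log as a timestamped JSON line.

    An append-only, timestamped record is what lets you reconstruct
    what the tool did, when, and under which parameters."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "event": event_type,
        "details": details,
    }
    with open(log_file, "a") as f:
        f.write(json.dumps(entry) + "\n")
    return entry

# Hypothetical usage: record that an assistant handled a balance inquiry.
log_ai_event("ai_audit.jsonl", "chat_response",
             {"intent": "balance_inquiry", "model_version": "v1.2",
              "escalated_to_human": False})
```

Because each line is self-contained JSON, the log can be reviewed, filtered, and handed to examiners without any special tooling.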

 

7. How will you measure success?

 

Keep in mind that marketing hype sometimes leads to unrealistic expectations of what is possible, so it’s important to calibrate your expectations and communicate them to intended users. Ultimately, the focus should be on the utility AI provides, not on the exact approach or implementation.

 

The technical particulars of how an algorithm differs from machine learning, and machine learning from AI, aren’t something most people can be expected to care deeply about. Yet many companies feel pressure to include AI in their products or services to appear relevant to their customers, so the reality may differ from the perception. Of course, customers don’t care whether their problem is solved by an algorithm or an AI, as long as it’s genuinely solved.

 

Work with your AI vendor to establish clear, realistic metrics for success and then stay accountable to them. Ask for case studies from similar institutions and use those examples to calibrate your own goals. Make sure that you’re choosing metrics that you’ve already been tracking for a while. Without that historical data, it will be challenging to quantify any improvement. If there’s a metric you want to track, but don’t have historical data for, consult with your AI vendor to begin tracking it and revisit once you’ve collected a meaningful sample. 
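The baseline comparison described above can be sketched in a few lines. The metric and monthly figures are made-up examples, assuming you already track historical values to compare against:

```python
# Hypothetical monthly values for a metric you already track, e.g.
# support-ticket handling hours before and after an AI pilot.
baseline_hours = [410, 395, 402, 415, 398, 407]   # pre-deployment months
pilot_hours    = [350, 338, 341]                  # months since the pilot began

def mean(xs):
    return sum(xs) / len(xs)

def percent_change(before, after):
    """Relative change of the pilot-period average vs. the historical baseline."""
    b, a = mean(before), mean(after)
    return (a - b) / b * 100

change = percent_change(baseline_hours, pilot_hours)
print(f"Handling time changed by {change:.1f}% vs. baseline")
```

Without the pre-deployment months, there is no denominator: the same pilot numbers on their own could not tell you whether anything improved.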

 

Augmented intelligence beats artificial intelligence every day.

It is tempting to get sucked into the hype around AI in banking. You should evaluate AI vendors with the same scrutiny you apply to every vendor or partner. While technology is advancing rapidly, perfection is still elusive. I say that as someone who has been actively developing foundational AI technology for nearly three decades. 

 

My vision, personally and professionally, is not for AI to replace humans, but to enable people to use AI as a tool to augment themselves, letting them keep doing the incredible things they do well while making their tasks easier and more efficient. In my mind, it will be a very long time before AI could ever fully replace humans.
