
The Turing Test, for all its strengths and weaknesses, is fundamentally a test of deception: can a computer fool a human into thinking it’s also a human by answering questions? I see companies that are keen to implement chatbots that could pass this test — fooling customers into believing they’re engaging with a warm human, not a cold line of code. The artificial intelligence community has wildly differing opinions on the utility of the Turing Test. I’m not going to pick a side in this article. However, the Turing Test is useful when thinking about the philosophy of artificial intelligence (AI) and how we implement it in consumer-facing applications.


Of course, if it’s cheaper to run a chatbot than it is to pay a customer service representative, that’s a win for the bottom line. The problem with this approach is that it can come across as disrespectful to consumers. It’s true that ChatGPT is fooling people with its essay writing and command of certain topics. But even in a best-case scenario, fooling people can erode the most precious commodity a financial institution, or any business, can command: trust.


I believe that focusing on trust can help financial institutions think about and implement AI in a clearer, more strategic fashion. 

What is generative AI, and why are we building it?

For the purpose of this article, I’m going to focus on a sub-category of artificial intelligence known as generative AI. Generative AI is a computer system capable of taking input, such as a text-based prompt, processing it through a model trained on a large dataset, and creating something new based on that prompt. ChatGPT is a generative AI model that produces written text from a user’s text prompt. In contrast, the AI model that Tesla uses to power the various “driver assistance” or “self-driving” modes in its cars uses deep learning to digest vast troves of image and sensor data and decide the next best action for the vehicle to take.


Machine learning has been around for a long time. Deep learning is a sub-discipline of machine learning that has gained popularity because it delivers significantly better results on tasks such as image and speech recognition. In fact, most people have benefited from a machine learning or deep learning model without knowing it.


While science fiction promotes the notion of an AI system more powerful and smart than any single human, the reality is more complex.


Generative AI tools and large language models such as ChatGPT are consumer-focused systems that give compelling and worthwhile answers to general-purpose prompts. When considering applications for business use cases, such as financial technology, it is important to weigh the perils of the underlying technology against its promises.


The perils of generative AI

I’m starting with the perils because they’re critical if we’re going to fully understand the potential benefits of generative AI. It’s amusing to speculate on the utopian and dystopian futures that AI might create, but those scenarios aren’t helpful when determining how your financial institution should think about and use AI in its daily consumer-facing roles.


There are AI tools that you could implement today. Let’s take a look at the risks associated with them.

1.  Confusion

Attempts to create human-to-AI interfaces, such as chatbots, often leave users feeling confused. Will the chatbot understand their questions? Even if the chatbot helps solve the user’s request, it may still result in an “uncanny” interaction that appears human but doesn’t feel convincing. Users also suspect that they’re being boxed in, prevented from speaking to a human. Interactions like these leave negative impressions and, if the experience is bad enough, can even lead to attrition.


Here’s an example: An account holder visits your website and engages with a chatbot to ask about having an overdraft fee refunded. The chatbot doesn’t understand the question and asks the user to repeat it a different way or responds by linking to the overdraft protection policy. The user ends up feeling like they need to guess the right phrasing, or they just type “speak to a real person” until the chatbot relents. 


In this scenario, the account holder walks away feeling foolish or frustrated. Effective artificial intelligence shouldn’t make people feel uncomfortable.

2.  Loss of trust

Confusion isn’t a critical problem by itself. However, confusion leads to a loss of trust when the user questions the nature of their relationship with your institution. The problem compounds when the chatbot delivers its responses with unwarranted confidence. Humans make mistakes, but we have the ability to measure and project varying levels of confidence. Current generative AI models, such as ChatGPT, lack this nuance. They can generate incorrect answers paired with apparent certainty. This phenomenon is commonly referred to as a “hallucination.”


I’m not saying that using an AI chatbot at your institution will automatically erode trust with your account holders. I am saying that unless you account for such risks in your implementation of AI tools, you may see attrition rise, even as you see a reduction in customer service requests requiring a human-generated response. 


Let’s say an account holder wants to decide if they should refinance their mortgage. This is a hugely complex question that isn’t as simple as getting a better interest rate. It has long-term consequences that depend heavily on what happens with interest rates in the future, the value of the home, and how much is still owed. There are lots of other factors that a mortgage lender would be able to evaluate by asking follow-up questions. The chatbot is unlikely to have the latest context necessary to make an informed, personalized recommendation.

3.  Liability

Who should be held liable when your AI chatbot hallucinates an answer and misleads an account holder? Is it the financial institution, the model provider, or somebody else? Financial institutions already have protective frameworks in place to handle such risks for human employees. We are only beginning to explore questions of liability when it comes to AI models. I’m confident that we’ll establish the necessary legal frameworks, but how they will look when the dust settles is beyond my powers of speculation.


What your financial institution should do is reduce its exposure to liability through careful implementation, small-scale testing, comprehensive activity logging, third-party verification, and human-in-the-middle verification steps.
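
To make the activity-logging and human-in-the-middle ideas concrete, here is a minimal sketch in Python. Everything in it is illustrative: the function names (generate_draft_reply, queue_for_agent_review), the confidence threshold, and the log format are assumptions made for the example, not references to any specific vendor’s product.

```python
import json
import logging
from datetime import datetime, timezone

# Illustrative sketch only: generate_draft_reply() and queue_for_agent_review()
# stand in for your model call and your agent-handoff workflow.

logging.basicConfig(filename="chatbot_audit.log", level=logging.INFO)

CONFIDENCE_THRESHOLD = 0.80  # assumption: the model exposes some confidence signal


def generate_draft_reply(question: str) -> tuple[str, float]:
    # Placeholder: call your generative model here and return (reply, confidence).
    return "Here is a draft answer about overdraft fees.", 0.55


def queue_for_agent_review(question: str, draft: str, account_id: str) -> str:
    # Placeholder: route the draft to a human representative before anything is sent.
    return "A representative will review your question and follow up shortly."


def handle_question(question: str, account_id: str) -> str:
    draft, confidence = generate_draft_reply(question)

    # Comprehensive activity logging: record every prompt, draft, and routing decision.
    logging.info(json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "account_id": account_id,
        "question": question,
        "draft_reply": draft,
        "confidence": confidence,
        "routed_to_human": confidence < CONFIDENCE_THRESHOLD,
    }))

    # Human-in-the-middle step: uncertain answers go to a person, not the customer.
    if confidence < CONFIDENCE_THRESHOLD:
        return queue_for_agent_review(question, draft, account_id)
    return draft
```

The specific threshold doesn’t matter; the point is that every AI-generated answer leaves an audit trail, and that uncertain answers reach a human before they reach your account holder.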


For this example, I’ll pivot from the chatbot-human interaction: Let’s say you decide to use an AI model to analyze your account holders and identify people who may qualify for a new line of credit. According to guidance the FTC released in 2020, your institution must still abide by the Fair Credit Reporting Act (FCRA) and the Equal Credit Opportunity Act (ECOA), and you will need to implement steps to verify that it does. You also need to be transparent when collecting and using sensitive data, such as the personally identifiable information (PII) your institution already handles daily.

The promises of generative AI

While the perils above are real, hopefully they don’t scare you away from the promise of what AI can do to grow your business and support your staff. Many institutions are already using AI models to provide personalized banking services, assess risk, mitigate fraud, flag spam, and defend against hackers. These applications strengthen the integrity of your institution and shore up trust with your account holders.


When you think of AI as a replacement for humans, you’re putting unreasonable pressure on it. Instead, try thinking of it as an augmentation for humans, an efficiency tool. Tools like a hammer or an electric drill allow humans to accomplish tasks faster, more easily, and with fewer errors; used effectively, AI can be your most powerful tool.

1.  AI can decrease confusion

You can implement an AI model that helps account holders and your staff find the right answers faster and with less managerial intervention. It can also help bridge the knowledge gap that every new employee experiences by tagging conversations and presenting documents or marketing material that might be helpful in the moment. This lowers the need for account holders to explain their situation over and over when they reach out for help. It also delivers a quintessential level of intimate service: knowing your banker by name.
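
To illustrate the “surface the right document” idea, here is a minimal retrieval sketch using plain TF-IDF similarity from scikit-learn. The document names and snippets are invented for this example; a production system would index your institution’s actual content, likely with a more capable retrieval model.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Invented stand-ins for your institution's policy documents and marketing material.
documents = {
    "overdraft_policy.pdf": "How overdraft protection works and when fees may be refunded.",
    "mortgage_refi_guide.pdf": "Factors to weigh before refinancing a mortgage.",
    "new_credit_line.pdf": "Eligibility and terms for opening a new personal line of credit.",
}

vectorizer = TfidfVectorizer(stop_words="english")
doc_matrix = vectorizer.fit_transform(documents.values())


def suggest_documents(conversation: str, top_k: int = 2) -> list[str]:
    """Rank the documents most relevant to the live conversation."""
    scores = cosine_similarity(vectorizer.transform([conversation]), doc_matrix)[0]
    ranked = sorted(zip(documents, scores), key=lambda pair: pair[1], reverse=True)
    return [name for name, score in ranked[:top_k] if score > 0]


# A new employee handling an overdraft question gets the relevant policy surfaced.
print(suggest_documents("The customer wants an overdraft fee refunded"))
```

A ranked list like this, shown alongside the live conversation, is one way to close the knowledge gap described above.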

2.  AI models can build trust internally and externally

Trust is a function of expectation. Don’t attempt to fool your account holders. A virtual assistant can help them find the answer or connect them with a human as quickly as possible. An AI assistant can augment the capabilities of your team, making them more effective and knowledgeable at their tasks. When a customer support representative responds to a question with “I don’t know the answer to that, but I’ll find out who does” or is able to provide a thorough answer, it’s a huge trust builder. Your team doesn’t need to know every answer, just how to find the right answers quickly; their confidence will grow as they become more familiar with the AI assistant and its capabilities.


3.  AI can help manage your liability

Compliance in a world of artificial intelligence could seem like a nightmare – there are so many unknowns. On the other hand, there are AI tools that can actually reinforce best practices and ease your reporting burden.


One of the ways our clients use Lynq is to ensure that conversations with account holders always happen on a secure, proprietary channel, even if the banker uses their personal device to communicate. Lynq also allows your team to retain conversation continuity even if an employee leaves the company – all the communication can be easily transferred to a new team member. Everything is logged and easily auditable.

Putting AI tools to work at your institution

As I mentioned at the beginning of this article, there are many types of AI models and many software platforms that use those models in novel ways. The most important question you can ask yourself is, “Where can AI tools help increase team productivity while delivering a top-notch experience to our account holders?” The answer to that question will differ for every institution.


Artificial intelligence is not a miracle cure for the challenges your institution faces. However, by consulting with experts in the space, you will uncover opportunities to use AI in ways that boost productivity, increase trust, decrease confusion, and lighten your compliance burden.


If you’d like to learn more about what AI can do to help your institution and the ways you should avoid using it, check out this article on the seven questions to ask your team before implementing AI.
