Understanding ChatGPT and How It May Impact the Finance Industry

Harriet Christie | Payments Journal

April 21, 2023

As digitalization continues to permeate everyday life, data archiving has become increasingly vital for a variety of reasons. With the emergence of ChatGPT, an artificial intelligence-powered chatbot, the landscape has again shifted dramatically. But what are the implications of this breakthrough, and how will it impact digital archiving?

What is ChatGPT?

ChatGPT is a large language model that gives detailed responses to questions and statements, to a hitherto unseen level of sophistication. Early adopters have marveled at the program’s capabilities, from drafting detailed essays in a matter of moments, to conjuring poetry with unfaltering rhyme schemes, and even writing functional code.

ChatGPT is owned and developed by OpenAI, an AI research and deployment company. The organization is based in San Francisco and was founded in 2015 by a who’s who of tech titans, including Elon Musk and LinkedIn co-founder Reid Hoffman. Its stated mission was to ensure that artificial general intelligence (AGI) benefits all of humanity, and to advance it safely.

Back in 2015, OpenAI President Greg Brockman met with Yoshua Bengio, one of the “founding fathers” of deep learning. Together they drew up a list of the ten researchers they considered the best in the field, and Brockman ultimately hired nine of them as OpenAI’s first employees in December 2015. Fast forward to 2023, and OpenAI employed some 375 people at the last count.

Is it Convincing?

It’s likely that you’ve tried it out: debating controversial topics, querying the intangibles, ‘testing’ whether or not it can complete a work task for you. One thing becomes clear pretty quickly: infinity is daunting. What should you ask when you can ask anything?

Whatever you do ask, it’s likely that the response will be well informed, logically argued, and promptly delivered. Unreasonable requests for personal advice may be met with a disclaimer, “As an AI language model, I cannot make decisions for you, but I can provide some general reasons why…” Even when you set it up to fail, it provides a calm, clear-headed retort that leaves you feeling decidedly less smug, and in fact rather silly.

What Are the ChatGPT Limitations?

Despite its convincing rhetoric, ChatGPT is, at times, deeply flawed.

Quite simply, its statements can’t always be trusted. That is a fairly devastating indictment of a tool that invites such intense scrutiny, and one OpenAI itself acknowledges: “ChatGPT sometimes writes plausible-sounding but incorrect or nonsensical answers.”

ChatGPT has a vast wealth of knowledge because it was trained on all manner of web content, from books and academic articles to blog posts and Wikipedia entries. Alas, the internet is not a domain renowned for its factual integrity.

Furthermore, ChatGPT doesn’t actually connect to the internet to track down the information it needs to respond. Instead, it simply repeats patterns it has seen in its training data. In other words, ChatGPT arrives at an answer by making a series of guesses, which is part of the reason it can argue wrong answers as if they were completely true, and give different (incorrect) answers to the same questions.
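The “series of guesses” described above can be caricatured with a toy next-word predictor. A real large language model uses a neural network with billions of parameters rather than simple word counts, but the core idea — predicting the most likely continuation from patterns observed in training text — is the same. The corpus and function names below are invented purely for illustration:

```python
from collections import Counter, defaultdict

# Toy corpus standing in for web-scale training data (illustrative only)
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word (bigram statistics)
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the statistically most likely next word, or None if unseen."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" — it follows "the" most often here
```

Note that the predictor never checks whether “the cat sat on the mat” is *true*; it only reproduces the most frequent pattern. That, at a vastly larger scale, is why a model can state a wrong answer with total fluency.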

Another major challenge is the potential for the model to generate biased or harmful responses, having learned these biases from its training data. ChatGPT can only ever be as well-balanced as its source material, and with a diverse cocktail of prejudices feeding into the web content that has shaped it, a neutral ‘personality’ seems unlikely.

Is ChatGPT Compliant?

As in many less regulated industries, ChatGPT could help streamline processes across financial services, from customer service to fraud detection, and even the compliance function itself. When large sums of money are involved, however, its propensity for misinformation carries major implications.

Following its well-documented issues with another third-party application, WhatsApp, JPMorgan Chase has unsurprisingly moved quickly to ban its employees from using ChatGPT amid privacy concerns. JPMorgan staff were asked not to enter sensitive information into the chatbot, and to “tread carefully” around the technology. After all, ChatGPT makes it clear when you use the program (and in its FAQ) that the information it ingests helps to train the bot.

Regulatory bodies like the SEC will be monitoring the situation closely, and will need a position on the use of ChatGPT within firms so that early adopters know the parameters. With recordkeeping requirements under the microscope, regulated firms are understandably risk-averse and looking to the regulator for direction.

As Matt Levine explains in his Bloomberg Money Stuff column, “If you want to get advice from a robot about how to invest—or if you want the robot to help you write a presentation for clients—then you had better communicate with the robot using official channels! Typing in the ChatGPT box isn’t an official channel, so it’s not allowed.”

Moment of Truth

As ChatGPT’s limitations are now well established, it would be reasonable to wonder whether it can effectively serve any purpose at all. After all, when conducting research, the only thing less useful than a blatant lie is perhaps a convincing one.

While ChatGPT isn’t a credible source, that doesn’t render it worthless. Take marketing: an industry centered on the regular creation of informative, assertive content. When deadlines are tight and brainpower is low, asking the chatbot’s thoughts on a particular topic could provide the elusive spark that kickstarts the creative process.

The chatbot is better suited to providing inspiration than education, and while some fact-checking may be needed, that’s certainly more efficient, and less daunting for many, than the ominous blank page.
