Navigating AI skepticism: A path forward for capital markets 

Ryan Siegler, KX Data Scientist
5 December 2024 | 8 minutes

With AI, there’s a big difference between ordering a taco and generating a trading strategy.  

While Generative AI (GenAI) might already be suggesting your next fast-food order, the risks are far more significant in high-stakes, heavily regulated industries like capital markets—where millions of dollars and reputations are on the line. Unsurprisingly, AI adoption here comes with considerable skepticism and scrutiny.  

McKinsey research reflects this hesitation: while 65% of businesses adopted GenAI in 2024, only 8% of financial services leaders reported using it regularly—a number unchanged from the year before. Concerns like inaccuracy (63%), compliance risks (45%), and lack of explainability (40%) are holding organizations back.  

In this blog, I’ll explore the root causes of AI skepticism and examine how organizations can overcome these barriers. By addressing the most common concerns, I’ll show how AI can deliver value safely and compliantly – even in the most complex environments.

Four causes of AI skepticism  

1. Reliability and accuracy  

AI offers unmatched convenience, delivering information quickly and efficiently. But that information isn’t always accurate. Hallucinations can be a major problem: a large language model (LLM) presents information with unwavering confidence, but it may simply make things up when it doesn’t know the answer.

If you’re not an expert in the subject matter, it’s hard to know what’s true and what’s false, which makes using AI a risk. Even if you are an expert, cleaning up output riddled with errors adds to your workload. No wonder there’s hesitancy from people and businesses to adopt AI, especially in spaces like finance and healthcare, where an extremely high degree of accuracy is vital when making decisions.

2. Privacy  

Privacy is an ongoing concern with tech in general, so it should be no surprise that it increases skepticism in AI. And I’d argue that some types of data, such as personally identifiable information (PII), shouldn’t necessarily be presented to certain models.  

When you have an AI model hosted on the cloud, the risk of data leakage is introduced. And you must consider the provider because you’ll rely on it to keep your data private. We’ve seen how cavalier certain companies are with information their AIs gulp down, at the very least using it for training purposes. 
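To make that concern concrete, here’s a minimal sketch of one mitigation: masking obvious PII before any text is sent to a hosted model. The regex patterns, example text, and placeholder tokens below are illustrative assumptions only, not a complete redaction solution.

```python
import re

# Illustrative patterns only -- real PII detection needs far broader coverage
# (names, addresses, and the account formats specific to your firm).
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "ACCOUNT": re.compile(r"\b\d{10,16}\b"),
}

def mask_pii(text: str) -> str:
    """Replace recognizable PII with placeholder tokens before text leaves your environment."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "Summarize the complaint from jane.doe@example.com about account 4111222233334444."
print(mask_pii(prompt))
# Summarize the complaint from [EMAIL] about account [ACCOUNT].
```

It’s a small step, but it illustrates the principle: decide what a hosted model is allowed to see before it sees anything.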

3. Explainability  

There are problems with explainability when it comes to today’s AI models. The inner workings of many models remain opaque, often leaving users unsure of the reasoning behind outputs. You might want to understand how and why decisions are being made or how a model produced a particular answer. But most models are akin to a closed box.  

In certain industries, that’s not going to work. With capital markets, regulations demand accountability. Companies need to be able to say how they came to a trading decision. Everything has to be auditable. Without that, we see a cautious approach to AI – or even outright rejection – from those markets.  

4. Bias  

When you use AI, you must be sure the models you work with aren’t biased. However, models reflect the biases of their creators and training data sets, skewing data and heavily impacting output. When that leads to unfair outcomes in something your business has deployed, there’s a chance your reputation will take a hit.  

Of course, bias won’t always be a consideration, as illustrated by my ordering a taco versus defining a trading strategy. If I’m at a fast-food drive-through and an AI takes my order, I’m not thinking about bias at all. But if I’m building an application that influences trading decisions, I need to be absolutely confident in the fairness and integrity of the model.  

Those are the four main drivers of skepticism. Now let’s switch gears and look at ways to address these concerns, build trust, and bring people around on AI.

Overcoming AI skepticism: How to address concerns about AI  

1. Education   

With almost any aspect of business, the more you understand, the better – doubly so if you’re apprehensive, and that’s especially true of AI. Before forming conclusions, take the time to understand the underlying mechanisms of AI models. Dig into how LLMs and GenAI work. Explore real-world benefits and positive – but hype-free – stories that apply to your industry.

The more you know about something, the less scary it becomes. And even if AI isn’t a perfect fit for your core business, you might discover it can benefit other departments and projects in ways you’d not initially thought of.   

2. Keep humans in the loop  

AI is a powerful tool, but it’s not a magic bullet. You need to keep people in the loop – in every way. Inform employees about AI deployments, along with how and why you made them. Educate them that AI is about scaling and efficiency, not replacing people.  

As I’ve said before in my blog ‘Harnessing multi-agent AI frameworks to bridge structured and unstructured data,’ AI can be considered an army of interns, helping you gather information, analyze data, and make recommendations. But the final call will be down to people. You are not ceding control of vital services and critical operations to AI. Instead, aim to use AI strategically to enhance your team’s capabilities and make better decisions faster.
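As a sketch of what keeping the final call with people can look like in practice, here’s a minimal human approval gate: the model proposes, and a named reviewer approves or rejects before anything is executed. The class and function names are hypothetical, not taken from any particular framework.

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    action: str          # e.g. "flag this transcript for manual review"
    rationale: str       # the model's explanation, kept for the audit trail
    confidence: float    # model-reported confidence, 0.0 to 1.0

def review_and_execute(rec: Recommendation, reviewer: str) -> bool:
    """AI proposes; a human approves or rejects before anything runs."""
    print(f"Proposed action: {rec.action}")
    print(f"Rationale: {rec.rationale}")
    print(f"Model confidence: {rec.confidence:.0%}")
    decision = input(f"{reviewer}, approve this action? [y/N] ").strip().lower()
    # In a real system you would also log rec, reviewer, and the decision for auditability.
    return decision == "y"

rec = Recommendation(
    action="Flag the Q3 earnings call transcript for manual review",
    rationale="Sentiment shifted sharply negative versus the prior quarter",
    confidence=0.72,
)
if review_and_execute(rec, reviewer="analyst_on_duty"):
    print("Approved: routing to the research workflow.")
else:
    print("Rejected: no action taken.")
```

The pattern scales: the trail of proposals, rationales, and human decisions is also what makes the system auditable later.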

3. Work with the right teams  

In financial services and other industries, companies tend to bring in new tech through IT departments, which are naturally skeptical about such things. And if the tech involves data, legal and procurement will also likely get involved. Negative feedback can prevent you from taking things further, but there are ways to mitigate this.

Work directly with – or at least talk to – the teams that will actually use the AI tool. They can be your champions, enthusing about the benefits, showing the rest of the organization what AI can do, and potentially helping you navigate challenges. And when looking to deploy, de-risk by bringing risk management teams in early: it makes outcomes more effective, helps you raise and resolve regulatory and reputational questions, and builds client trust.

4. Start small – but think big  

Finally, you don’t have to jump in at the deep end with AI. Start by testing the waters in secure proof-of-concept environments. You’ll learn and prepare to implement AI later, when there’s more acceptance or when you’re ready – all without taking on risk, making production decisions, or touching personal data.

With such projects in action, you might start winning people over, gaining trust and acceptance within your organization. More importantly, you’ll have a starting point on which to build. That’s important, because when things improve in AI, they tend to improve quickly.

Where to start

A great place to begin AI adoption is quantitative research. Quantitative research is inherently data-driven, requiring the analysis of large-scale structured and unstructured data. This is a natural fit for AI tools like KDB.AI, which excels at processing vast amounts of information and uncovering insights that might be missed by traditional methods.

Start by exploring areas where AI can complement existing quantitative techniques: for example, finding and extracting structured information from financial reports and news articles, identifying patterns in time-series data, and improving the accuracy of risk assessments. Using AI for these tasks allows researchers to focus on higher-level decision-making, refining hypotheses, and crafting innovative trading strategies.
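As one hedged illustration of the time-series idea, the sketch below uses plain pandas on synthetic data (not any KDB.AI-specific API) to flag days where traded volume spikes well above its recent average, leaving the investigation to a human analyst. The window size, threshold, and data are assumptions for illustration.

```python
import numpy as np
import pandas as pd

# Synthetic daily volume series standing in for real market data.
rng = np.random.default_rng(0)
dates = pd.date_range("2024-01-01", periods=250, freq="B")
volume = pd.Series(rng.lognormal(mean=13, sigma=0.2, size=len(dates)), index=dates)
volume.iloc[[60, 180]] *= 3  # inject two spikes for the sketch to find

# Flag days where volume exceeds the 20-day rolling mean by 3 rolling standard deviations.
rolling_mean = volume.rolling(20).mean()
rolling_std = volume.rolling(20).std()
spikes = volume[volume > rolling_mean + 3 * rolling_std]

print(spikes)  # candidate days for an analyst to review -- the human still makes the call
```

The specific statistic matters less than the division of labor: the analytic layer surfaces candidates quickly, and researchers decide what they mean.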

Moreover, quantitative research is an ideal proving ground for AI because it operates in a controlled environment with experienced humans in the loop. By starting here, firms can build confidence in the technology while addressing key concerns around accuracy, explainability, and compliance. Lessons learned in quantitative research can then be extended to other areas of the business, paving the way for broader AI adoption.

Remember, the goal isn’t to replace analysts and researchers but to augment and enhance their capabilities. With AI as a tool, teams can uncover deeper insights, innovate faster, and gain a competitive edge.

So don’t let AI skepticism hold you back. Be curious, explore the opportunities, and when the time comes you’ll be in a far better place than organizations that were slower to innovate.

For more information on what it takes to build a successful AI program, read our AI factory 101 series. Discover why KDB.AI is a crucial part of the AI factory. Learn more at the KDB.AI learning hub and check out our session on using agents to maximize your LLMs.   

Start your journey to becoming an AI-first Enterprise with a personal demo.
