AI’s transformative potential is reshaping finance, yet its pitfalls, like AI hallucinations, pose significant risks. This blog delves into how data leaders at capital markets firms can harness AI’s power while ensuring accuracy and reliability.
“Doubt is the origin of wisdom,” said René Descartes, one of the founding fathers of modern philosophy. This quote may be approaching 400 years old, but as more of us turn to generative AI (GenAI) to boost productivity and quickly find answers, we would be ‘wise’ to still maintain a healthy sense of ‘doubt’ regarding the technology’s output.
AI hallucinations—when AI produces inaccurate or misleading responses not grounded in data—are a recurring issue in generative AI. While traditional machine learning or deep learning systems can also produce errors, their inaccuracies tend to be more controlled. These systems are typically overseen by expert specialists who are trained to identify and correct mistakes.
Research from AI startup Vectara suggests that current GenAI models hallucinate anywhere from 3% to 27% of the time. These hallucinations can often sound dangerously plausible, sometimes blending true and false information.
Nonetheless, AI holds a powerful allure in today’s capital markets. High-performance data analytics platforms increasingly leverage AI and machine learning to help quants and traders interrogate petabytes of streaming and historical data for hard-to-see patterns and other actionable insights. In a recent survey by Mercer, 91% of investment managers said they’re using or plan to use AI in their investment processes.
The financial sector is also increasingly harnessing GenAI capabilities. In 2023, KPMG found that 40% of senior finance professionals in large companies viewed GenAI as a priority. Indeed, AI start-up Bridgewise was recently granted regulatory approval to provide chatbot-powered investment advice to customers of Israel Discount Bank. Such use cases will only accelerate, given estimates that GenAI could offer the banking sector $200-340 billion in annual value.
The question is, how can you leverage the enormous value of AI at speed and scale in capital markets when wrong answers are no laughing matter?
Read on as we explore how to seize the AI opportunity while mitigating the risk of hallucinations.
Imaginary answers, real costs
The causes of AI hallucinations are many and varied, from flawed data, poor prompts, and model design issues to ‘overfitting’—when a model treats random noise in its training data as meaningful signal, which undermines its performance on new information.
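To make overfitting concrete, here is a minimal sketch (using scikit-learn and purely synthetic data, for illustration only) of how a held-out validation set exposes a model that has memorised noise rather than learned signal:

```python
# Minimal illustration of overfitting: a model that memorises noise in the
# training data looks great in-sample but degrades badly on unseen data.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))                          # 5 synthetic "features"
y = 0.5 * X[:, 0] + rng.normal(scale=1.0, size=500)    # weak signal, heavy noise

X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

# An unconstrained tree fits the training noise almost perfectly...
model = DecisionTreeRegressor().fit(X_train, y_train)
print("train R^2:", round(r2_score(y_train, model.predict(X_train)), 2))  # ~1.0
print("val   R^2:", round(r2_score(y_val, model.predict(X_val)), 2))      # far lower

# ...and a large train/validation gap like this is the classic overfitting signal.
```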
According to McKinsey & Co, 63% of leaders see inaccuracy as the biggest risk of GenAI, ahead of other major concerns like privacy, cybersecurity, and intellectual property infringement. Some 23% even said that inaccuracies had already created problems for their business.
With AI adoption happening at breakneck speed and its capabilities being embedded in varied business functions, hallucinations can pose a wide range of risks. Here are a few examples:
Misinformed decisions
Advances in GenAI are enabling ever-faster data analytics, as well as the ability to ingest and process bigger and more varied alternative datasets—like video, audio and documents—for richer insights.
While making faster and more accurate decisions based on uncovering hidden patterns in an enormous volume of data offers big advantages, false or misleading insights can lead to flawed strategies.
Imagine your AI-driven risk management system misinterprets market data, leading to misjudgments on interest rate forecasts or credit assessments. Financial institutions using AI for trading decisions must be especially vigilant, as hallucinations can directly affect trade execution and compliance with investment mandates.
There is a competitive dimension too: while firms acting on hallucinated insights pursue flawed ideas, rivals with more robust AI safeguards will pull further ahead in today’s capital markets.
Regulatory fines
With AI development and adoption moving so fast, hallucinations can cause costly regulatory breaches—especially within heavily scrutinised industries like finance. Under regulations like the EU’s MiFID II or AI Act, a hallucination that causes inaccurate real-time reporting, insider trading, or a misstep in algorithmic execution could come at a high price.
Lost trust
As leaders, employees, customers, and other stakeholders come to rely more and more on AI capabilities, a significant hallucination can also severely damage trust in the technology. Internally, this eroded trust might delay or halt your business’s AI journey, while any external impact could also dent your revenue, brand perception, or customer loyalty.
Imagine using a large language model (LLM) to summarize 100 corporate reports, only to later find it hallucinated crucial conclusions on long-term profitability. Similarly, hallucinations that lead to chatbots offering erroneous market advice can shatter investor confidence and cause irreparable damage to a firm’s reputation.
Compromised data integrity
Data quality is the number one issue that keeps AI researchers up at night. Feed AI the wrong data and it will give you the wrong predictions. This can be a particular challenge when training an AI to assess the risk of rare events like a once-in-a-generation market crash. These events just don’t occur often enough to provide a sufficient pool of training data.
You might turn to synthetic data instead, but as The New York Times recently discussed, systems can also face ‘model collapse’ when they repeatedly ingest AI-generated content. If unnoticed hallucinations create inaccurate results that are then fed back into an AI, this flawed data can at first weaken and then destroy its capabilities. Imagine hallucinations that distort financial data models or forecasting tools, leading to inaccurate assessments of market trends or future returns.
Nor is this only an alpha-generation problem: the AI systems themselves cost an enormous amount to deploy, train, and operate, so corrupting them with flawed data puts that investment at risk.
Five ways to combat hallucinations
With AI hallucinations seeing regular media coverage, you need to reassure your colleagues that the dangers can be mitigated, or risk failing to leverage the huge competitive advantage these technologies offer.
McKinsey & Co reports that 38% of leaders are already actively working to offset the risk of AI inaccuracy. While you can’t prevent hallucinations, you can prevent them from causing business issues with a system of safeguards.
Here are several ways to help your business confidently capitalize on AI.
Clean, complete, and consistent training data
Train your AI models on high-quality, relevant, and varied datasets. Low-quality inputs create low-quality outputs, so validate, normalize, and ensure the integrity of your data. For instance, remove errors and biases, and use a well-designed vector database so that information is structured in a way that accurately captures context and relationships. Consistently updating your AI model with new data can also reduce inaccuracies.
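As a simplified illustration, a data-quality gate like the pandas sketch below (the column names are hypothetical) can catch duplicates, gaps, and out-of-range values before they ever reach a training or fine-tuning pipeline:

```python
# Minimal data-quality gate for a (hypothetical) trades dataset before it is
# used to train or fine-tune a model. Column names are illustrative only.
import pandas as pd

def validate_training_data(df: pd.DataFrame) -> pd.DataFrame:
    df = df.drop_duplicates()

    # Reject rows with missing critical fields rather than silently imputing them.
    df = df.dropna(subset=["timestamp", "symbol", "price", "volume"])

    # Basic range checks: prices and volumes must be positive.
    df = df[(df["price"] > 0) & (df["volume"] > 0)]

    # Normalize types and formats so downstream features stay consistent.
    df["timestamp"] = pd.to_datetime(df["timestamp"], utc=True)
    df["symbol"] = df["symbol"].str.upper().str.strip()

    return df.sort_values("timestamp").reset_index(drop=True)
```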
Guardrail technologies
Embedding your use of AI in a larger system of software tools can also mitigate the risk of hallucinations by enabling more transparency around decision-making or by verifying that responses are accurate and consistent.
For instance, retrieval-augmented generation lets GenAI cross-reference outputs with trusted knowledge hubs, while custom APIs can also enable connections to approved sources of content.
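In practice, a retrieval-augmented generation flow looks roughly like the sketch below. The `embed`, `vector_store`, and `llm` objects are placeholders for whatever embedding model, vector database, and LLM client your stack uses, not a specific product’s API:

```python
# Sketch of retrieval-augmented generation: the model answers only from
# retrieved, trusted documents, and says so when they don't cover the question.
# `embed`, `vector_store`, and `llm` are placeholders for your own components.

def answer_with_rag(question: str, embed, vector_store, llm, k: int = 5) -> str:
    # 1. Retrieve the k most relevant passages from an approved knowledge base.
    query_vector = embed(question)
    passages = vector_store.search(query_vector, top_k=k)

    # 2. Ground the prompt in those passages and forbid answers from outside them.
    context = "\n\n".join(p.text for p in passages)
    prompt = (
        "Answer the question using ONLY the context below. "
        "If the context does not contain the answer, reply "
        "'Not found in approved sources.'\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )

    # 3. Return the grounded response (ideally alongside the sources, for audit).
    return llm.generate(prompt)
```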
Clear, specific, and detailed prompts
Prompts that make clear what is wanted and unwanted, suggest relevant sources of information, or put limits on responses can all help minimize the risk of hallucinations. The more a GenAI model is left to fill in the blanks itself, the more likely you are to get misleading or false outputs.
Data templates that add a clear structure to both prompts and responses can be helpful in this regard. Also, use the right tool for the job — don’t expect an LLM to do complex math, for instance.
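One simple way to apply this is a reusable prompt template that spells out the task, the allowed sources, the limits, and the required output structure. The sketch below is illustrative and model-agnostic:

```python
# Illustrative prompt template: an explicit task, constraints, and output schema
# leave the model far less room to "fill in the blanks".
SUMMARY_PROMPT = """You are summarising a corporate earnings report for an analyst.

Rules:
- Use only facts stated in the report text provided below.
- If a figure is not in the text, write "not stated" instead of estimating it.
- Do not give investment advice.

Return JSON with exactly these keys:
  "revenue", "net_income", "guidance", "key_risks"

Report text:
{report_text}
"""

def build_summary_prompt(report_text: str) -> str:
    # Keeping the template fixed and only substituting the source text makes
    # responses easier to validate and compare over time.
    return SUMMARY_PROMPT.format(report_text=report_text)
```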
Ongoing optimization
Alongside continuously feeding AI models the latest data to avoid outdated responses, it’s also a good idea to keep testing and evaluating their performance over time. This not only helps you identify situations that are more likely to create hallucinations, but also enables proactive retraining to address specific issues and optimize performance.
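A lightweight way to keep score over time is a fixed evaluation set of questions with expert-verified answers, rerun after every model or data update. The sketch below assumes a hypothetical `ask_model` function and a simple contains-check grader, which you would replace with your own:

```python
# Minimal ongoing-evaluation harness: rerun a fixed set of questions with known
# answers after every model or data update and track the pass rate over time.
# `ask_model` is a placeholder for your own model-call function.

GOLDEN_SET = [
    # Illustrative entry only: use questions whose answers your experts have
    # verified, especially ones that previously triggered hallucinations.
    {"question": "Which exchange does ticker XYZ trade on?", "must_contain": "NYSE"},
]

def evaluate(ask_model) -> float:
    passed = 0
    for case in GOLDEN_SET:
        answer = ask_model(case["question"])
        if case["must_contain"].lower() in answer.lower():
            passed += 1
    pass_rate = passed / len(GOLDEN_SET)
    print(f"pass rate: {pass_rate:.0%}")
    return pass_rate
```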
Human review
While AI is a powerful tool, human oversight remains essential to catch hallucinations. Keep people in the loop and ensure they’re trained to review AI outputs and cross-reference with expert sources to validate claims before they inform decisions or actions.
Your teams should be able to evaluate errors by level of risk and push for underlying database inaccuracies to be corrected. Ultimately, this provides an additional layer of protection and helps your AI become more reliable over time.
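A simple triage step, routing outputs to reviewers by risk level so the riskiest answers always get human eyes first, might look like the sketch below; the risk rules are placeholders for your own policy:

```python
# Sketch of a human-review triage step: route AI outputs to reviewers by risk,
# so the highest-impact answers get a human check before they are used.
# The risk rules below are illustrative placeholders for your own policy.
from dataclasses import dataclass

@dataclass
class AIOutput:
    text: str
    use_case: str            # e.g. "client_chatbot", "internal_summary"
    model_confidence: float  # 0.0 - 1.0, if your stack exposes one

def review_priority(output: AIOutput) -> str:
    if output.use_case == "client_chatbot":
        return "mandatory-review"   # client-facing answers always get a human check
    if output.model_confidence < 0.7:
        return "priority-review"    # low confidence goes to the front of the queue
    return "spot-check"             # everything else is sampled periodically
```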
Leverage AI with confidence
While hallucinations pose challenges, inaction on the AI opportunity is a business risk in itself. You can’t afford to be left behind by competitors that navigate a better path between innovation and caution. EY found that, of those leaders investing more than 5% of their annual budget into AI, over 70% reported advantages in productivity and competitiveness.
Fortunately, with a strong governance framework and the right approach to mitigation, your business can confidently and reliably leverage AI for maximum advantage in capital markets. Begin by evaluating different models and your acceptable level of risk as you select or build an AI-powered data analytics stack, or start your journey towards an AI factory.
Organizations may even begin exploiting AI hallucinations for serendipitous insights in the future—helping to spark creative ideas or exposing new and unexpected data connections in financial markets.
Over time, advancing technology, better models and new tools will also undoubtedly reduce the risk from AI hallucinations. Until then, we’ll need to keep balancing innovative artificial intelligence with old-fashioned human wisdom.
For more information on what it takes to build a successful AI infrastructure, read our AI factory 101 series. To learn more about incorporating AI into your analytics, visit our KDB.AI product page. And if you’re keen to get hands-on with our tech and see it in action, book a demo.