3 Steps To Make Your Generative AI Transformative Use Cases Sticky

25 October 2023 | 6 minutes

 

The Gartner® Assess the Value and Cost of Generative AI With New Investment Criteria report provides a decision framework for assessing and realizing value from enterprise generative AI initiatives. Organizations need to architect transformative use cases that deliver competitive advantages amid industry disruption from new GenAI products and business models. I found the strategic planning assumptions that accompany the report striking. According to Gartner:

  • By 2025, growth in 90% of enterprise deployments of GenAI will slow as costs exceed value, resulting in accelerated adoption of smaller custom models.
  • By 2026, more than 80% of independent software vendors (ISVs) will have embedded generative AI capabilities in their enterprise applications, up from less than 1% today.
  • By 2028, more than 50% of enterprises that have built their own large models from scratch will abandon their efforts due to costs, complexity and technical debt in their deployments.

For the remainder of this blog, I’ll give my perspective on the techniques and technologies that can help you reach the right transformative use cases, and highlight the role of the vector database.

Leveling the Playing Field Through Generative AI

First, some context on the opportunity open to all. Generative AI is a great disruptive leveler. All firms and sectors can stand on the shoulders of giants to be successful. Past leaders in AI exploitation may not be the future leaders. They who dare, win.

From support and marketing chatbots to automated customer emails, most organizations have AI projects underway. Perceived industry leaders in big tech, high tech, and capital markets have long used structured data and “discriminative AI” to observe, predict, and find anomalies. They have served use cases such as detecting fraudulent transactions or trades, predicting equipment failures of key assets (defense, space, manufacturing, connected vehicles), personalizing social media recommendations, or dispensing robo-advice.

Some sectors, such as consumer, health, and insurance, may appear less advanced, but if you take this view, don’t be complacent. Many actuaries claim they were the original data scientists and predictive modelers, and they have a point. Reinsurance stacks are every bit as sophisticated as those of the smartest hedge funds, servicing more challenging pricing and risk regimes. I know, I’ve worked with both. Medical devices have long been a focus of AI, while predictive healthcare has already affected my health, in a good way.

Whatever your starting point, I offer three tips for successfully implementing differentiating and transformative use cases as soon as possible, with the least technical debt:

1. Prepare data well – It’s not enough to have data; it needs to be in the right format, organized, and engineered quickly. Make sure your tooling can perform essential data engineering operations efficiently, for example joining disparate datasets, filtering, and aggregating (see the short sketch after this list).

2. Select the Large Language Model (LLM) that’s right for you. There’s no single right answer. “Open source” (e.g. LLaMA, Falcon) versus proprietary (e.g. GPT, Bard) gets debated online. We at KX strive to give you the option of working with different LLMs, as we do with other LLM-neutral developer tooling such as LangChain. Guidance from AWS’s Eduardo Ordax on LLMOps and what he calls FMOps (Foundational Model Ops) is also helpful:

  1. Resource appropriately for providers, fine-tuners, and consumers. Lifecycle demands differ.
  2. Adapt a foundational model to a specific context. Consider different aspects of prompting, open source vs. proprietary models, and latency, cost, and precision.
  3. Evaluate and monitor fine-tuned models differently. With LLMs, consider different fine-tuning techniques, RLHF, and cover all aspects of bias, toxicity, IP, and privacy.
  4. Know the computation requirements of your models. ChatGPT uses an estimated 500ml of water for every five to 50 prompts. Falcon 180B was trained on a staggering 4,096 GPUs over roughly 7 million GPU hours. You’re unlikely to train Falcon yourself, but if you use it, or anything else, know what you’re consuming.

3. Determine your optimal “taker, shaper, or maker” profile

  • #taker - uses publicly available models, e.g. a general-purpose customer service chatbot with prompt engineering and a text chat interface or API, with little or no customization.
  • #shaper - integrates internal data and systems for customization, e.g. feeding data from HCM, CRM, or ERP systems into an LLM and fine-tuning on company data.
  • #maker - builds large proprietary models. To my mind, only specialists will adopt this trajectory. As previously mentioned, according to Gartner, “By 2028 more than 50% of enterprises that have built their own large models from scratch will abandon their efforts due to costs, complexity and technical debt in their deployments.”
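Returning to tip 1, here is a minimal sketch of the kind of data engineering it describes, joining disparate datasets and then filtering and aggregating them, using pandas. The tables, column names, and values are hypothetical illustrations, not part of the original post.

```python
# Minimal data-preparation sketch: join, filter, aggregate with pandas.
# All tables, columns, and values below are hypothetical illustrations.
import pandas as pd

# Two small datasets standing in for disparate internal sources
trades = pd.DataFrame({
    "account_id": [1, 1, 2, 3],
    "amount": [120.0, 80.0, 250.0, 40.0],
    "date": pd.to_datetime(["2023-09-01", "2023-10-02", "2023-10-05", "2023-08-20"]),
})
accounts = pd.DataFrame({
    "account_id": [1, 2, 3],
    "region": ["EMEA", "AMER", "EMEA"],
})

# Join disparate datasets on a shared key
enriched = trades.merge(accounts, on="account_id", how="left")

# Filter to the slice that matters for the use case
recent = enriched[enriched["date"] >= "2023-09-01"]

# Aggregate into features ready for downstream modelling or embedding
summary = recent.groupby("region")["amount"].agg(["count", "sum", "mean"]).reset_index()
print(summary)
```

The same joins, filters, and aggregations apply whether the output feeds a classical model or text that will later be embedded for a vector database.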

The Role of Vector Databases

Until now, I’ve not referenced a vector database. A vector database is a shaper technology that I believe offers the greatest opportunity to implement your golden use cases. Think of a vector database as an auxiliary to an LLM. It helps you quickly find the embeddings most similar to a given embedding. For example, you might embed the text of a search query and retrieve the 10 documents from your organization’s legal archive most similar to that query in order to assess a new case.
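To make that concrete, here is a minimal similarity-search sketch in Python. It uses sentence-transformers and NumPy rather than any particular vector database; the model name, documents, and query are illustrative assumptions, not taken from the post.

```python
# Minimal similarity-search sketch (illustrative only).
# The model name, documents, and query are hypothetical examples.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # any embedding model would do

documents = [
    "Contract dispute over late delivery of goods",
    "Employment claim concerning unfair dismissal",
    "Patent infringement case in the medical devices sector",
]

# Embed the documents and the query; normalized vectors let us use a dot product
doc_vectors = model.encode(documents, normalize_embeddings=True)
query_vector = model.encode("supplier failed to deliver on time", normalize_embeddings=True)

# Cosine similarity, then take the indices of the two closest documents
scores = doc_vectors @ query_vector
for i in np.argsort(scores)[::-1][:2]:
    print(f"{scores[i]:.3f}  {documents[i]}")
```

A vector database performs the same kind of nearest-neighbor lookup at scale, using indexes rather than the brute-force scan shown here.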

Vector databases therefore help you incorporate contextual understanding of your own data into the search. In short, use vector embeddings of your own datasets to augment queries to your favorite LLM with supporting, validated information. The result is contextualized answers for the differentiating and transformational use cases relevant to your organization, with a reduced risk of hallucinations.
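As a sketch of what that augmentation can look like, the following hypothetical Python snippet builds an LLM prompt from retrieved passages. The retrieve() helper and the passages are placeholders standing in for a real vector-database search; none of this comes from the original post or a specific product API.

```python
# Hypothetical sketch of augmenting an LLM query with retrieved context.
# retrieve() stands in for a vector-database similarity search; the passages
# and prompt wording are placeholders, not an actual product API.
def retrieve(question: str, k: int = 2) -> list[str]:
    passages = [
        "Clause 4.2: the supplier must deliver within 30 days of the order.",
        "Clause 7.1: late delivery entitles the buyer to a 5% rebate.",
    ]
    return passages[:k]  # in practice, rank your own documents by vector similarity

def build_prompt(question: str) -> str:
    context = "\n\n".join(retrieve(question))
    return (
        "Answer the question using only the context below. "
        "If the context does not contain the answer, say so.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )

print(build_prompt("What happens if delivery is late?"))
```

Grounding the prompt in retrieved, validated passages is what keeps the LLM’s answer tied to your own data rather than its training set.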

Not all vector databases are the same. Some are more flexible with datatypes, allowing combinations of generative and discriminative AI; KDB.AI is one such database. Make sure your vector database is efficient: think of the environmental and financial costs of your search. Ensure your database matches your business needs. If your organization requires searches over to-the-minute datasets, choose one that can work with edge workflows. Choose wisely.

Most organizations have no need to “make.” Instead, stand on the shoulders of giants and “shape” for optimal impact and YOUR golden use cases.
Download the Gartner ‘Assess the Value and Cost of Generative AI With New Investment Criteria’ report.

Alternatively, visit KDB.AI to learn more about vector databases.

 

Gartner, Assess the Value and Cost of Generative AI With New Investment Criteria, Rita Sallam, James Plath, Adnan Zijadic, Pri Rathnayake, Graham Waller, Matt Cain, Andrew Frank, Bern Elliot, 13 July 2023. GARTNER is a registered trademark and service mark of Gartner, Inc. and/or its affiliates in the U.S. and internationally and is used herein with permission. All rights reserved.

 

This KX article is by Steve Wilcockson

 


Start your journey to becoming an AI-first Enterprise with a personal demo.
