
What is your Data’s Half-Life?
3 Lessons from the AI and Data Science in Trading Event

20 April 2021 | 4 minutes

by Manus McGuire

The transformation of the buy-side industry to machine learning and artificial intelligence was front and center at the AI and Data Science in Trading 2021 event. Virtual events are difficult to get right, so hats off to the organizers who ably recreated the in-person conference experience with an interactive platform and a rich variety of sessions for participants to choose from.

Three themes that resonated were the renewed importance of data processing, increased cloud adoption, and the growing value of microservices.


According to the panel “Building an infrastructural data strategy”, during recent transformative years less attention was paid to existing data pipelines and how they functioned relative to the data’s half-life – the window during which actionable insights can still be derived – essentially, how quickly traders can extract value from data once it has been obtained.

Panelists discussed how, previously, enormous effort went into connecting a seemingly endless ‘noodle soup’ of data providers, order management systems, execution platforms, and the like, without enough emphasis on the turnaround time from ingestion to value. Consequently, asset managers who made shrewd infrastructure and hardware decisions were quick to point out how this helped accelerate machine learning as a powerful tool for building financial theories in their firms.

There were reasons for this infrastructure investment: the explosion in data providers sparked a “gold rush” in the market, as modern traders attempted to derive signals using machine learning models built on numerous new, varied, and often complex data sources. Panelists acknowledged that what this explosion did not anticipate was the importance of the time taken from data ingestion → readiness → value, and how that time compared with the half-life of the data. The firms that i) recognized this early, and ii) took steps to shorten this timeframe by optimizing their data pipelines, now have a competitive edge.
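The half-life framing above can be made concrete with a toy model. Assuming – purely for illustration, the article gives no formula – that a signal’s actionable value decays exponentially with a fixed half-life, pipeline latency translates directly into lost value:

```python
# Toy model of the "data half-life" idea: a signal's remaining value
# halves every `half_life_hours`, so the latency of the pipeline
# (ingestion -> readiness -> value) determines how much value is left
# by the time a trading decision can act on it.
# All numbers here are hypothetical, not from the conference.

def remaining_value(latency_hours: float, half_life_hours: float) -> float:
    """Fraction of a signal's initial value left after `latency_hours`."""
    return 0.5 ** (latency_hours / half_life_hours)

if __name__ == "__main__":
    half_life = 4.0  # hypothetical: signal loses half its value every 4 hours
    for latency in (0.5, 2.0, 8.0):
        frac = remaining_value(latency, half_life)
        print(f"pipeline latency {latency:>4.1f}h -> {frac:.0%} of value remains")
```

Under this sketch, a firm that cuts pipeline latency from 8 hours to 30 minutes captures most of the signal’s value instead of a quarter of it – which is the competitive edge the panelists described.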


Over the last decade, the cloud has been a perennial trading-panel headline topic. As this conference showed, however, the tone among buy-side participants has changed – purposefully. The tentative forecasts and cloud working groups of previous years have been replaced by far more definitive analyses and strategic, retrospective language from portfolio managers and directors with now-seasoned cloud experience.

Panelists and participants generally agreed that cloud-native solutions – when used effectively – can allow previously disparate data silos to converge.

Buy-side trends corroborate the emerging prevalence of cloud-native solutions, though the application of a given strategy – as evidenced by this conference – differs depending on firm size and maturity. As noted on the panel “Effective integration and implementation of cloud services to maintain a competitive advantage”, larger firms generally placed greater emphasis on adopting hybrid multi-cloud solutions, thus reducing their exposure to vendor lock-in (in both infrastructure and skillsets), satisfying their compliance teams’ security requirements, and reducing overall costs.

In contrast, smaller, more agile buy-side firms have relied more heavily on a single-vendor cloud approach. Given the breadth of plug-and-play components offered natively on the respective marketplaces, these smaller firms often opt to leverage the cloud vendor’s infrastructure and ecosystem – allowing them to focus almost exclusively on their own trading strategies.

The buy-side is in keeping with broader financial services trends KX has seen: two years ago, just 19% of our financial services customers were considering cloud adoption; now, 50% of them have implementations planned for this calendar year.


Perhaps unsurprisingly, much of the discourse at the conference centered on the challenges and pitfalls of building a data strategy in the modern trading world, such as:

  • Avoid clunky monolithic systems – make sure changes have clear causes and known effects.
  • False data democratization claims – data being “available” is not the same as “usable”.
  • Data gremlins – data analysis spawns more data. It’s tempting to try ALL the machine learning models, but before you start, ensure you have a clear use case, data pipeline, and storage solution in mind.

Luckily, an increasingly common type of offering counteracts some of the above. Microservices delivery has become a compelling answer to the false binary of build vs. buy: vendors offer an à la carte selection of services to choose from. Don’t like one? You don’t have to change everything – just change that!

Noticeably, this conference highlighted how the rate of change within modern trading methods – ML/AI or otherwise – has amplified a core acknowledgment: to derive value from data within its short half-life, firms need systems with effective interoperability, fault tolerance, and high performance.
