Kx Whitepaper – Mass ingestion through data loaders

12 Dec 2019

by Enda Gildea

As data volumes continue to grow, ingesting, processing, persisting, and reporting on batch updates in a timely manner becomes a significant challenge. One solution is a batch-processing model in kdb+, whose requirements and considerations differ from those of a standard tick architecture.

In the latest of our ongoing series of kdb+ technical white papers published on the Kx developer's site, Senior Kx engineer Enda Gildea discusses how a system can ingest large volumes of batch data into kdb+ quickly and efficiently.

In the whitepaper, Enda explains what batch processing is and outlines a framework for mass ingestion of data in kdb+. The framework aims to optimize I/O and to reduce the time and memory consumed by re-sorting data and maintaining on-disk attributes.
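As a rough illustration of the kind of technique the paper addresses (a minimal q sketch using a hypothetical database root `:/db and trade schema, not code from the whitepaper), a loader can write an already-sorted batch to a date partition and set the parted attribute on the on-disk sym column, rather than re-sorting the data after each load:

batch:([] sym:`a`a`b`b`c; price:5?100f; size:5?1000)   / batch arrives already sorted by sym
`:/db/2019.12.12/trade/ upsert .Q.en[`:/db] batch      / enumerate symbols and append to the splayed table
@[`:/db/2019.12.12/trade;`sym;`p#]                     / apply the parted attribute to the sym column on disk

In a real loader, subsequent batches must preserve the grouping on sym (or the attribute must be re-applied) for `p# to remain valid across the partition.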

Click on this link to read the whitepaper.
