by Enda Gildea
As data volumes continue to grow, ingesting, processing, persisting, and reporting on batch updates in a timely manner becomes a significant challenge. One solution is a batch-processing model in kdb+, whose requirements and considerations differ from those of a standard tick architecture.
In the latest of our ongoing series of kdb+ technical white papers published on the KX developer site, senior KX engineer Enda Gildea explains how a system can ingest large volumes of batch data through kdb+ quickly and efficiently.
In the whitepaper, Enda explains what batch processing is and outlines a framework for mass ingestion of data through kdb+. The framework aims to optimize I/O and to reduce the time and memory spent re-sorting data and maintaining on-disk attributes. A small sketch of that kind of operation follows below.
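As a rough illustration of the kind of work the framework targets (and not the whitepaper's own implementation), the hedged q sketch below writes a single pre-sorted batch to a date partition and applies the parted attribute on disk; the table, columns, date, and database path are illustrative assumptions.

/ illustrative batch of trade records (names and values are made up)
batch:([] sym:`AAPL`MSFT`AAPL; time:.z.p+til 3; price:3?100f; size:3?1000)

/ sort the batch in memory before writing, so the partition
/ does not need an expensive re-sort on disk afterwards
batch:`sym`time xasc batch

dir:`:db/2024.01.01/trade/          / target date partition (illustrative path)
dir upsert .Q.en[`:db] batch        / enumerate symbols and write the splayed table
@[dir;`sym;`p#]                     / apply the parted attribute to sym on disk

Sorting in memory and setting the attribute once per write keeps subsequent queries fast without paying the cost of re-sorting the whole partition; the whitepaper goes into far more depth on how to do this at scale.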
Click on this link to read the whitepaper.