Kx is a suite of enterprise-level products and solutions centered around kdb+, the world’s fastest time-series database. Kdb+ is optimized for ingesting, analyzing, and storing massive amounts of structured data. The combination of its columnar design and in-memory capabilities means kdb+ offers greater speed and efficiency than typical relational databases. Its native support for time-series operations vastly improves the speed of queries, aggregation, and analysis of structured data.
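As a sketch of what those native time-series operations look like, the q snippet below builds a small hypothetical trade table and computes one-second OHLC bars in a single query (table and column names are illustrative, not from any Kx product):

```q
/ hypothetical trade table: 10 ticks, one second apart
trade:([]time:09:30:00.000+1000*til 10;sym:10#`AAPL;price:100+0.1*til 10;size:10#100)

/ one-second OHLC bars, computed columnwise in a single qsql query
select open:first price,high:max price,low:min price,close:last price,vol:sum size
  by sym,time.second from trade
```

Grouping by `time.second` buckets the ticks without any explicit loop; the aggregations run over whole columns at once.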
We recently ran performance tests against a number of products, including InfluxData, Cassandra, Elasticsearch, MongoDB, and OpenTSDB. Read more about how we performed.
One feature that really sets Kx apart is its ability to combine streaming, in-memory, and historical data in one simple, unified platform: there is no need to acquire and integrate disparate components to build a hybrid solution.
For many of today’s businesses, the promise of big data analytics is the ability to use both streaming data and vast amounts of historical data effectively. This includes data that will accumulate over future years, as well as data a business may already have warehoused but has never been able to use. The means by which a database uses data storage, both in-memory and on-disk, can make a tremendous difference in the speed and cost of the analytics it can produce.
The diagram below illustrates the Kx suite of technology, all built on the core kdb+ engine. Included are standard programming language interfaces, SOA interoperability, complex event processors, and an adapter framework used for ingesting data from external data sources.
Kx technology was created to address one of the most basic problems in high-performance computing: the inability of traditional relational database technology to keep up with the explosive escalation of data volumes. Ever since, our singular goal has been to provide clients and partners with the most efficient and flexible tools for ultra-high-speed processing of real-time, streaming and historical data.
The basis for Kx technology is a uniquely integrated platform that includes a high-performance historical time-series columnar database called kdb+, an in-memory compute engine, and a real-time streaming processor, all unified by an expressive query and programming language called q.
Designed from the start for extreme scale, and running on industry standard servers, the kdb+ database has been proven to solve complex problems faster than any of its competitors.
FROM CHIP TO EDGE TO CLOUD
- Reduces processing time from hours / days to seconds / minutes compared to other technologies
- A single Kx core can deliver performance equivalent to hundreds of competitor cores
- Small code base (c. 500 KB)
- Unique language, q
- Lower total cost of ownership, particularly power costs
- Greater speed of solution development
- Combines streaming, in-memory, and at-rest analytics capability
- Same code handles all the above
- Lower total cost of ownership
- Increased confidence in analytics output
- Optimised and tightly integrated enterprise platform
- Ease of third party integration
- Single support call and reduced complexity
- Lower risk
- The world’s fastest time-series column-store database
- Streaming, real-time and historical data in one platform
- Kx runs on Linux, Windows, Solaris, and macOS
- Kx runs on commodity hardware, cloud, edge devices/appliances
- Expressive query (qsql) and programming language (q)
- In-memory compute engine for Complex Event Processing
- Column-level compression and sensor data noise filtering
- Integrates easily into legacy systems for performance augmentation
- Multi-core / Multi-processor / Multi-thread / Multi-server
- “Anymap” capability allows developers to query unstructured data held in kdb+
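The combination of streaming and historical data called out above is typically expressed in q with an as-of join (`aj`), which aligns each record in one table with the most recent matching record in another. A minimal sketch, with invented tables and values for illustration:

```q
/ hypothetical trade and quote tables
trade:([]sym:`AAPL`AAPL;time:09:30:01 09:30:05;px:100.1 100.3)
quote:([]sym:`AAPL`AAPL`AAPL;time:09:30:00 09:30:03 09:30:04;bid:100.0 100.2 100.25)

/ for each trade, join the prevailing quote at or before its timestamp
aj[`sym`time;trade;quote]
```

The same `aj` expression works whether `quote` is an in-memory real-time table or a memory-mapped historical partition, which is what lets one code path serve both cases.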
Performance is key to the underlying design of Kx, and the database has a number of important characteristics that contribute to its speed:
- Automatically distributes database operations across CPU cores
- Exploits vector instructions from Intel and ARM chipsets
- Small memory footprint (600 KB) exploits L1/L2 CPU caches
- Maps on-disk data into memory on demand via memory-mapped files (OS page faults)
- Natively supports array operations and parallel computations
- Database tables are first class objects in q programming language
- Wide variety of data types for compact storage
- Supports software and hardware compression
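The array-native design listed above is easiest to see in code: q operators apply to whole vectors with no explicit loops, and `peach` distributes work across secondary threads. A hedged sketch with illustrative values:

```q
/ arithmetic and aggregation apply to entire vectors at once
2*1 2 3 4                / 2 4 6 8
sums 1 2 3 4             / running sums: 1 3 6 10
5 mavg til 10            / 5-item moving average

/ parallel map over secondary threads (start q with -s N)
{sum x*x} peach (til 1000;til 2000)
```

Because each expression operates on a whole column, the interpreter can use vector instructions and keep hot data in CPU caches, which underpins several of the bullets above.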
- Streaming – Process and analyse 4.5 million bulk events per second per core
- Scan – Search in-memory tables at 4 billion records per second per core
- Batch – Ingest data at 10 million records per second per core
- Store – Accumulate 10 trillion data points (3 PB) of NYSE data
- Usage – Trusted by 17 of the world’s top 20 investment banks
- Volume – 1.6 TB of streaming data processed daily
- Scale – From a Raspberry Pi and edge devices to 20,000 cores on AWS Cloud
- Performance – Top performing time-series database according to STAC Research
- Footprint – Tiny 500 KB memory profile (fits in L1/L2 cache)
- Latency – Sub-millisecond latency for streaming event processing