
KDB-X GPU Acceleration
KDB-X GPU Acceleration brings NVIDIA AI acceleration directly into the world’s fastest time-series engine. Offload the most compute-intensive q workloads to GPUs and deliver 10×–20× performance improvements, without changing how your team works.
Same q workflows. Massively more compute.
KDB-X GPU Acceleration is an AI-enabled version of KDB-X that offloads the most compute-intensive analytics from CPU to NVIDIA GPUs while preserving familiar q workflows. Designed for large sorts, joins, aggregations, and simulations, it enables teams to shrink batch windows, increase research throughput, and unlock real-time and intraday analytics that were previously impractical on CPUs alone.
Built for the realities of capital markets
Capital markets workloads are time-critical, data-intensive, and unforgiving. KDB-X GPU Acceleration targets the true bottlenecks, where performance directly impacts risk, cost, and opportunity.
Shrink batch windows and hit earlier deadlines
GPU acceleration dramatically reduces runtimes for core kdb+/q workloads such as large sorts, qSQL-style analytics, and time-series joins. Jobs that once took hours can complete in minutes, freeing capacity to run deeper, more accurate workloads within fixed operational windows.
Make analytics interactive, not overnight
By speeding up functional qSQL, VWAP, joins, and simulations, KDB-X GPU Acceleration enables faster research iteration, intraday recalculation, and near-real-time insight, without waiting for end-of-day results or cutting analytical fidelity.
Do more with the same infrastructure footprint
Offloading compute-heavy primitives to GPUs allows teams to consolidate workloads that would otherwise require larger CPU clusters. The result is higher throughput per node, reduced infrastructure sprawl, and better performance per dollar.
KDB-X GPU Acceleration delivers GPU speed where it matters most—inside the analytics engine itself.
- GPU-accelerated sorting, aggregations, and joins
- Native support for time-series and tabular market data
- Seamless movement of data between CPU and GPU memory
- Support for modern NVIDIA GPUs, including A100, H100, and B200
- Deployment across cloud, Docker, and on-prem environments
All without requiring CUDA expertise or rewriting existing q code.
Feature highlights
q-native GPU API
Access GPU power through a familiar q interface using the .g namespace—keeping teams productive and eliminating the need for specialized GPU programming.
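As a minimal sketch of what that q-native calling style could look like: the page confirms only that the API lives in the `.g` namespace, so the specific function names below (`.g.load`, `.g.sort`, `.g.get`) are illustrative assumptions, not confirmed API.

```q
/ NOTE: the .g function names here are hypothetical placeholders,
/ shown only to illustrate a q-native GPU calling style
trades:([] sym:1000000?`AAPL`MSFT`NVDA; price:1000000?500f; size:1000000?1000)

gt:.g.load trades       / copy the table into GPU memory (assumed call)
gs:.g.sort[`price] gt   / sort on the price column, GPU-side (assumed call)
res:.g.get gs           / bring the sorted result back to CPU memory (assumed call)
```

The point of the sketch is that nothing outside the `.g` calls changes: table construction, column names, and downstream q code stay exactly as they are today.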
High-performance time-series joins
Accelerate as-of joins and other core time-series primitives that underpin TCA, risk, and execution analytics.
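On the CPU side this is the familiar `aj` primitive in standard q; a small example of the kind of join that benefits:

```q
/ standard q as-of join: match each trade with the prevailing quote
quotes:([] sym:3#`AAPL; time:09:30:00.000 09:31:00.000 09:32:00.000;
           bid:100.9 101.0 101.2; ask:101.1 101.2 101.4)
trades:([] sym:2#`AAPL; time:09:30:30.000 09:31:45.000; price:101.05 101.25)

/ each trade row picks up the most recent bid/ask at or before its time
aj[`sym`time; trades; quotes]
```

This same `aj` shape underpins TCA and execution analytics at billions-of-rows scale, which is where GPU offload pays off.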
Seamless CPU ↔ GPU data movement
Move tables between CPU and GPU memory with simple function calls, enabling hybrid workflows that combine CPU flexibility with GPU acceleration.
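A hedged sketch of the round trip, again using hypothetical `.g` names (the page confirms simple function calls, not these identifiers):

```q
/ hypothetical round trip - .g names are illustrative only
t:([] sym:500000?`a`b`c; qty:500000?100)

g:.g.load t   / CPU -> GPU transfer (assumed call)
r:.g.get g    / GPU -> CPU transfer (assumed call)
t~r           / a round trip should preserve the table exactly
```

The hybrid pattern this enables is: stage data on the GPU for the heavy aggregation or sort, then pull the (much smaller) result back for ordinary CPU-side q post-processing.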
Scalable workload distribution
Distribute GPU workloads across nodes using parallel execution, enabling higher throughput and better utilization in clustered environments.
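In plain q, intra-process distribution already looks like this with `peach` over secondary threads; the same fan-out pattern is what extends to distributing GPU work across nodes:

```q
/ start q with secondary threads, e.g.:  q -s 4
f:{sum sqrt x?1f}    / a compute-heavy task over x random floats
r:f peach 8#1000000  / run 8 such tasks in parallel
```

Each element of the input list becomes one independent unit of work, so throughput scales with the number of workers available.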
GPU-accelerated qSQL-style analytics
Run accelerated selections, aggregations, and sorting operations directly on the GPU to dramatically reduce execution time for large datasets.
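For example, the VWAP calculation mentioned earlier is a single qSQL aggregation in standard q; this query shape, selection plus grouped aggregation, is what gets offloaded:

```q
/ standard qSQL: volume-weighted average price per symbol
trades:([] sym:100000?`AAPL`MSFT`NVDA; price:100000?500f; size:100000?1000)

select vwap:size wavg price, total:sum size by sym from trades
```

`wavg` weights each price by its trade size, so the query reads the same whether it runs on 100,000 rows or 10 billion.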
High-throughput I/O (future release)
GPUDirect Storage support enables direct disk-to-GPU data transfers for large splayed kdb tables, reducing CPU overhead and accelerating large backfills and batch processing.
Solution use cases
End-of-Day (EOD) Processing
End-of-day workloads run against immovable deadlines. KDB-X GPU Acceleration shortens batch windows, reduces reruns after late data fixes, and enables deeper scenario analysis without increasing operational risk. The result is a more reliable close that scales as data volumes grow.
Value at Risk (VaR)
KDB-X GPU Acceleration enables significantly more scenarios to be run within the same time window, stabilizing tail risk measures and supporting finer-grained VaR calculations. Faster runtimes make intraday refreshes practical, improving responsiveness to market moves and reducing overnight processing risk.
Backtesting and Quant Research
Dramatically improved performance allows teams to replay longer histories, test more strategies, and run heavier parameter sweeps, without sacrificing fidelity. Faster reruns make iteration cheaper, results more robust, and research easier to operationalize.
Why choose KDB-X GPU Acceleration
Targets the real bottlenecks
Designed for large aggregations, joins, sorting, and simulations, not small workloads where GPU transfer overhead dominates.
Production-ready GPU support
Built for modern NVIDIA GPUs and enterprise deployment from day one.
Keeps teams in q
No CUDA expertise required. Same language, same workflows, massively faster execution.
Unified platform
Consolidate research, risk, execution, and analytics into one GPU-accelerated, time-aware engine without fragmented pipelines or bolt-on tooling.
Deployment and operating flexibility
KDB-X GPU Acceleration deploys wherever your workloads run: on-prem, in the cloud, or in containerized environments. It supports modern NVIDIA GPUs and integrates cleanly into existing KDB-X architectures, allowing teams to selectively accelerate the workloads that benefit most while continuing to run others on CPU.
Demo the world’s fastest database for vector, time-series, and real-time analytics
Start your journey to becoming an AI-first enterprise with 100x* more performant data and MLOps pipelines.
- Process data at unmatched speed and scale
- Build high-performance data-driven applications
- Turbocharge analytics tools in the cloud, on premise, or at the edge
*Based on time-series queries running in real-world use cases on customer environments.
Book a demo with an expert
"*" indicates required fields




