
How to Double OLTP Throughput without Code Changes: A HammerDB on FlashArray//XL How-To

When Pure Storage® FlashArray//XL™ R5 delivered 2.13 million New Orders Per Minute (NOPM) under HammerDB's TPROC-C benchmark (a 100% improvement over the previous generation), the result wasn't achieved through complex tuning, exotic configurations, or application rewrites.

It came from an architectural shift.

For database administrators tired of choosing between performance and complexity, this is a significant shift. The same SQL Server instances, running identical code, delivered double the throughput simply by moving to the NVMe-native architecture of FlashArray//XL.

In this post, I'll break down the test methodology, infrastructure topology, and configuration details that we used to establish those results so you have a replicable blueprint to validate similar performance gains in your own environment.

HammerDB: The Industry Standard for Database Benchmarking

HammerDB has established itself as a leading database benchmarking application worldwide, with Fortune 500 companies, commercial and open source database vendors, cloud service providers (CSPs), and leading hardware vendors all relying on it for performance insights.

The TPROC-C benchmark, HammerDB's fair-use implementation of the popular TPC-C standard, simulates a complete online transaction processing environment with a mix of transactions that mirrors real-world database operations. The benchmark provides a realistic assessment of how storage infrastructure performs under typical OLTP workloads, though keep in mind that every workload and business process has its own characteristics.

What makes HammerDB so well suited to storage validation is its ability to generate consistent, repeatable workloads that stress every component of the I/O stack. Unlike synthetic benchmarks that may not reflect real database behavior, HammerDB's transaction mix exercises the database code paths, locking mechanisms, and I/O patterns that production workloads encounter.
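To make that transaction mix concrete, here is a minimal Python sketch of the weighting TPROC-C approximates, based on the classic TPC-C minimum mix. HammerDB's virtual users implement these transactions as stored procedures against the warehouse schema, so this sketch only illustrates the distribution, not the workload itself.

```python
import random

# Approximate TPROC-C transaction mix (classic TPC-C minimum mix).
# HammerDB's virtual users run these as stored procedures against the
# warehouse schema; this sketch only illustrates the weighting.
TRANSACTION_MIX = [
    ("new_order",    0.45),  # the transactions counted as NOPM
    ("payment",      0.43),
    ("order_status", 0.04),
    ("delivery",     0.04),
    ("stock_level",  0.04),
]

def pick_transaction() -> str:
    """Choose the next transaction type according to the weighted mix."""
    names, weights = zip(*TRANSACTION_MIX)
    return random.choices(names, weights=weights, k=1)[0]

if __name__ == "__main__":
    sample = [pick_transaction() for _ in range(100_000)]
    for name, _ in TRANSACTION_MIX:
        print(f"{name:13s} {sample.count(name) / len(sample):.2%}")
```

New Order is the transaction counted in the NOPM figure, which is why the mix keeps it at roughly 45% of the total.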

Test Infrastructure and Methodology

Hardware Configuration

The benchmark environment utilized eight SQL Server instances running on dedicated hosts, each configured with:

  • Direct Fibre Channel connectivity to FlashArray//XL R5
  • Identical host configurations (CPU, memory, networking) to eliminate variables between test runs

FlashArray//XL R5 Specifications

The storage platform delivered the following key specifications that enabled the performance breakthrough:

  • Sub-150µs latency consistently maintained under peak load
  • 45GB/s throughput capacity across all concurrent workloads
  • Three times more IOPS per rack unit compared to competitive systems

Database Configuration

Each SQL Server instance hosted a HammerDB TPROC-C schema configured with:

  • SQL Server instances tuned for the workload, following the well-documented best practices set out in this white paper (a quick configuration check is sketched after this list)
  • Schemas populated with the same configuration across all instances to ensure consistent workload characteristics
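A quick way to confirm that all eight instances really are configured identically is to read a few instance-level settings back from each host before the run. The sketch below is a minimal example assuming Python with pyodbc and the Microsoft ODBC Driver 18 for SQL Server; the host names and the settings checked are illustrative, not the exact values from the benchmark or the white paper.

```python
import pyodbc

# Illustrative host list; replace with the eight benchmark instances.
SERVERS = ["sql-host-01", "sql-host-02"]

# A few commonly tuned instance-level settings to compare across hosts.
# (CONVERT avoids returning sql_variant values, which pyodbc cannot map.)
SETTINGS_QUERY = """
SELECT name, CONVERT(bigint, value_in_use) AS value_in_use
FROM sys.configurations
WHERE name IN ('max degree of parallelism',
               'cost threshold for parallelism',
               'max server memory (MB)');
"""

def read_settings(server: str) -> dict:
    """Return the selected sp_configure values currently in use on a host."""
    conn = pyodbc.connect(
        "DRIVER={ODBC Driver 18 for SQL Server};"
        f"SERVER={server};DATABASE=master;"
        "Trusted_Connection=yes;TrustServerCertificate=yes;"
    )
    try:
        return {name: value for name, value in conn.cursor().execute(SETTINGS_QUERY)}
    finally:
        conn.close()

if __name__ == "__main__":
    baseline = None
    for server in SERVERS:
        settings = read_settings(server)
        print(server, settings)
        if baseline is None:
            baseline = settings
        elif settings != baseline:
            print(f"WARNING: {server} differs from {SERVERS[0]}")
```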

Volume Layout and Storage Design

Simplified Storage Provisioning

One of the most striking aspects of achieving 2.13 million NOPM was the simplicity of the storage configuration. The FlashArray® hardware and Purity software architecture eliminates the complex volume layout decisions that traditionally consume DBA time and introduce performance variability.

The storage design followed Pure Storage-recommended practices (a minimal provisioning sketch follows this list):

  • No manual RAID configuration required; FlashArray handles data protection automatically.
  • No QoS tuning needed; built-in intelligence prevents workload interference.
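To illustrate how little provisioning work is involved, the sketch below creates and connects a data and a log volume per host using the legacy purestorage REST 1.x Python client (pip install purestorage). The array address, API token, host names, and volume sizes are placeholders, the hosts are assumed to already exist on the array with their Fibre Channel initiators registered, and the calls should be verified against your Purity version (newer environments may prefer the py-pure-client SDK).

```python
import purestorage

# Placeholders; supply your array's management address and an API token.
ARRAY_ADDRESS = "flasharray-xl.example.com"
API_TOKEN = "xxxx-xxxx-xxxx"

# One data and one log volume per SQL Server host (names/sizes illustrative).
HOSTS = [f"sql-host-{i:02d}" for i in range(1, 9)]

array = purestorage.FlashArray(ARRAY_ADDRESS, api_token=API_TOKEN)
try:
    for host in HOSTS:
        # Hosts are assumed to already be defined on the array with their
        # Fibre Channel initiators.
        for suffix, size in (("data", "4T"), ("log", "1T")):
            volume = f"{host}-{suffix}"
            # Name and size are the only inputs the array needs: no RAID
            # level, pool layout, or QoS policy decisions.
            array.create_volume(volume, size)
            array.connect_host(host, volume)
finally:
    array.invalidate_cookie()
```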

Data Reduction Impact

FlashArray//XL's always-on inline deduplication and compression delivered significant capacity efficiency without impacting performance (the sketch after this list shows one way to read these metrics from the array):

  • Zero performance penalty for data reduction operations
  • Reduced physical I/O requirements, contributing to overall throughput gains
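The reduction metrics themselves can be read back from the array over the same REST interface. This is a minimal sketch, again assuming the purestorage 1.x client; the field names (data_reduction, total_reduction) come from the REST 1.x space endpoint and should be verified against your Purity version.

```python
import purestorage

# Placeholders for the array management address and API token.
array = purestorage.FlashArray("flasharray-xl.example.com", api_token="xxxx-xxxx")

# GET /array?space=true returns array-wide capacity and reduction metrics.
space = array.get(space=True)
if isinstance(space, list):  # some Purity REST versions return a one-item list
    space = space[0]

print(f"Data reduction (dedupe + compression): {space['data_reduction']:.1f} to 1")
print(f"Total reduction (incl. thin provisioning): {space['total_reduction']:.1f} to 1")

array.invalidate_cookie()
```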

Performance Results

The headline result of 2.13 million NOPM demonstrates the real-world performance potential available to any organization running SQL Server on FlashArray//XL R5.

Traditional storage systems often exhibit latency spikes under heavy concurrent load, but the FlashArray//XL NVMe architecture and distributed NVRAM ensure predictable response times regardless of workload intensity.

Latency Consistency under Load

Beyond peak throughput, FlashArray//XL R5 maintained consistent sub-150µs latency throughout the benchmark run. This consistency eliminates the performance variability that creates unpredictable user experiences and complicates capacity planning.

CPU Efficiency Gains

The performance improvement extended beyond storage metrics to overall system efficiency:

  • Reduced I/O wait time, allowing more CPU cycles for application processing
  • Lower system overhead, enabling higher user concurrency per server

Run Your Own Test

To replicate these results in your environment, ensure you have:

  1. HammerDB installed on dedicated test hosts with sufficient CPU and memory
  2. FlashArray connected to the database hosts via Fibre Channel, iSCSI, or NVMe-oF (TCP, FC, or RoCE)
  3. SQL Server instances configured with appropriate sizing for your test scale
  4. Baseline measurements from your current storage infrastructure for comparison

Test Execution Steps

  1. Environment preparation:
    • Install HammerDB on test hosts following the official documentation
    • Configure SQL Server instances with identical schemas
    • Establish storage connectivity and verify NVMe protocol operation
  2. Benchmark configuration:
    • Set appropriate user counts and test duration for meaningful results
    • Configure performance monitoring to capture storage and database metrics
  3. Baseline testing:
    • Run initial tests on existing storage infrastructure
    • Document current performance levels and identify bottlenecks
    • Establish baseline metrics for comparison
  4. FlashArray//XL testing:
    • Migrate test workloads to FlashArray//XL without application changes
    • Execute identical HammerDB tests using the same parameters (a scripted driver sketch follows these steps)
    • Monitor performance metrics throughout the test duration
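For step 4, the sketch below generates a small HammerDB CLI script and runs it through hammerdbcli in auto mode. It assumes the TPROC-C schema was already built in step 1, the parameter names follow HammerDB's documented mssqls settings and should be verified against your HammerDB release, and the install path, warehouse count, virtual-user count, and timings are placeholders rather than the values behind the 2.13 million NOPM result.

```python
import pathlib
import subprocess

# Placeholder install path and test settings; tune for your environment.
HDB_HOME = pathlib.Path("/opt/HammerDB-4.12")   # assumed install location
SQL_SERVER = "sql-host-01"
WAREHOUSES = 1000
VIRTUAL_USERS = 64

# HammerDB CLI commands for a timed TPROC-C run against an existing schema.
# runtimer keeps the CLI session alive until the timed run completes, so it
# should be set longer than rampup + duration (in seconds).
RUN_SCRIPT = f"""\
dbset db mssqls
dbset bm TPROC-C
diset connection mssqls_server {SQL_SERVER}
diset tpcc mssqls_count_ware {WAREHOUSES}
diset tpcc mssqls_driver timed
diset tpcc mssqls_rampup 5
diset tpcc mssqls_duration 20
loadscript
vuset vu {VIRTUAL_USERS}
vucreate
vurun
runtimer 1800
vudestroy
"""

script_path = pathlib.Path("tprocc_run.tcl").resolve()
script_path.write_text(RUN_SCRIPT)

# "auto" mode runs the script non-interactively; NOPM is reported in the
# output when the timed run completes.
subprocess.run([str(HDB_HOME / "hammerdbcli"), "auto", str(script_path)],
               cwd=HDB_HOME, check=True)
```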

Performance Monitoring and Analysis

HammerDB provides built-in statistics and real-time performance analysis tools that complement Pure Storage monitoring capabilities. Effective benchmark analysis requires monitoring across multiple layers (a DMV query sketch follows this list):

  • Storage metrics: Latency, IOPS, throughput from the Pure1® management interface
  • Database metrics: Transaction rates, wait statistics, resource utilization from SQL Server DMVs
  • System metrics: CPU, memory, and network utilization from host monitoring tools
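On the database side, a lightweight way to capture storage-facing metrics during a run is to sample SQL Server's I/O and wait-statistics DMVs. The sketch below is a minimal example using pyodbc; the connection details are placeholders, and the figures are cumulative since instance start, so sample before and after the run to isolate the benchmark window.

```python
import pyodbc

# Placeholder connection details for one of the benchmark instances.
CONN_STR = (
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=sql-host-01;DATABASE=master;"
    "Trusted_Connection=yes;TrustServerCertificate=yes;"
)

# Cumulative I/O latency per database file since instance start.
FILE_IO_QUERY = """
SELECT DB_NAME(vfs.database_id) AS database_name,
       mf.physical_name,
       vfs.io_stall_read_ms  * 1.0 / NULLIF(vfs.num_of_reads, 0)  AS avg_read_ms,
       vfs.io_stall_write_ms * 1.0 / NULLIF(vfs.num_of_writes, 0) AS avg_write_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
  ON mf.database_id = vfs.database_id AND mf.file_id = vfs.file_id
ORDER BY database_name, mf.physical_name;
"""

# Top waits, to confirm the run is not dominated by I/O waits.
WAIT_QUERY = """
SELECT TOP (10) wait_type, waiting_tasks_count, wait_time_ms
FROM sys.dm_os_wait_stats
ORDER BY wait_time_ms DESC;
"""

conn = pyodbc.connect(CONN_STR)
cursor = conn.cursor()

print("-- per-file I/O latency --")
for row in cursor.execute(FILE_IO_QUERY):
    print(f"{row.database_name:20s} {row.physical_name:50s} "
          f"read {row.avg_read_ms or 0:.2f} ms  write {row.avg_write_ms or 0:.2f} ms")

print("-- top waits --")
for row in cursor.execute(WAIT_QUERY):
    print(f"{row.wait_type:35s} {row.waiting_tasks_count:>12d} tasks  "
          f"{row.wait_time_ms:>12d} ms")

conn.close()
```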

Beyond Benchmarks: Real-world Implications

This 100% improvement over the previous generation translates directly into real business outcomes:

  • Doubled transaction capacity for customer-facing applications
  • Halved response times for interactive workloads
  • Increased user concurrency without performance degradation
  • Improved batch processing windows for analytical workloads

Related reading: HammerDB benchmarking for SQL Server workloads in Azure with Pure Cloud Block Store

Get Performance without Compromise

The 2.13 million NOPM result achieved by FlashArray//XL R5 under HammerDB testing represents more than a benchmark milestone. It's the end of trade-offs. Database performance no longer means choosing between simplicity and speed.

With HammerDB's industry-standard benchmarking capabilities and FlashArray//XL NVMe-native architecture, you've got a proven path to transform your database performance.

Ready to validate these results in your environment? Download HammerDB from hammerdb.com and contact Pure Storage for a FlashArray//XL performance assessment to discover how your SQL Server workloads can achieve similar breakthrough performance gains.
