Case Study · August 12, 2025

Case Study: Building High-Performance Infrastructure with Lenovo

Technical Team, SRE & Performance Engineering

How we partnered with Lenovo to engineer hyperconverged infrastructure that delivers 2.7× the compute throughput of equivalent AWS instances — entirely within Uzbekistan's borders.

The Problem We Were Solving

When we began building Hyper App, the Central Asian enterprise cloud market had a fundamental problem: every available option was either local (low quality, unreliable, no SLA) or foreign (high performance, but high latency, high cost, and legally complicated for regulated industries). There was nothing in the middle.

Our goal was to deliver infrastructure that could pass the same performance bar as AWS or Azure — not approximately, but measurably, with published benchmark results that clients could independently verify. That required a hardware partnership with a vendor that could deliver Tier III-grade compute at the density and reliability we needed.

Why Lenovo ThinkAgile HX

After evaluating multiple vendors, we selected Lenovo's ThinkAgile HX series for the primary compute layer. ThinkAgile HX is a hyperconverged infrastructure (HCI) platform that integrates compute, storage, and networking into purpose-built nodes, managed by Nutanix AOS. This gives us several specific advantages over traditional three-tier architectures:

Compute Layer

  • Intel Xeon Gold 6300 series processors
  • Up to 3TB RAM per node for memory-intensive workloads
  • PCIe Gen 4 NVMe drives for sub-100μs storage latency
  • Hardware-assisted virtualisation (Intel VT-x, VT-d)

Network Fabric

  • NVIDIA (Mellanox) ConnectX-6 Dx 100GbE NICs
  • Non-blocking leaf-spine switching architecture
  • RDMA over Converged Ethernet (RoCE) for storage traffic
  • Single-digit microsecond east-west latency between nodes (via RoCE)

Benchmark Methodology and Results

We ran a structured benchmark programme comparing our infrastructure against AWS m6i.4xlarge (the closest equivalent by vCPU and RAM specification). Tests were run over 72 hours to account for variability, using industry-standard tools: sysbench for CPU throughput, fio for storage I/O, and netperf for network performance.
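A 72-hour run produces thousands of per-interval samples rather than a single number, so each headline figure has to be reduced from a long series. A minimal sketch of that aggregation step, assuming the per-interval values have already been parsed out of the sysbench/fio output (the sample values below are invented for illustration, not our published results):

```python
import statistics

def summarise(samples):
    """Reduce a series of per-interval benchmark samples to stable
    summary statistics: median, 99th percentile, and sample count."""
    ordered = sorted(samples)
    p99_index = max(0, int(len(ordered) * 0.99) - 1)
    return {
        "median": statistics.median(ordered),
        "p99": ordered[p99_index],
        "n": len(ordered),
    }

# Hypothetical per-minute throughput samples (events/sec) from a
# long-running sysbench threads test -- values are illustrative only.
cpu_samples = [15100, 15250, 14980, 15300, 15120, 15200]
print(summarise(cpu_samples))
```

Reporting the median rather than the mean keeps a short burst of noisy intervals from skewing the headline number; the p99 shows how wide the variability band actually was.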

Results: 2.7× higher CPU throughput on compute-bound workloads (sysbench threads test, 64 threads). 3.1× higher random read IOPS on storage (fio 4K random read, queue depth 32). Network throughput to external endpoints was comparable, though our intra-region latency to Tashkent-based clients was dramatically lower due to physical proximity.
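As a worked example of how the multiples above are derived, each one is simply the ratio of our median throughput to the baseline's. The absolute numbers below are hypothetical placeholders chosen only to illustrate the arithmetic:

```python
def speedup(ours, baseline):
    """Ratio of our median throughput to the baseline's,
    rounded to one decimal place for reporting."""
    return round(ours / baseline, 1)

# Placeholder medians: sysbench events/sec and fio 4K random-read IOPS.
print(speedup(41_000, 15_200))      # CPU throughput multiple
print(speedup(1_480_000, 477_000))  # storage IOPS multiple
```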

Our first banking client — a regional payments processor — saw transaction processing throughput increase by 2.4× after migration, and average transaction latency drop from 92ms (AWS Frankfurt) to under 4ms. These are independent measurements from their own APM tooling, not our benchmarks.

"We ran the benchmarks ourselves before committing. The numbers were not marketing — they were real. And the migration was cleaner than we expected."
— Head of Infrastructure, Central Asia FinTech (name withheld by request)

What This Means for Your Workloads

The practical implication of 2.7× compute throughput is that most clients can run the same workload on fewer, smaller instances — which directly reduces the monthly bill. Combined with the elimination of egress fees and the removal of Law 213 compliance overhead, the typical client saves 40–50% on their total cloud cost while getting meaningfully better performance.
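To make the sizing arithmetic concrete, here is a minimal sketch of the instance-count math, assuming a workload currently served by some number of baseline instances and a measured per-instance speedup factor (the 2.7× figure above; the baseline count in the example is hypothetical):

```python
import math

def instances_needed(baseline_instances, speedup):
    """Instances required to serve the same workload when each
    instance delivers `speedup` times the baseline throughput."""
    return math.ceil(baseline_instances / speedup)

# A workload currently sized on 12 baseline instances, at 2.7x
# per-instance throughput, fits on 5 same-size instances.
print(instances_needed(12, 2.7))
```

The ceiling matters: fractional instances don't exist, so the saving comes in steps, and headroom for peak load should be added on top of this floor.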

If you want to run the benchmark yourself, we can provision a free trial environment. We publish our methodology and you're welcome to use your own test suite.