Notes on tuning Postgres for CPU and memory benchmarking

Recently I wanted to measure the impact of NUMA placement and huge pages on the performance of Postgres running in a VM on a Nutanix node. To do this I needed to drive Postgres to do real transactions while generating very little jitter/noise from the filesystem and storage. After reading a lot of blogs I came up with a process and a set of postgresql.conf tuneables that allowed me to run the HammerDB TPROC-C workload (TPC-C-like) with very low run-to-run variation, around 0.3% (standard deviation/mean).

The tunings are not meant to represent best practices, and running repeatedly without manually vacuuming or doing a restore will create problems, because I am disabling autovacuum (see this discussion with HammerDB author Steve Shaw here and here).
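
As a rough sketch of the kind of settings involved (disabling autovacuum is the only tuning confirmed above; the other parameters and values are illustrative assumptions, not the exact postgresql.conf used here), the overrides can be applied with ALTER SYSTEM:

psql -c "ALTER SYSTEM SET autovacuum = off;"           # confirmed above: vacuum manually or restore between runs
psql -c "ALTER SYSTEM SET shared_buffers = '64GB';"    # illustrative: keep the working set in memory (needs a restart)
psql -c "ALTER SYSTEM SET synchronous_commit = off;"   # illustrative: keep commit-time IO waits off the critical path
psql -c "ALTER SYSTEM SET max_wal_size = '100GB';"     # illustrative: space out checkpoints during a run
psql -c "SELECT pg_reload_conf();"                     # reloads autovacuum/max_wal_size; shared_buffers needs a restart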

Results

I have put the benchmark results below, but the main point of this post is to discuss the method that allows me to generate very repeatable Postgres benchmark results where the CPU/memory is the limiting bottleneck. The screenshot below shows 5 runs back-to-back. From top to bottom the output shows:

  • SQL commits per minute
  • Database VM CPU usage per core
  • Memory bandwidth, from Intel PCM running on the AHV hypervisor host (a collection sketch appears below)
  • Database VM IO rates
Multiple benchmark runs with consistent, low-jitter results
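
A hedged sketch of how these signals can be collected (the tool invocations below are examples/assumptions, not necessarily the exact commands behind the screenshot):

pcm-memory 1        # Intel PCM on the AHV host: per-socket memory bandwidth, sampled every second
mpstat -P ALL 1     # inside the database VM: per-core CPU usage
iostat -xm 1        # inside the database VM: IO rates (HammerDB itself reports the transaction rate)
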
Continue reading

Running the MLPerf Storage benchmark on Nutanix Files

Some technical notes on our submission to the benchmark committee.

Background

For the past few months, engineers from Nutanix have been participating in the MLPerf™ Storage benchmark, which is designed to measure the storage performance required for ML training workloads.

We are pleased with how well our general-purpose file server has delivered against the demands of this high-throughput workload.

Benchmark throughput and dataset
  • 125,000 files
  • 16.7 TB of data, around 30% of the usable capacity (no “short stroking”)
  • File sizes ranging from 57 MB to 213 MB
  • NFS v4 over Ethernet
  • 5 GB/s delivered per compute node from a single NFSv4 mountpoint
  • 25 GB/s delivered across 5 compute nodes from a single NFSv4 mountpoint

The dataset was 125,000 files consuming 16.7 TB; the file sizes ranged from 57 MB to 213 MB. There is no temporal or spatial hotspot (meaning that the entire dataset is read), so there is no opportunity to cache the data in DRAM; the data is being accessed from NVMe flash media. For our benchmark submission we used standard NFSv4 and standard Ethernet with the same Nutanix file-serving software that already powers everything from VDI users' home directories to medical images and more. No InfiniBand or special-purpose parallel filesystems were harmed in this benchmark submission.
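
For context, here is a minimal sketch of how a compute node consumes such an export over standard NFSv4 (the server name, export path, file name and mount options are illustrative assumptions, not our submission configuration):

mount -t nfs4 -o vers=4.1,rsize=1048576,wsize=1048576 files.example.com:/mlperf /mnt/mlperf
dd if=/mnt/mlperf/dataset/file_0001 of=/dev/null bs=1M status=progress   # quick single-file read check; the MLPerf Storage benchmark drives the real access pattern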

Continue reading

Why does my SSD not issue 1MB IOs?

First things first

https://commons.wikimedia.org/wiki/File:CDC9762-smd-drive.jpg
CDC 9762 SMD disk drive from 1974

Why do we tend to use 1MB IO sizes for throughput benchmarking?

To achieve the maximum throughput on a storage device, we usually use a large IO size to maximize the amount of data transferred per IO request. The idea is to make the ratio of data transferred to IO requests as large as possible, reducing the per-request CPU overhead so that we can get as close to the device bandwidth as possible. To take advantage of prefetching, and to reduce the need for head movement on rotational devices, a sequential access pattern is used.

For historical reasons, many storage testers will use a 1MB IO size for sequential testing. A typical fio command line might look something like this.

fio --name=read --rw=read --bs=1m --direct=1 --filename=/dev/sda
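
Whether the device actually receives 1MB requests is a separate question: the block layer caps each request and may split a 1MB submission before it reaches the SSD. A quick way to check (the device name here is just an example):

cat /sys/block/sda/queue/max_sectors_kb      # per-request cap the kernel will issue, in KB
cat /sys/block/sda/queue/max_hw_sectors_kb   # hardware limit reported by the device, in KB
iostat -x sda 1                              # average request size actually issued (areq-sz in KB; avgrq-sz in sectors on older sysstat)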
Continue reading

Paper: A Nine Year Study of File System and Storage Benchmarking

A 2007 paper that still has a lot to say on the subject of benchmarking storage and filesystems. It is primarily aimed at researchers and developers, but it is relevant to anyone about to embark on a benchmarking effort.

  • Use a mix of macro and micro benchmarks
  • Understand what you are testing; cached results are fine, as long as that is what you had intended.

The authors are clear on why benchmarks remain important:

“Ideally, users could test performance in their own settings using real workloads. This transfers the responsibility of benchmarking from author to user. However, this is usually impractical because testing multiple systems is time consuming, especially in that exposing the system to real workloads implies learning how to configure the system properly, possibly migrating data and other settings to the new systems, as well as dealing with their respective bugs.”

We cannot expect end-users to be experts in benchmarking. It is our duty as experts to provide the tools (benchmarks) that enable users to make purchasing decisions without requiring years of benchmarking expertise.