Testing ZFS vs ext4 with sysbench (Google Cloud)

I’ve had a chance to revisit ZFS lately and decided to take some more notes. One highlight of this round is revalidating a quick performance test against ext4. The test was run on a Google Compute Engine n1-highmem-8 instance (8 vCPU, 52GB RAM) with 2x375G local SSDs attached via NVMe and 4x500G standard persistent disks, using the Ubuntu 16.04 cloud image.
  • For ext4, I simply created a RAID10 software RAID volume.
  • For ZFS, I used a RAID10 pool with the local SSDs as separate SLOG.
    • innodb_flush_method=O_DSYNC
    • innodb_doublewrite=0
  • The tests are IO bound with only 1G buffer pool and 40G+ of total InnoDB data.
  • sysbench read/write, index-only update, and read-only tests were executed against Percona Server 5.7.19.
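For reference, the InnoDB settings called out above translate into a my.cnf fragment like this (a sketch covering only the options mentioned; everything else was left at its defaults):

```ini
[mysqld]
# Redo log writes are opened with O_DSYNC, so they are always synchronous
innodb_flush_method     = O_DSYNC
# ZFS copy-on-write semantics make the doublewrite buffer redundant
innodb_doublewrite      = 0
# Deliberately small so the workload stays IO bound against 40G+ of data
innodb_buffer_pool_size = 1G
```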
Commands can be found in this gist, including the MySQL Sandbox command used to create the test instances.
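The gist has the exact commands; as a rough sketch, the pool layout described above would be created along these lines (the pool name and device names here are assumptions, not the ones used in the test):

```shell
# RAID10-style pool: two mirrored pairs of persistent disks,
# with the two local NVMe SSDs as a mirrored SLOG.
# Pool and device names are assumed for illustration.
zpool create -o ashift=12 mysqldata \
  mirror /dev/sdb /dev/sdc \
  mirror /dev/sdd /dev/sde \
  log mirror /dev/nvme0n1 /dev/nvme0n2
```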
Let me explain the ZFS parameters first:
  • compression=lz4  We use compression unless there is a compelling reason not to. It is faster to compress, read/write the smaller result, and decompress than to read/write the same amount of uncompressed data from disk.
  • recordsize=16k/128k  InnoDB data file IO is aligned to its 16k page size, while redo log writes can be larger depending on how much data is flushed from the log buffer, hence the larger record size for the log dataset.
  • ashift=12  Most modern storage devices use 4K physical blocks even though they sometimes advertise 512B. Google storage uses 4K based on my tests.
  • logbias=throughput  Bias synchronous writes toward the main pool rather than the separate ZIL (SLOG); this improves throughput and gets better as the number of pool disks increases. The other option is latency, for when the latency of synchronous writes matters more. Supposedly there are no writes to the SLOG when using throughput, but that is not what our test showed, which needs further verification.
  • atime=off  Avoids access-time metadata updates on the MySQL data files and in the ARC (Adaptive Replacement Cache).
  • sync=standard  This instructs ZFS to sync to disk only when asked. For example, in our test we use innodb_flush_method=O_DSYNC, which means redo log writes are always synchronous while data files are flushed to disk when MySQL calls fsync().
  • primarycache=metadata  We did not use an L2ARC; it would have helped with the read-only tests, but I wanted to see how far both configurations could drive the physical disks.
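Put together, the properties above can be applied pool-wide, with separate datasets so data files and redo logs get different record sizes. A sketch, again with assumed pool and dataset names:

```shell
# Pool-wide defaults (pool name is assumed)
zfs set compression=lz4 atime=off logbias=throughput \
        sync=standard primarycache=metadata mysqldata
# Separate datasets: 16k records for data files, 128k for the redo log
zfs create -o recordsize=16k  mysqldata/data
zfs create -o recordsize=128k mysqldata/log
```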
The results:
The ZFS numbers show between 1.5x and 2x better QPS and TPS, along with better overall latency. While this is good, I am not entirely sure whether O_DIRECT is the best flush method on Google storage. I would assume GCP persistent disks are POSIX compliant, and if that is the case, I’d also need to test O_DSYNC.
For now, the potential is really exciting:
  • With ZFS compression, depending on how compressible the data is, we can fit more data onto the same hardware configuration than with ext4.
  • Take advantage of ZFS snapshots for backups, especially for large datasets, and transfer them directly between nodes.
  • Support for encryption (pending upstream merge, another blog) with the same or better performance than LUKS/dm-crypt.
GCP has support for disk encryption and snapshots, so ZFS may seem redundant there. However, ZFS snapshots have advantages when you use multiple disks, and you can send them directly to another node (think PXC SST for large data, another blog :)). I also intend to redo these tests when I get my hands on the right hardware.
What’s next?
  • Test with O_DSYNC.
  • How much impact does logbias=latency have with the NVMe SLOG?
  • Will ZFS + snapshots on large datasets be better with PXC?
  • Measure encryption performance against LUKS.