ZFS Write Throttle Patch Benchmarks
We want to understand any performance changes introduced by the ZFS patch "Illumos #4045 write throttle & i/o scheduler performance work". The Linux port of this patch is available here. The benchmark program fio was run in a variety of configurations, as presented below. Builds of ZFS with and without the patch of interest were tested, and the results were plotted alongside each other for comparison.
Note: an earlier run of these tests revealed a lock contention issue that degraded performance at high thread counts; those results are available here. That effect was eliminated by Illumos patch 4347, which was included in the "patched" runs.
- The "unpatched" runs were running https://github.com/zfsonlinux/zfs/commit/b695c34.
- The "patched" runs were running https://github.com/zfsonlinux/zfs/commit/14cecbb plus https://github.com/zfsonlinux/zfs/pull/1696. (The patched runs were a few commits ahead of the unpatched runs on the master branch, but those commits mostly related to packaging and other non-performance-critical fixes.)
- All fio runs were performed with the options below, where --bs, --numjobs, and --rw were adjusted appropriately; a sketch of the full parameter sweep follows the command.
fio \
    --bs=128k \
    --rw=write \
    --fallocate=none \
    --overwrite=0 \
    --numjobs=1 \
    --iodepth=10 \
    --time_based \
    --runtime=1m \
    --filesize=225g \
    --size=10g \
    --randrepeat=0 \
    --use_os_rand=1 \
    --name=test \
    --directory=/tank \
    --minimal
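For orientation, here is a minimal sketch of how that sweep might be scripted. The results directory, the output naming, and the set of --rw workloads are assumptions for illustration, not the harness actually used:

#!/bin/bash
# Hypothetical driver for the benchmark matrix described in this report.
# Block sizes and thread counts come from the text; the --rw set is assumed.
mkdir -p results
for bs in 128k 8k; do
    for rw in write randwrite; do
        for jobs in 1 2 4 8 16 32 64 128; do
            fio --bs=$bs --rw=$rw --numjobs=$jobs \
                --fallocate=none --overwrite=0 --iodepth=10 \
                --time_based --runtime=1m --filesize=225g --size=10g \
                --randrepeat=0 --use_os_rand=1 \
                --name=test --directory=/tank --minimal \
                > results/$bs-$rw-$jobs.terse
        done
    done
done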
- Defaults were used for all configuration options.
- ZFS and SPL were built with debugging disabled.
- Each measurement was averaged over five separate runs.
- A brand-new zpool was created and the modules were reloaded between runs (see the sketch following this list).
- We measured 128k and 8k block sizes.
- We tested four zpool configurations: one mirrored pair, seven mirrored pairs (RAID10), one 8+2 RAIDZ2 vdev, and seven 8+2 RAIDZ2 vdevs.
- The number of fio threads ranged over powers of 2 from 1 to 128.
- Each run was time-limited to 60 seconds.
- Results are plotted on a log-2 scale for both axes.
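A minimal sketch of the per-measurement procedure, assuming fio terse (v3) output in which field 48 is the write bandwidth in KB/s (verify against the fio HOWTO for the version in use). The pool layout shown is the single mirrored pair, and the script is illustrative rather than the exact harness used:

#!/bin/bash
# Hypothetical wrapper for one measurement: rebuild the pool and reload the
# modules before each run, then average write bandwidth over five runs.
for i in 1 2 3 4 5; do
    zpool destroy tank 2>/dev/null || true
    modprobe -r zfs spl
    modprobe zfs                          # spl is pulled in as a dependency
    zpool create -f tank mirror A1 B1     # substitute the layout under test
    fio --bs=128k --rw=write --fallocate=none --overwrite=0 \
        --numjobs=1 --iodepth=10 --time_based --runtime=1m \
        --filesize=225g --size=10g --randrepeat=0 --use_os_rand=1 \
        --name=test --directory=/tank --minimal
done | cut -d';' -f48 \
    | awk '{ sum += $1 } END { printf "%.0f KB/s mean over %d runs\n", sum/NR, NR }'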
- The patched runs outperformed the unpatched runs for all but the seven-vdev 8+2 RAIDZ2 pool configuration.
- The patched version performed as well as or better than the unpatched version for all samples.
- The patched version is a clear winner in most cases.
- The patched version shows a small dip relative to the unpatched version for the single-mirrored-pair pool at high thread counts.
- Otherwise, the patched and unpatched versions perform nearly identically.
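For reference, the log-2 plotting described above corresponds to a gnuplot setup along these lines; the data file names and column layout are assumptions:

gnuplot <<'EOF'
set terminal png size 800,600
set output "write-bandwidth.png"
set logscale xy 2                 # log-2 scale on both axes, as in the plots
set xlabel "fio threads"
set ylabel "bandwidth (KB/s)"
plot "patched.dat"   using 1:2 with linespoints title "patched", \
     "unpatched.dat" using 1:2 with linespoints title "unpatched"
EOF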
All drives were Hitachi 2 TB SATA drives in external 60-bay JBODs connected via LSI Fusion-MPT SAS-2 controllers.
# zeno1 /root > sg_inq /dev/disk/by-vdev/A1
standard INQUIRY:
PQual=0 Device_type=0 RMB=0 version=0x06 [SPC-4]
[AERC=0] [TrmTsk=0] NormACA=0 HiSUP=1 Resp_data_format=2
SCCS=0 ACC=0 TPGS=0 3PC=0 Protect=0 BQue=0
EncServ=0 MultiP=0 [MChngr=0] [ACKREQQ=0] Addr16=0
[RelAdr=0] WBus16=0 Sync=0 Linked=0 [TranDis=0] CmdQue=1
[SPI: Clocking=0x0 QAS=0 IUS=0]
length=74 (0x4a) Peripheral device type: disk
Vendor identification: ATA
Product identification: Hitachi HDS72202
Product revision level: A3EA
Unit serial number: JK1170YAHU8JDP
# zeno1 /root > lspci | grep -i LSI
02:00.0 Serial Attached SCSI controller: LSI Logic / Symbios Logic SAS2008 PCI-Express Fusion-MPT SAS-2 [Falcon] (rev 02)
03:00.0 Serial Attached SCSI controller: LSI Logic / Symbios Logic SAS2008 PCI-Express Fusion-MPT SAS-2 [Falcon] (rev 02)
04:00.0 Serial Attached SCSI controller: LSI Logic / Symbios Logic SAS2008 PCI-Express Fusion-MPT SAS-2 [Falcon] (rev 02)
85:00.0 Serial Attached SCSI controller: LSI Logic / Symbios Logic SAS2008 PCI-Express Fusion-MPT SAS-2 [Falcon] (rev 02)
86:00.0 Serial Attached SCSI controller: LSI Logic / Symbios Logic SAS2008 PCI-Express Fusion-MPT SAS-2 [Falcon] (rev 02)
zeno1: /sbin/zpool create -f tank mirror A1 B1
zeno1:
zeno1: zpool list
zeno1: NAME    SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
zeno1: tank   1.81T    91K  1.81T     0%  1.00x  ONLINE  -
zeno1:
zeno1: zpool status tank
zeno1:   pool: tank
zeno1:  state: ONLINE
zeno1:   scan: none requested
zeno1: config:
zeno1:
zeno1:     NAME        STATE     READ WRITE CKSUM
zeno1:     tank        ONLINE       0     0     0
zeno1:       mirror-0  ONLINE       0     0     0
zeno1:         A1      ONLINE       0     0     0
zeno1:         B1      ONLINE       0     0     0
zeno1:
zeno1: errors: No known data errors
zeno3: /sbin/zpool create -f tank raidz2 A31 B31 C31 D31 E31 F31 G31 H31 I31 J31
zeno3:
zeno3: zpool list
zeno3: NAME    SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
zeno3: tank   18.1T   242K  18.1T     0%  1.00x  ONLINE  -
zeno3:
zeno3: zpool status tank
zeno3:   pool: tank
zeno3:  state: ONLINE
zeno3:   scan: none requested
zeno3: config:
zeno3:
zeno3:     NAME        STATE     READ WRITE CKSUM
zeno3:     tank        ONLINE       0     0     0
zeno3:       raidz2-0  ONLINE       0     0     0
zeno3:         A31     ONLINE       0     0     0
zeno3:         B31     ONLINE       0     0     0
zeno3:         C31     ONLINE       0     0     0
zeno3:         D31     ONLINE       0     0     0
zeno3:         E31     ONLINE       0     0     0
zeno3:         F31     ONLINE       0     0     0
zeno3:         G31     ONLINE       0     0     0
zeno3:         H31     ONLINE       0     0     0
zeno3:         I31     ONLINE       0     0     0
zeno3:         J31     ONLINE       0     0     0
zeno3:
zeno3: errors: No known data errors
zeno2: /sbin/zpool create -f tank mirror A17 B17 mirror A18 B18 mirror A19 B19 mirror A20 B20 mirror A21 B21 mirror A22 B22 mirror A23 B23
zeno2: zpool list
zeno2: NAME    SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
zeno2: tank   12.7T   112K  12.7T     0%  1.00x  ONLINE  -
zeno2:
zeno2: zpool status tank
zeno2:   pool: tank
zeno2:  state: ONLINE
zeno2:   scan: none requested
zeno2: config:
zeno2:
zeno2:     NAME        STATE     READ WRITE CKSUM
zeno2:     tank        ONLINE       0     0     0
zeno2:       mirror-0  ONLINE       0     0     0
zeno2:         A17     ONLINE       0     0     0
zeno2:         B17     ONLINE       0     0     0
zeno2:       mirror-1  ONLINE       0     0     0
zeno2:         A18     ONLINE       0     0     0
zeno2:         B18     ONLINE       0     0     0
zeno2:       mirror-2  ONLINE       0     0     0
zeno2:         A19     ONLINE       0     0     0
zeno2:         B19     ONLINE       0     0     0
zeno2:       mirror-3  ONLINE       0     0     0
zeno2:         A20     ONLINE       0     0     0
zeno2:         B20     ONLINE       0     0     0
zeno2:       mirror-4  ONLINE       0     0     0
zeno2:         A21     ONLINE       0     0     0
zeno2:         B21     ONLINE       0     0     0
zeno2:       mirror-5  ONLINE       0     0     0
zeno2:         A22     ONLINE       0     0     0
zeno2:         B22     ONLINE       0     0     0
zeno2:       mirror-6  ONLINE       0     0     0
zeno2:         A23     ONLINE       0     0     0
zeno2:         B23     ONLINE       0     0     0
zeno2:
zeno2: errors: No known data errors
zeno4: /sbin/zpool create -f tank raidz2 A47 B47 C47 D47 E47 F47 G47 H47 I47 J47 raidz2 A48 B48 C48 D48 E48 F48 G48 H48 I48 J48 raidz2 A49 B49 C49 D49 E49 F49 G49 H49 I49 J49 raidz2 A50 B50 C50 D50 E50 F50 G50 H50 I50 J50 raidz2 A51 B51 C51 D51 E51 F51 G51 H51 I51 J51 raidz2 A52 B52 C52 D52 E52 F52 G52 H52 I52 J52 raidz2 A53 B53 C53 D53 E53 F53 G53 H53 I53 J53
zeno4:
zeno4: zpool list
zeno4: NAME    SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
zeno4: tank    127T   268K   127T     0%  1.00x  ONLINE  -
zeno4:
zeno4: zpool status tank
zeno4:   pool: tank
zeno4:  state: ONLINE
zeno4:   scan: none requested
zeno4: config:
zeno4:
zeno4:     NAME        STATE     READ WRITE CKSUM
zeno4:     tank        ONLINE       0     0     0
zeno4:       raidz2-0  ONLINE       0     0     0
zeno4:         A47     ONLINE       0     0     0
zeno4:         B47     ONLINE       0     0     0
zeno4:         C47     ONLINE       0     0     0
zeno4:         D47     ONLINE       0     0     0
zeno4:         E47     ONLINE       0     0     0
zeno4:         F47     ONLINE       0     0     0
zeno4:         G47     ONLINE       0     0     0
zeno4:         H47     ONLINE       0     0     0
zeno4:         I47     ONLINE       0     0     0
zeno4:         J47     ONLINE       0     0     0
zeno4:       raidz2-1  ONLINE       0     0     0
zeno4:         A48     ONLINE       0     0     0
zeno4:         B48     ONLINE       0     0     0
zeno4:         C48     ONLINE       0     0     0
zeno4:         D48     ONLINE       0     0     0
zeno4:         E48     ONLINE       0     0     0
zeno4:         F48     ONLINE       0     0     0
zeno4:         G48     ONLINE       0     0     0
zeno4:         H48     ONLINE       0     0     0
zeno4:         I48     ONLINE       0     0     0
zeno4:         J48     ONLINE       0     0     0
zeno4:       raidz2-2  ONLINE       0     0     0
zeno4:         A49     ONLINE       0     0     0
zeno4:         B49     ONLINE       0     0     0
zeno4:         C49     ONLINE       0     0     0
zeno4:         D49     ONLINE       0     0     0
zeno4:         E49     ONLINE       0     0     0
zeno4:         F49     ONLINE       0     0     0
zeno4:         G49     ONLINE       0     0     0
zeno4:         H49     ONLINE       0     0     0
zeno4:         I49     ONLINE       0     0     0
zeno4:         J49     ONLINE       0     0     0
zeno4:       raidz2-3  ONLINE       0     0     0
zeno4:         A50     ONLINE       0     0     0
zeno4:         B50     ONLINE       0     0     0
zeno4:         C50     ONLINE       0     0     0
zeno4:         D50     ONLINE       0     0     0
zeno4:         E50     ONLINE       0     0     0
zeno4:         F50     ONLINE       0     0     0
zeno4:         G50     ONLINE       0     0     0
zeno4:         H50     ONLINE       0     0     0
zeno4:         I50     ONLINE       0     0     0
zeno4:         J50     ONLINE       0     0     0
zeno4:       raidz2-4  ONLINE       0     0     0
zeno4:         A51     ONLINE       0     0     0
zeno4:         B51     ONLINE       0     0     0
zeno4:         C51     ONLINE       0     0     0
zeno4:         D51     ONLINE       0     0     0
zeno4:         E51     ONLINE       0     0     0
zeno4:         F51     ONLINE       0     0     0
zeno4:         G51     ONLINE       0     0     0
zeno4:         H51     ONLINE       0     0     0
zeno4:         I51     ONLINE       0     0     0
zeno4:         J51     ONLINE       0     0     0
zeno4:       raidz2-5  ONLINE       0     0     0
zeno4:         A52     ONLINE       0     0     0
zeno4:         B52     ONLINE       0     0     0
zeno4:         C52     ONLINE       0     0     0
zeno4:         D52     ONLINE       0     0     0
zeno4:         E52     ONLINE       0     0     0
zeno4:         F52     ONLINE       0     0     0
zeno4:         G52     ONLINE       0     0     0
zeno4:         H52     ONLINE       0     0     0
zeno4:         I52     ONLINE       0     0     0
zeno4:         J52     ONLINE       0     0     0
zeno4:       raidz2-6  ONLINE       0     0     0
zeno4:         A53     ONLINE       0     0     0
zeno4:         B53     ONLINE       0     0     0
zeno4:         C53     ONLINE       0     0     0
zeno4:         D53     ONLINE       0     0     0
zeno4:         E53     ONLINE       0     0     0
zeno4:         F53     ONLINE       0     0     0
zeno4:         G53     ONLINE       0     0     0
zeno4:         H53     ONLINE       0     0     0
zeno4:         I53     ONLINE       0     0     0
zeno4:         J53     ONLINE       0     0     0
zeno4:
zeno4: errors: No known data errors
All test nodes have 24 GB of RAM and 16 logical Intel Xeon CPUs (two quad-core E5620 processors with hyperthreading, per the cpuinfo below). They run CHAOS 5, a RHEL 6 derivative.
# zeno1 /root > cat /proc/cpuinfo
...
processor : 15
vendor_id : GenuineIntel
cpu family : 6
model : 44
model name : Intel(R) Xeon(R) CPU E5620 @ 2.40GHz
stepping : 2
cpu MHz : 2401.000
cache size : 12288 KB
physical id : 1
siblings : 8
core id : 10
cpu cores : 4
apicid : 53
initial apicid : 53
fpu : yes
fpu_exception : yes
cpuid level : 11
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good xtopology nonstop_tsc aperfmperf pni dtes64 monitor ds_cpl vmx smx est tm2 ssse3 cx16 xtpr pdcm pcid dca sse4_1 sse4_2 popcnt lahf_lm ida arat epb dts tpr_shadow vnmi flexpriority ept vpid
bogomips : 4799.89
clflush size : 64
cache_alignment : 64
address sizes : 40 bits physical, 48 bits virtual
power management:
# zeno1 /root > free -m
             total       used       free     shared    buffers     cached
Mem:         23894      10431      13462          0          0       1489
-/+ buffers/cache:       8941      14952
Swap:            0          0          0
# zeno1 /root > uname -a
Linux zeno1 2.6.32-358.14.1.2chaos.ch5.1.1.x86_64 #1 SMP Thu Aug 29 16:18:00 PDT 2013 x86_64 x86_64 x86_64 GNU/Linux