The balug VM (which also hosts SF-LUG non-list content)
just got a performance/reliability boost, at least in its
nominal hosting environment. Its virtual disk had been on
ZFS (under FUSE) with aggressive compression and deduplication,
which optimized for space (atop SSD), but gave performance
that was sometimes problematic (notably on a physical host
not having an overabundance of RAM). Anyway, I just moved
that image to plain storage on SSD - with none of those ZFS
layers that optimized space, but not performance ... much
snappier and faster now.
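For reference, that sort of space optimization gets set per
ZFS dataset - e.g. something roughly like the below (dataset
name and exact compression level here are merely
illustrative, not necessarily what was actually used):
# zfs set compression=gzip-9 tank/balug
# zfs set dedup=on tank/balug
# zfs get compression,dedup,compressratio tank/balug
#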
As for drive(s) on the alternate ("vicki") physical host,
those are mirrored spinning rust ... but quite "fast enough"
for balug VM purposes. On the nominal host, that SSD
for the balug VM isn't (yet) mirrored ... but fairly likely
I'll add that in the near future (the VM not only gets
relatively regular backups, but each live migration also
creates an inherent backup on the physical host it moves
from - and that's generally done at least 4 times per
month). Both hosting environments have LVM layers
(negligible performance hit, substantially aids
manageability), and the nominal host also has a LUKS
encryption layer (again, negligible performance hit given
the power of non-ancient CPU(s)/cores).
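One can sanity check that LUKS claim against raw CPU crypto
throughput with cryptsetup's built-in benchmark:
# cryptsetup benchmark
#
On any non-ancient CPU with AES-NI, that typically reports
aes-xts rates well above even the post-move dd throughput
seen below.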
before:
# time dd if=balug-sda bs=512 of=/dev/null
33554432+0 records in
33554432+0 records out
17179869184 bytes (17 GB) copied, 288.979 s, 59.5 MB/s
real 4m48.980s
user 0m2.920s
sys 0m20.852s
#
after:
# time dd if=balug-sda bs=512 of=/dev/null
33554432+0 records in
33554432+0 records out
17179869184 bytes (17 GB) copied, 44.2483 s, 388 MB/s
real 0m44.251s
user 0m2.852s
sys 0m29.280s
#
Or with a different block size (128 KiB, which is optimal for
my ZFS configuration), and even with no (notably RAM)
contention from the VM, which by then was no longer running
from the older storage:
old:
# time dd if=balug-sda.old bs=131072 of=/dev/null
131072+0 records in
131072+0 records out
17179869184 bytes (17 GB) copied, 75.5826 s, 227 MB/s
real 1m15.589s
user 0m0.096s
sys 0m5.060s
#
new:
# time dd if=balug-sda bs=131072 of=/dev/null
131072+0 records in
131072+0 records out
17179869184 bytes (17 GB) copied, 40.3681 s, 426 MB/s
real 0m40.372s
user 0m0.072s
sys 0m19.776s
#
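128 KiB also happens to be ZFS's default recordsize; what a
given dataset actually uses can be checked with, e.g.
(dataset name again merely illustrative):
# zfs get recordsize tank/balug
#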
But it did have that space optimization (note physical vs. logical):
# ls -hls balug-sda.old
5.2G -rw------- 1 root root 16G Nov 5 16:10 balug-sda.old
#
Within the VM itself, we see much more than 5.2G used:
# df -h -t ext3; swapon -s
Filesystem Size Used Avail Use% Mounted on
/dev/dm-0 469M 237M 209M 54% /
/dev/dm-1 1.7G 1.4G 178M 89% /usr
/dev/sda1 244M 33M 201M 15% /boot
/dev/mapper/balug-home 1.1G 635M 432M 60% /home
/dev/mapper/balug-var 4.6G 3.9G 560M 88% /var
Filename        Type       Size    Used    Priority
/dev/dm-4 partition 131068 130944 -1
/dev/dm-9 partition 131068 130924 -2
/dev/dm-6 partition 131068 130816 -3
/dev/dm-7 partition 131068 94456 -4
/dev/dm-5 partition 131068 0 -5
/dev/dm-11 partition 131068 0 -6
/dev/dm-10 partition 131068 0 -7
/dev/dm-8 partition 131068 0 -8
#
That's a 16 GiB virtual drive, with about 9.1 G of
filesystem+swap space (the five filesystems total roughly
8.1 G, plus 8 swap partitions of ~128 MiB each, or about
1 GiB), in 5.2 G of physical storage.
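For any given file, that physical vs. logical difference can
also be seen by comparing du's actual vs. apparent sizes:
# du -h balug-sda.old; du -h --apparent-size balug-sda.old
#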
The VM could certainly still use more RAM - that's its
main performance bottleneck (given its workloads, software,
and such). The constraint on that is the physical hosts upon
which it runs. One of them ("vicki") only has 2 GiB of
physical RAM total ... hence the VM is still at 1 GiB (the
other physical host, where it's nominally located, has 8 GiB,
but that host does lots of other stuff too, so it also
doesn't exactly have lots of RAM to spare). Anyway, I may
come up with some more RAM resources in the future (may
shift the "alternate" physical host to one where more RAM is
available). Ideally I'd like to bump the balug VM up to
something in the 1.5 to 2 GiB range - possibly more ... but
I have to reasonably have that to spare from the physical
hosting environments first. The balug VM does also have and
use swap ... but that swap now also benefits from the
improved virtual drive performance - at least on its nominal
host.
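If/when that RAM does become available, the bump itself is
the easy part - e.g., if the VM is managed via libvirt,
something like (domain name merely illustrative):
# virsh setmaxmem balug 2097152 --config
# virsh setmem balug 2097152 --config
#
(sizes there default to KiB, so that's 2 GiB; with --config
it takes effect as of the domain's next boot)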