Monday, December 9, 2013

VMware vSAN IOPS testing

Take this with a grain of salt; these are only initial figures.  I am using a combination of IOMeter for Windows and fio for Linux.

Baseline redundancy and caching; no storage profiles used; only using vSAN as a datastore (I’ll test the other options later).

My vSAN is made up of 3 identical ESXi hosts, each with a single 250GB Samsung 840 SSD and two 750GB Seagate SATA drives. vSAN traffic has a dedicated single 1Gb connection; no jumbo frames are used. (Yes, there could be bottlenecks at several spots; I haven’t dug that deeply, as this is just a ‘first pass’ test.)
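If you want to sanity check the vSAN networking side before testing, a couple of esxcli commands from the ESXi shell will show it (a quick sketch, assuming the vSphere 5.5 esxcli vsan namespace is available; output fields may vary by build):

esxcli vsan network list            # shows which VMkernel interface is tagged for vSAN traffic
esxcli network ip interface list    # shows each vmk interface and its MTU (1500 here, since no jumbo frames)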

The end result of this VERY BASIC test is this:

vSAN random reads were an average of 31 times faster than a single SATA disk

vSAN random writes were an average of 9.1 times faster than a single SATA disk

 

More Details Below:

Regular single disk performance (just for a baseline before I begin vSAN testing)

Random Read (16k block size)

first test = 79 IOPS

second test = 79 IOPS

Random Write (16k block size)

first test = 127 IOPS

second test = 123 IOPS

vSAN disk performance with the same VM vMotioned to the vSAN datastore

Random Read (16k block size)

first test = 2440 IOPS

second test = 2472 IOPS

Random Write (16k block size)

first test = 1126 IOPS

second test = 1158 IOPS
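For reference, the multipliers quoted at the top are just the averages of the two runs divided against each other:

Random read: (2440 + 2472) / 2 = 2456 IOPS vs. (79 + 79) / 2 = 79 IOPS, roughly 31x

Random write: (1126 + 1158) / 2 = 1142 IOPS vs. (127 + 123) / 2 = 125 IOPS, roughly 9.1x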

Commands used in fio:

sudo fio --directory=/mnt/volume --name fio_test --direct=1 --rw=randread --bs=16k --size=1G --numjobs=3 --time_based --runtime=120 --group_reporting

sudo fio --directory=/mnt/volume --name fio_test --direct=1 --rw=randwrite --bs=16k --size=1G --numjobs=3 --time_based --runtime=120 --group_reporting
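The IOPS numbers above come straight from fio's summary output; if you just want to capture that figure on repeated runs, piping through grep is enough (a rough sketch, assuming the classic fio text output where the summary line contains "iops="):

sudo fio --directory=/mnt/volume --name fio_test --direct=1 --rw=randread --bs=16k --size=1G --numjobs=3 --time_based --runtime=120 --group_reporting | grep iops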

As I mentioned, I also used IOMeter in Windows; the initial results were very similar to the fio results above.  I will post those once I have the time to try each solution, dig deeper into identifying bottlenecks, get more detailed, add more hosts, etc…

Sunday, December 8, 2013

VMware vSphere 5.5 vSAN beta ineligible disks

While building my home lab to use vSAN and NSX following Cormac Hogan's great instructions, I've encountered an issue where the disks I am trying to use for vSAN are not showing as available. In "Cluster/Manage/Virtual SAN/Disk Management" under Disk Groups, I see that only one of my 3 hosts has 0/2 disks in use; the others show 0/1. My setup is this: I purchased 3 new 250GB Samsung SSD drives (one for each host), and am trying to re-use 6 older Seagate 750GB SATA drives. My first thought is: why does it only say 0/1 in use on two of the servers? I have 4 drives in that server (a 60GB boot drive, 1 SSD, and 2 SATA drives), so why doesn't it say 0/3 or 0/4? I noticed that in the bottom pane I can choose to show ineligible drives, and there I see the 3 drives I can't use. I understand why I can't use my Toshiba boot drive, but why do my 750GB Seagate drives also show as ineligible?
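For what it's worth, there is also a command-line view of this: I believe the vdq utility included with the vSAN bits will report per-disk eligibility and the reason a disk is ineligible, e.g.:

vdq -q    # dumps each local disk's vSAN eligibility state (and the reason when a disk is ineligible)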



I played with enabling AHCI, but knowing there is a bug with it in the beta, I wanted to avoid it. See here: http://blogs.vmware.com/vsphere/2013/09/vsan-and-storage-controllers.html. This unfortunately did not change the situation. I finally realized that those drives possibly still had a legacy partition on them. After nuking the partitions on those drives, the disks now show up as eligible. I tried this first on my server smblab2, and you can see it shows 0/3 disks in use, which is what I would have expected originally.  "Not in use" in this context basically means "eligible".
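In case it's useful to anyone else, one way to check and clear a stale partition table from the ESXi shell is with partedUtil; the naa name here is just a placeholder, so double-check you are pointing at the right disk before deleting anything:

partedUtil getptbl /vmfs/devices/disks/naa.xxxxxxxxxxxxxxxx     # show the existing partition table on the disk
partedUtil delete /vmfs/devices/disks/naa.xxxxxxxxxxxxxxxx 1    # delete partition 1 (repeat for each partition listed)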


I was then able to Claim the disks for vSAN use:


Then finally create the disk groups.
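For anyone who prefers the command line, the claim/disk-group step can also be done per host with esxcli (a rough sketch, assuming the 5.5 esxcli vsan namespace; the naa names are placeholders for the SSD and the two SATA drives):

esxcli vsan storage add --ssd naa.&lt;ssd-device&gt; --disks naa.&lt;sata-device-1&gt; --disks naa.&lt;sata-device-2&gt;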


Many others suggest running vSAN in a virtual environment, which is great for learning; you can even get the experience doing the Hands-on Labs (free 24/7 now!). But I wanted to do some performance testing, and for that I needed a physical environment. Now that I've gotten past my little problem, it's working great!