While building my home lab to use vSAN and NSX following Cormac Hogan's great instructions, I encountered an issue: the disks I am trying to use for vSAN are not showing as available. Under Disk Groups in "Cluster/Manage/Virtual SAN/Disk Management", I see that only one of my 3 hosts has 0/2 disks in use; the others show 0/1. My setup is this: I purchased 3 new 250GB Samsung SSD drives (one for each host), and am trying to re-use 6 older 750GB Seagate SATA drives. My first thought was: why does it say only 0/1 in use on two of the servers? I have 4 drives in that server (a 60GB boot drive, 1 SSD, and 2 SATA drives), so why doesn't it say 0/3 or 0/4? I noticed that in the bottom pane I can choose to show ineligible drives, and there I see the 3 drives I can't use. I understand why I can't use my Toshiba boot drive, but why do my 750GB Seagate drives also show as Ineligible?
![](https://lh3.googleusercontent.com/blogger_img_proxy/AEn0k_svDywqYlt_zq3iH7Oeo7N00iR8PdaLgkk1qL6Gt3j8HqtJWbknmRHieKpo1r3iFwnPKlwySUpJt3EmDfotuROqNvkz964UHt31D7o__KJEwpV9sEdYYUFGdEU=s0-d)
I played with enabling AHCI, but knowing there is a bug in the beta, I wanted to avoid it (see here: http://blogs.vmware.com/vsphere/2013/09/vsan-and-storage-controllers.html). Unfortunately, this did not change the situation. I finally realized that those drives might still have a legacy partition on them. After nuking the partitions on those drives, the disks now show up as eligible. I tried this first on my server smblab2, and you can see it shows 0/3 disks in use, which is what I would have expected originally. "Not in use" in this context basically means "eligible".
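In case it helps anyone, the partition cleanup can be done from the ESXi shell with `partedUtil`; a rough sketch (the `naa.*` device names below are placeholders for your own disks, so double-check them before deleting anything):

```shell
# List the disks the host can see, to find the device names of the old drives
ls /vmfs/devices/disks/

# Show the partition table on a suspect disk (placeholder device name)
partedUtil getptbl /vmfs/devices/disks/naa.5000c500xxxxxxxx

# Delete each leftover partition by its number (here, partition 1);
# once the disk has no partitions, it shows up as eligible for Virtual SAN
partedUtil delete /vmfs/devices/disks/naa.5000c500xxxxxxxx 1
```

Be careful to target only the old Seagate drives; running this against the boot disk would wipe the ESXi installation.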
![](https://lh3.googleusercontent.com/blogger_img_proxy/AEn0k_suxhncyqvtQPd8AHMGLLPO350C_afYZxE5C4zO9qHh0HtpyiZIQRjn5UZcn4w8YQ16InIY1OoThjx-98EbQ9IJAh9BQX82_JRiumhGCL6YPL9e22iZPBjagg=s0-d)
I was then able to Claim the disks for VSAN Use:
![](https://lh3.googleusercontent.com/blogger_img_proxy/AEn0k_sTXcl4CAeGmTQGoVA77eR_ELl0l3Xl-5vni9Euk8XTeHwYj2zunB4gF_ENC6RfdrOe67ToU2BbugRIWNRPRMicD_DUJBya8DBqM-cqGnzsV2yH0cJSQh99s62PY-aq=s0-d)
Then finally create the disk groups.
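For reference, the claim and disk-group steps can also be done from the command line instead of the Web Client; a sketch with placeholder device names (`esxcli vsan storage add` pairs one SSD with one or more capacity disks to form the group):

```shell
# Create a disk group by claiming one SSD and two SATA disks
# (device names below are placeholders; substitute your own)
esxcli vsan storage add --ssd naa.50025388xxxxxxxx \
  --disk naa.5000c500aaaaaaaa \
  --disk naa.5000c500bbbbbbbb

# Verify that the disks were claimed into the disk group
esxcli vsan storage list
```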
![](https://lh3.googleusercontent.com/blogger_img_proxy/AEn0k_sMzqMIgaZF5fcY5iwQ0C_p3KF8RF0GiIsRQ-XMK82ZOx_9jJFeYpk-FgCZhm9NLuisJ-45bmH2snz-FvePPZbuz8fJ9xnzJudczVsPUPAw5c727RPousxWcVUQNZIr=s0-d)
Many others suggest running vSAN in a virtual (nested) environment, which is great for learning; you can even get the experience through the Hands-on Labs (free 24/7 now!). But I wanted to do some performance testing, and for that I needed a physical environment. Now that I've gotten past my little problem, it's working great!