Tuesday, September 11, 2012

Disable Fibre Channel HBA so I can connect to another Fabric

We are doing a forklift upgrade of our servers. As part of that, I would like to connect my ESX hosts to both fabrics for a while, so I can transfer the VMs over the storage network rather than the front-end Ethernet network. Because I don't want to make changes to the legacy fibre, and I don't want to connect the fabrics to each other any more than I must, I will disconnect the redundant fibre cable from each ESX host and connect it to the new fibre. Here are my steps.
First, go into vCenter and identify the correct HBA by WWN. I just go by the last octet; I want to re-use "86", so that is the HBA I am going to disconnect from the existing fibre.

[Screenshot: vSphere Client storage adapters view showing the HBA WWNs]

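As a cross-check, the WWNs are also visible from the ESXi shell. A minimal sketch, assuming ESXi 5.x's `esxcli storage core adapter list`; the sample output below is invented for illustration, and we filter on the last octet "86":

```shell
# Illustrative output in the style of `esxcli storage core adapter list`
# (the WWNs below are made up for this example).
sample_output='vmhba1  qla2xxx  link-up  fc.20000024ff45de85:21000024ff45de85
vmhba2  qla2xxx  link-up  fc.20000024ff45de86:21000024ff45de86'

# Pick out the adapter whose WWN ends in 86 -- the one we plan to move.
echo "$sample_output" | grep '86$' | awk '{print $1}'
# -> vmhba2
```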
Then go into the Properties of each datastore and click "Manage Paths". Change the path selection policy to "Fixed" (so we can control which path the host uses to access the current storage) and click "Change". Select the path you wish to keep (the one not using 86) and click "Preferred". After that, select the paths that 86 is using and choose "Disable". It should then look something like below; you can see the adapter listed for each path.

[Screenshot: the Manage Paths dialog with the preferred path set and the 86 paths Disabled]

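The same GUI steps can also be scripted with esxcli on ESXi 5.x. This is a dry-run sketch: the device NAA id and path runtime names below are hypothetical placeholders, and each command is prefixed with `echo` so it only prints what would be run; remove the `echo` and substitute your own values to actually apply it (note that `esxcli storage core path set` may want the full path UID rather than the runtime name, depending on build).

```shell
# Hypothetical placeholders -- replace with your own LUN and paths.
DEVICE="naa.60060160a0b02800c0ffee0000000001"   # the datastore's LUN id
KEEP_PATH="vmhba1:C0:T0:L0"                     # path to keep (not the 86 HBA)
DROP_PATH="vmhba2:C0:T0:L0"                     # path on the HBA being retired

# Dry run: each line is echoed, not executed.
echo esxcli storage nmp device set --device "$DEVICE" --psp VMW_PSP_FIXED
echo esxcli storage nmp psp fixed deviceconfig set --device "$DEVICE" --path "$KEEP_PATH"
echo esxcli storage core path set --state off --path "$DROP_PATH"
```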
Click Close and repeat for all datastores. You can also verify (or do) this work from the Configuration > Storage Adapters page, which will look something like below. Note that we are looking at HBA2 (86); under "Paths" you can see the status is Disabled.

[Screenshot: the Storage Adapters page, Paths tab, with vmhba2 (86) paths shown as Disabled]

After disabling all of the paths under each datastore, you may still have some paths left; those are just connections to the array, not connections to a LUN. You can disable those too if you choose; I do, because I try never to leave anything to chance. At this point the HBA is no longer in use and is ready to be re-used to connect to the new fibre.
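If you want to confirm from the shell that nothing on the retired HBA is still active before unplugging it, you can check its path states. A sketch over made-up output; on a live ESXi 5.x host the real listing comes from `esxcli storage core path list -A vmhba2`:

```shell
# Made-up excerpt in the style of `esxcli storage core path list -A vmhba2`;
# only the State lines matter for this check.
sample='   Runtime Name: vmhba2:C0:T0:L1
   State: off
   Runtime Name: vmhba2:C0:T1:L1
   State: off'

# Count paths still active on the adapter -- we want zero before unplugging.
# (`|| true` guards against grep's nonzero exit when nothing matches.)
active=$(echo "$sample" | grep -c 'State: active' || true)
echo "active paths on vmhba2: $active"
# -> active paths on vmhba2: 0
```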

In my case, I am using an HP server with HBAs installed in PCIe slots. My question is: which HBA is mapped to which PCIe slot? (I don't want to disconnect the wrong one, since I just removed redundancy.) I am running ESXi 5.x, so I have enabled SSH for troubleshooting. I tried logging into iLO, but it did not have PCIe slot card information, so I decided to go straight to the horse's mouth, so to speak: I SSH'd into my server and ran the following command:
esxcli hardware pci list
but that gave me a lot of information I couldn't use, so then I tried:
lspci
which gave me:
000:067:00.0 Serial bus controller: QLogic Corp ISP2432-based 4Gb Fibre Channel to PCI Express HBA [vmhba1]
000:070:00.0 Serial bus controller: QLogic Corp ISP2432-based 4Gb Fibre Channel to PCI Express HBA [vmhba2]
This is perfect, because the HP QuickSpecs for the DL385 G2 include the following info:
[QuickSpecs table: expansion slot to PCI bus number mapping]
You can see Expansion Slot #1 is bus number 70, which from the lspci output above is vmhba2, and Expansion Slot #2 is bus number 67, which is vmhba1.
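The bus-to-adapter pairing can also be pulled out of the lspci lines mechanically. A small awk sketch over the two lines above (nothing here is host-specific beyond that sample):

```shell
# The two lspci lines from above, verbatim.
lspci_out='000:067:00.0 Serial bus controller: QLogic Corp ISP2432-based 4Gb Fibre Channel to PCI Express HBA [vmhba1]
000:070:00.0 Serial bus controller: QLogic Corp ISP2432-based 4Gb Fibre Channel to PCI Express HBA [vmhba2]'

# Extract the middle segment of the PCI address (the bus number) and the
# bracketed vmhba name from the end of each line.
echo "$lspci_out" | awk '{
  bus = $1; sub(/^[0-9]+:/, "", bus); sub(/:.*/, "", bus)
  name = $NF; gsub(/[][]/, "", name)
  print "bus " bus " -> " name
}'
# -> bus 067 -> vmhba1
# -> bus 070 -> vmhba2
```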

1 comment:

Unknown said...

Hello Brian,

I was very impressed with your blog about disabling Fibre channel HBA, as I was thinking exactly on what you describe, when I stumbled on it.
I just wonder, you never finished! Were you able to connect another SAN storage, mount a datastore and vmotion VMs?
I see you wrote this back in 2012! Like 5 years ago! Anyway, I plan to do exactly this with 4 hosts to migrate all my VMs off an old CX500 into a less old NetApp V3140. Just not sure if I should pay attention to any other things.
Well, I appreciate your blog and would really be happy to know your thoughts, or the conclusion of your migration technique!

Thanks!

Mau