Monday, January 23, 2012

Set up VLAN tagging for ESXi on a Dell Blade Server using Dell PowerConnect M8024-k chassis switches

When using ESXi on a server it is good to have a lot of network cards, especially if you are using vCD (vCloud Director). This is a follow-up to my previous post about dividing a single Dell NIC into multiple NICs (partitioning).

Before you begin, you need to design your setup. For my example I want nine VLANs:

1010 – mgmt (the vCenter & vCM & vCell, etc…)

1020 – guests (external facing)

1030 – vMotion

1040 – NFS (not using FC)

4010 – vcdni1

4020 – vcdni2

4030 – vcdni3

4040 – vcdni4

4050 – vcdni5

Step 1, Log in and look around.

Log in to Dell OpenManage, open Switching/VLAN, and choose “VLAN Membership”.  Out of the box you will only see the default VLAN 1.  Assuming you are using multiple chassis with clusters that span them, like I am, tagged traffic will need to flow through your core switches between chassis.  On this default VLAN you will see the “LAGs” at the bottom left; the Current value is set to “F”, which is forbidden, meaning this default VLAN will not pass through the trunk into the core switch and across chassis.  For the default VLAN 1 this is good; for the other VLANs, we will change it.


Step 2, Add VLANs.

Under “VLAN Membership”, click “Add” near the top, type in your VLAN ID and name, then click Apply.  Repeat this step to create all of your VLANs.
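If you prefer the switch CLI over the OpenManage UI, the same VLANs can be created there.  Treat this as a sketch only: the exact syntax varies by PowerConnect firmware revision (older firmware uses a separate “vlan database” mode), and the VLAN IDs and names are just my example values.

```
console# configure
console(config)# vlan database
console(config-vlan)# vlan 1010,1020,1030,1040,4010,4020,4030,4040,4050
console(config-vlan)# exit
console(config)# interface vlan 1010
console(config-if-vlan1010)# name "mgmt"
console(config-if-vlan1010)# exit
```

Repeat the name assignment for each VLAN, then verify with “show vlan”.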


Step 3, Change the VLAN LAG type to Tagged.

Click on Detail after you have created your VLANs.  Choose your VLAN under “Show VLAN”, and under LAGs change the “Static” box from “F” to “T” by clicking it a couple of times.  Then click Apply, and repeat for every new VLAN you want to flow through your core network.
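The UI click-through above amounts to tagging the new VLANs on the chassis uplink LAG.  A rough CLI equivalent, assuming your uplink is port-channel 1 (check yours with “show interfaces port-channel”; on some firmware the commands are “switchport general allowed vlan add … tagged” instead of trunk mode):

```
console(config)# interface port-channel 1
console(config-if-Po1)# switchport mode trunk
console(config-if-Po1)# switchport trunk allowed vlan add 1010,1020,1030,1040,4010,4020,4030,4040,4050
console(config-if-Po1)# exit
```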


Step 4, Change VLAN Port Settings to “Trunk”

**Warning: before you do this, you must have iDRAC console access to your blade, or you may lose connectivity to it.**

Click on “Port Settings”, which is just below “VLAN Membership”.  You will now see the port detail page.  For each port Te1/0/1 through Te1/0/16, change the Port VLAN Mode from Access to Trunk, then click Apply.  This allows the blades to pass multiple VLAN networks to and from themselves.  Repeat this step for all 16 ports.
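Rather than repeating the change 16 times in the UI, the CLI can apply it to all of the internal server-facing ports at once with an interface range.  Again only a sketch; verify the interface naming on your firmware (and have that iDRAC console ready) before pasting:

```
console(config)# interface range tengigabitethernet 1/0/1-16
console(config-if)# switchport mode trunk
console(config-if)# exit
```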


Step 5, Modify ESXi to accommodate the new VLANs

Open an iDRAC session to your blade(s).  Press F2 to log in, choose “Configure Management Network”, choose “VLAN (optional)”, then type in your VLAN ID.


Hit Enter, then Esc.  It will ask if you want to apply the changes and restart the management network; say (Y)es.
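If you have SSH or the ESXi shell enabled on the host, the same management VLAN can be set from the command line instead of the DCUI.  A sketch, assuming the default “Management Network” port group name and my example VLAN 1010:

```
esxcli network vswitch standard portgroup set --portgroup-name="Management Network" --vlan-id=1010
```

As with the DCUI change, do this from the iDRAC console (or be prepared to lose your SSH session) since the management network will move VLANs out from under you.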

NOTE: In the Dell switch UI, make SURE to click the little floppy-disk icon in the upper right to save your work when you're done, or you'll get to repeat it after your next power outage like I did.
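That floppy-disk icon corresponds to copying the running config to the startup config; if you are working in the switch CLI instead, the equivalent is:

```
console# copy running-config startup-config
```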

You should be done.  Repeat these steps to get all of your blades online and using VLANs.


Jimbosan2012 said...


Would I be able to do the same scenario on a chassis that has the M6348? It doesn't seem to allow me to trunk the LAGs in step 3. Or is trunking only permissible on the M8024?

Brian Smith said...

I don't have any M6348s to test with, but it looks like this would work.

Credo said...

My blade has the 8024-k, however I'm unable to Tag the ports after I create the vlans. My only options are "F", "U", and blank.

Am I missing something ?!?!?!

Thank you in advance.


Viking62 said...

Hi, I have the same problem as Credo. I don't see a T option only U and F - any ideas?

Unknown said...

Brian, I'm digging your site, very cool. I tried to follow your steps but am finding the same issue Viking62 had, with only U and F as options. I tried doing this without it and saved the config. As soon as I did that, the switch had a b*tch fit. Trunking is set up on an external switch, so as soon as I had set up my trunk, it brought everything down (I wasn't everybody's favorite guy). :)
The reason I'm even going through this method is that I'm trying to figure out why I'm able to vMotion powered-down VMs but not live ones. According to VMware, everything is good on the vCenter side, which leads me in this direction. My status is below:

HOST to VM on same HOST = GOOD
HOST to VM on different HOST = BAD
VM to VM on the same HOST = BAD
VM to VM on different HOST = BAD

Any advice is greatly appreciated.