Preface
After having to configure VMware ESX and a Cisco switch to bond multiple NICs together into one big pipe, I decided to write up my notes so I wouldn't have to search the web for how-tos again.
ESX Configuration
I started by configuring the ESX server to bond 4 physical adapters to a virtual switch. This can be done with the esxcfg-vswitch command, but it's certainly easier with the Virtual Infrastructure Client: click Configuration -> Networking, then click Properties on the virtual switch you wish to aggregate. (Cisco calls this kind of aggregated link an EtherChannel.) Then click the Network Adapters tab and add the adapters you wish to use.
NOTE: Be SURE the service console you're using to configure ESX is *NOT* on the virtual switch you're editing. Unless you get the configuration perfectly correct the first time, you'll lock yourself out! You have been warned!
The ESX side is pretty easy. However, if you have VLANs you'll want to pay attention to the default VLAN and avoid enabling tagging for any service consoles, VMkernel ports or virtual machine networks that you want on the default VLAN. A sketch of the command-line route is below.
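For reference, here is roughly what the same setup looks like from the service console with esxcfg-vswitch. This is a minimal sketch, not my exact session: the vmnic numbers, the vSwitch names and the portgroup names are assumptions, so substitute whatever esxcfg-nics -l and esxcfg-vswitch -l report on your host.
# List the physical NICs and the existing virtual switches to find your names
esxcfg-nics -l
esxcfg-vswitch -l
# Link four physical adapters (assumed names) to the virtual switch
esxcfg-vswitch -L vmnic2 vSwitch1
esxcfg-vswitch -L vmnic3 vSwitch1
esxcfg-vswitch -L vmnic4 vSwitch1
esxcfg-vswitch -L vmnic5 vSwitch1
# Tag a portgroup with a VLAN ID (VLAN 4 here is just an example);
# leave the VLAN ID at 0 for anything you want on the default VLAN
esxcfg-vswitch -p "VM Network 4" -v 4 vSwitch1
esxcfg-vswitch -p "Service Console" -v 0 vSwitch0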
Cisco Configuration
The Cisco side requires more work. After searching the web for examples of how the switch should be configured for ESX, I found multiple blogs and articles, and each said something different. Some differences were no big deal, while others were significant. I started by configuring the Cisco switch the same way I had configured it for the PACS archiving head, which handles many DICOM query/retrieves throughout the day, then tweaked it to work specifically with the type of trunking that ESX uses. None of the articles I found online worked with our configuration, so once it finally worked I took notes. Here's what I ended up with:
interface Port-channel21
 description ESX1 4 port aggregate
 switchport
 switchport trunk encapsulation dot1q
 switchport trunk allowed vlan 1,3,4,9
 switchport mode trunk
 switchport nonegotiate
 spanning-tree portfast trunk
!
interface GigabitEthernet3/19
 description ESX1 4 port link aggregate
 switchport trunk encapsulation dot1q
 switchport trunk allowed vlan 1,3,4,9
 switchport mode trunk
 switchport nonegotiate
 spanning-tree portfast trunk
 channel-group 21 mode desirable
!
interface GigabitEthernet3/20
 description ESX1 4 port link aggregate
 switchport trunk encapsulation dot1q
 switchport trunk allowed vlan 1,3,4,9
 switchport mode trunk
 switchport nonegotiate
 spanning-tree portfast trunk
 channel-group 21 mode desirable
!
interface GigabitEthernet4/19
 description ESX1 4 port link aggregate
 switchport trunk encapsulation dot1q
 switchport trunk allowed vlan 1,3,4,9
 switchport mode trunk
 switchport nonegotiate
 spanning-tree portfast trunk
 channel-group 21 mode desirable
!
interface GigabitEthernet4/20
 description ESX1 4 port link aggregate
 switchport trunk encapsulation dot1q
 switchport trunk allowed vlan 1,3,4,9
 switchport mode trunk
 switchport nonegotiate
 spanning-tree portfast trunk
 channel-group 21 mode desirable
!
As you can see, this is for a 4-port aggregate link, where the ESX server has access to VLANs 1, 3, 4 and 9. The key details that I found necessary are the "dot1q", "nonegotiate" and "desirable" lines. 802.1q is the de facto standard for VLAN tagging, and it's what ESX speaks; if both sides aren't speaking the same language, it just won't work.
NOTE: If you make a change, it sometimes helps to issue "shutdown" on the port channel, wait 5 seconds, then issue "no shutdown". Then give the link anywhere from 20 seconds to a minute or two to settle at every level it needs to (physical link, virtual switch, virtual devices). If you still can't ping a service console or VMkernel port after 2 minutes, try again.
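For what it's worth, here is a rough sketch of that bounce-and-verify routine on the switch. The show commands are standard Catalyst IOS, though output varies a bit by version; Port-channel21 matches the config above.
configure terminal
interface Port-channel21
 shutdown
! wait ~5 seconds, then bring it back up
 no shutdown
 end
! Verify the channel bundled and the trunk came up with the right VLANs
show etherchannel summary
show interfaces trunk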
Summary
I hope this helps someone on their ventures into the virtual world. Of course, if you need help, our services are available for hire.