I’ve recently decided to move some of the virtual infrastructure in my lab onto Fedora 17. I’ll be running my VMs on KVM, using libvirt to manage them. The great thing about this setup is that, in theory, by using libvirt I can easily move my infrastructure to something like oVirt or OpenStack in the future. For now, though, I plan to simply use a combination of virsh and virt-manager. Getting Fedora 17 onto my host was quite easy, so I won’t cover that here. The next thing I wanted to set up was the networking layer.

The Lay of the Land

Before diving into the details of my virtual networking configuration, here is some background on what my lab looks like. I have two 3560 switches, connected via a 4-port port-channel using optical connections. I trunk all of my VLANs between the switches and let the 3560s hash based on src-dst-ip. The server I am using for this setup has 6 NICs, all Intel Gigabit capable, and all 6 ports are connected to a single 3560.
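For reference, the hashing behavior is controlled by a single global command on each switch (this is the only piece of the inter-switch configuration I’ll show here):

    port-channel load-balance src-dst-ip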

Virtual Networking on Fedora

I made the choice early on to use Open vSwitch for my virtual networking. It has been part of Fedora since Fedora 16, and the Beefy Miracle release (17) also includes this fine piece of software. I use a variety of VLANs in my lab, so parts of the configuration require trunk ports. The first thing I decided to do was trunk some of my management VLANs to a bond interface. The VLANs in question were 64, 66, and 67. I used 2 physical ports for this bond interface and set up the port-channel as LACP.

The configuration on the 3560 end looks like this:

Configuration on the 3560 end of the OVS LACP channel
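In essence, the switch side is a two-port LACP EtherChannel trunking those three VLANs. A rough sketch of that configuration follows; the interface numbers and the channel-group number are just examples, so adjust them for your own ports:

    interface Port-channel1
     switchport trunk encapsulation dot1q
     switchport trunk allowed vlan 64,66,67
     switchport mode trunk
    !
    interface GigabitEthernet0/1
     switchport trunk encapsulation dot1q
     switchport trunk allowed vlan 64,66,67
     switchport mode trunk
     channel-group 1 mode active
    !
    interface GigabitEthernet0/2
     switchport trunk encapsulation dot1q
     switchport trunk allowed vlan 64,66,67
     switchport mode trunk
     channel-group 1 mode active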

On the OVS side of things, here is what the configuration looks like in the /etc/sysconfig/network-scripts/ifcfg-bond0 configuration file. Please note the BOND_IFACES line; this is where you list the physical interfaces you want to be part of your bond.

bond0 configuration
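Here is a sketch of that file, together with the ifcfg file for the OVS bridge the bond attaches to. The bridge name (ovsbr0) and the NIC names (em1 and em2) are placeholders for whatever your system uses:

    # /etc/sysconfig/network-scripts/ifcfg-ovsbr0
    DEVICE=ovsbr0
    DEVICETYPE=ovs
    TYPE=OVSBridge
    ONBOOT=yes
    BOOTPROTO=none
    HOTPLUG=no

    # /etc/sysconfig/network-scripts/ifcfg-bond0
    DEVICE=bond0
    DEVICETYPE=ovs
    TYPE=OVSBond
    OVS_BRIDGE=ovsbr0
    BOND_IFACES="em1 em2"
    OVS_OPTIONS="bond_mode=balance-tcp lacp=active trunks=64,66,67"
    ONBOOT=yes
    BOOTPROTO=none
    HOTPLUG=no

In my experience, each physical NIC listed in BOND_IFACES also wants a bare ifcfg file of its own (just DEVICE=, ONBOOT=yes, and HOTPLUG=no) so the initscripts will bring it up.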

The configuration, as shown by ovs-vsctl, looks like this:

ovs-vsctl output
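If you prefer to build (or double-check) the same thing by hand rather than through ifcfg files, the equivalent ovs-vsctl commands are roughly these, using the same assumed names as above:

    ovs-vsctl add-br ovsbr0
    ovs-vsctl add-bond ovsbr0 bond0 em1 em2 bond_mode=balance-tcp lacp=active trunks=64,66,67
    ovs-vsctl show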

Once you have the above working, you should have the physical side of the OVS bridge in place. The next step is to configure your management interface. For this, I simply created a mgmt0 interface, added it to the bridge, and set up a configuration file so it is brought up during system boot. You can see what this looks like in the ovs-vsctl output above. Below you will find the actual /etc/sysconfig/network-scripts/ifcfg-mgmt0 file:

ifcfg-mgmt0 configuration
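A sketch of that file is below. The VLAN tag and IP address are placeholders; point tag= at whichever of your management VLANs mgmt0 should live on:

    # /etc/sysconfig/network-scripts/ifcfg-mgmt0
    DEVICE=mgmt0
    DEVICETYPE=ovs
    TYPE=OVSIntPort
    OVS_BRIDGE=ovsbr0
    # tag=64 is just an example; use your management VLAN here
    OVS_OPTIONS="tag=64"
    ONBOOT=yes
    BOOTPROTO=static
    # placeholder addressing
    IPADDR=192.0.2.10
    NETMASK=255.255.255.0
    HOTPLUG=no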

One Additional Change

There is one additional change needed here. I disabled NetworkManager and went with the old way of configuring networking. To do this, follow these instructions:

Before enabling the old network configuration, make a single change to the openvswitch systemd unit file. Edit /usr/lib/systemd/system/openvswitch.service, remove “network.target” from the “After=” line, and add a “Before=” line containing “network.target”. The end result is something like this:
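Only the [Unit] section matters here; the rest of the file stays exactly as it shipped, and the surrounding lines below are just representative:

    [Unit]
    Description=Open vSwitch
    Before=network.target
    After=syslog.target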

Once you are done with this, make sure to enable both network.service and openvswitch.service:
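Something along these lines does it. On Fedora 17, network.service is still a legacy initscript, so systemctl hands the enable off to chkconfig behind the scenes:

    systemctl disable NetworkManager.service
    systemctl stop NetworkManager.service
    systemctl enable openvswitch.service
    systemctl enable network.service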

Conclusion

The end result of all of the above is that I now have an LACP port-channel between my 3560 and the OVS bridge on the host. I have trunked some VLANs across it and set up my management interface on the host as a virtual port on the same OVS bridge. This all works really well and provides robust networking on the Linux host. Future posts will show how to add virtual machines to this OVS bridge!