Virtual Networking Experience From The Field

1 December 2015

Cisco's vPath is becoming well established within Nexus 1000V customer networks. I recently found myself needing to add load-balancing and a three-tier firewall function to a VMware / FlexPod environment that had no existing hosting capability.

At first I presented a typical "VLAN sandwich" routed design to the customer, with traffic passing from the Internet through a web farm, then a middleware layer, and finally hitting a database/internal segment. However, the inherent complexities of this build are:

  • Plumbing of VLANs in/out of UCS
  • Server-to-VIP traffic that bypasses the load-balancer on the return flow
  • Preservation of client IPs in the web logs

Without any of this infrastructure already in place, we concluded that we could achieve the same results using a virtual network, namely Cisco vPath. The products in scope were:

  • Cisco Nexus 1000V switch, Advanced Edition
  • Cisco Virtual Security Gateway (VSG)
  • Citrix NetScaler 1000V
  • Cisco Prime Network Services Controller (NSC)

[Image: VLANs.jpg]

What's the experience like from a network engineering perspective?

For starters, the design work is drastically simplified. We put all of the "vServices" onto a common segment so that their control traffic can communicate. Existing servers are incorporated into the vService chain by creating new port-profiles, all on the existing flat VLAN structure.

A few tweaks were required to get the vService nodes, such as the NetScaler and the VSG, operational, but after that point it really was as simple as dropping VMs into their respective port-groups for them to get the appropriate treatment.
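To give a concrete flavour, the binding between a port-group and its service chain lives in a 1000V port-profile. The sketch below is a minimal, illustrative example only: the node names, addresses and profile names are hypothetical, and the exact vservice syntax varies between 1000V releases, so check the configuration guide for your version.

    vservice node VSG1 type vsg
      ip address 192.0.2.10
      adjacency l3
      fail-mode close

    vservice path web-chain
      node VSG1 profile web-sp order 10

    port-profile type vethernet WEB-SERVERS
      vmware port-group
      switchport mode access
      switchport access vlan 100
      org root/Tenant-A
      vservice path web-chain
      no shutdown
      state enabled

Once a profile like this exists, any VM the VMware admins drop into the WEB-SERVERS port-group is steered through the service chain automatically.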

Virtual Security Gateway

Prime NSC gives you an interface for creating security policies on the VSG. It's hardly a full firewall management application, but then it serves a simpler purpose: splitting your flat server network into zones, rather than providing a threat-defence layer at the edge.

You quickly realise the power of being able to match VMware attributes in the policy, such as VM name and VM port-group. These can be used to build up either specific or pattern-matched policies that then extend as the VMware admins build out the server infrastructure. It's a very easy tie-in: new servers are simply assigned a port-group that gives them all the appropriate firewall permissions. It's a shame that this "VM awareness" has not been extended to the NetScaler, which still relies on manual IP address definition for servers.
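To give a flavour of the attribute matching, the rules you end up with on the VSG look something like the sketch below (shown in CLI form; the names are hypothetical, and the exact attribute keywords and syntax depend on the VSG release, so treat this as illustrative rather than definitive):

    rule web-to-app
      condition 10 src.vm.name contains web
      condition 11 dst.vm.name contains app
      action permit

    policy tenant-a-policy
      rule web-to-app order 10

A new VM whose name matches the pattern picks up the permission automatically, with no IP addresses anywhere in the rule.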

NetScaler 1000V

The recent Cisco-Citrix partnership has resulted in a rebranded NetScaler product that is purchased and supported through Cisco. Support issues are escalated to Citrix when needed, but you still get a single point of contact for coordination.

The advantage is that the NetScaler software (now on version 10) brings a lot of maturity and rich functionality in terms of load-balancing, content switching, application firewalling, and so on. In our design, packets arrive from the Internet on a northbound interface and are then load-balanced to the back-end servers via vPath.
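On the server side the configuration is standard NetScaler CLI. As noted above, the back-end servers have to be defined by IP address by hand; the names and addresses below are of course made up:

    add server web01 10.0.100.11
    add server web02 10.0.100.12
    add service web01_http web01 HTTP 80
    add service web02_http web02 HTTP 80

    add lb vserver web_vip HTTP 203.0.113.10 80
    bind lb vserver web_vip web01_http
    bind lb vserver web_vip web02_http

The lb vserver is the VIP that Internet traffic hits on the northbound interface before being distributed to the bound services.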

Where are the packets flowing?

The setup process for the vservice nodes and the 1000V switch does not really illuminate the actual packet flow, and the existing documentation depicts vPath at a very high level, showing squiggly lines between the various functions.

I ended up having to create a specific diagram to explain the hop-by-hop flow to the client.
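At the risk of oversimplifying, the flow in our design is roughly as follows (a simplified reconstruction; the VEM-to-VSG leg applies to the first packet of a flow, after which the VEM caches the policy decision and switches subsequent packets in its fastpath):

    client
      -> NetScaler VIP (northbound interface)
      -> VEM intercepts the server-bound flow and adds the vPath encapsulation
      -> VSG evaluates the security policy (first packet of each flow)
      -> VEM delivers the packet to the server vNIC
    server reply
      -> VEM intercepts and steers the return flow back to the NetScaler via vPath
      -> NetScaler returns the response to the client

The return-path interception is what removes the classic problem of server-to-VIP traffic bypassing the load-balancer on the way back.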

Watch out for MTU

MTU is mentioned throughout the Cisco documentation targeted at the VSG firewall; however, neither Cisco nor Citrix has documented the lesser-known MTU restriction on the NetScaler VPX. The NetScaler VPX (the same product as the NetScaler 1000V) does not support jumbo frames, so the 66 bytes of vPath overhead shrink the effective pipe from the standard 1500 bytes down to 1434. Anything larger must be fragmented back at the server/client in order to fit through, and fragmentation is not offered on the 1000V Virtual Ethernet Module (VEM).

Packets above 1434 bytes cause "fragmentation needed" ICMP messages to be generated by the VEM back to the source of the traffic, so you must set up full reachability from the VEM to the rest of the network; take into account host routes on ESX, firewalls blocking ICMP, and so on. The first sign of trouble is slow-loading web pages, as TCP tries to use its windowing to work around the loss of every packet above 1434 bytes.
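A quick way to confirm the restriction from a Linux client is to send don't-fragment pings either side of the limit (on Windows the equivalent switches are -f and -l). With 20 bytes of IP header and 8 bytes of ICMP header, a 1406-byte payload produces a 1434-byte packet; the server address here is a placeholder:

    # 1406 + 28 bytes of headers = 1434 bytes on the wire: should succeed
    ping -M do -s 1406 <server-ip>

    # 1407 + 28 = 1435 bytes: should fail with "Frag needed"
    ping -M do -s 1407 <server-ip>

Clamping TCP MSS, or simply lowering the guest interface MTU to 1434, is the obvious workaround once you know the limit is there.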

In conclusion

The solution is highly flexible in terms of dropping servers into and out of policies for load-balancing and firewalling. Couple this with the ability to create firewall rules that are abstracted away from IP addresses, and you start to see the network administrator taken out of many of the typical day-to-day provisioning steps.

Want to talk about a similar challenge or solution? Call us on 01273 957500 or send us a message


Image credit: David/Flickr, Creative Commons