Network Virtualization with Dell and VMware NSX

Hello and welcome to another Dell Networking video. I'm a technical marketing engineer at Dell, and today we're going to take a look at network virtualization with Dell infrastructure and VMware NSX. I'll demo network virtualization with VMware NSX over a reliable, cost-effective converged infrastructure. This presentation and demo will highlight a complete end-to-end converged infrastructure; an optimized, simplified, and robust underlay network; consolidated infrastructure to decrease cost; and flexibility and cost savings through speed of deployment, automation, and minimized downtime.

The success of server virtualization over the last decade has now brought to the forefront the need to also virtualize the network, or decouple network services from the underlying physical infrastructure. Server virtualization decoupled the operating system from the underlying hardware. This provided flexibility and allowed us to move VMs from one server to another with no downtime. However, the VMs are still tied to network services. For example, we could not move a VM on a server in rack 1 to a server in rack 2 unless the top-of-rack switch in rack 2 had the same network configuration and services that the top-of-rack switch in rack 1 had and that the VM utilized or needed, such as the correct VLAN, ACLs, or any other security configuration. The same benefits that made server virtualization incredibly popular and successful are now also driving network virtualization as part of the software-defined data center. Some of these benefits include speed of deployment, flexibility, agility, automation, minimized downtime, and normalization of the underlying hardware.

In this presentation I'll review some of the reference architectures and deployment details of VMware NSX running on top of a complete Dell infrastructure, and I'll also show a live demo of VMware NSX. There are two versions of NSX: NSX for vSphere-only environments and NSX for multi-hypervisor environments. In this presentation and demo we're going to focus only on NSX for vSphere environments. Dell Networking hardware will provide the physical infrastructure and network underlay. vSphere ESXi will be the hypervisor installed on the physical servers, and the NSX version will be the vSphere-only version, NSX 6.0. The cloud management platform can be VMware vCloud Automation Center or any cloud management system leveraging the NSX REST API.

The hardware NSX L2 gateway is currently only supported with NSX for multi-hypervisor environments. A hardware NSX L2 gateway, such as the Dell S6000 switch, allows for encapsulation and decapsulation of the VXLAN header within the ASIC in the hardware. For the NSX vSphere version, this encap/decap of the VXLAN header is done via a kernel-level module, allowing for close to line-rate performance.

Before we jump into the demo, let's take a brief look at the NSX components. As mentioned, the physical underlay is composed of Dell Networking switches. The data plane is composed of the ESXi hosts running the virtual distributed switch (VDS) with kernel-level modules installed. How this works is that NSX Manager is installed first as a virtual appliance on an ESXi host. Once installed, we can access the GUI via a web browser and link it to a vCenter Server via the vCenter IP address. The NSX Manager plugin is then automatically installed into vCenter and can be accessed via the vSphere Web Client. Next, the NSX Manager plugin can be used from the Web Client to install the kernel-level modules, like VXLAN, distributed logical router, and distributed firewall, into the VDS and onto each host.

An NSX controller cluster of at least three controller virtual appliances composes the control plane. The controllers are also installed via the NSX Manager plugin. With three controllers, the loss of one controller will still allow the control plane to function without issue. Even if the entire controller cluster is lost, data will keep forwarding, because data never flows through the controllers; however, network updates can't be made until at least one controller comes back online. The NSX Manager and controller cluster both sit on the management network. As mentioned, the NSX Manager is a virtual appliance that sits on the management network, and from the NSX Manager all the other components of the virtual environment can be installed. The NSX Manager also exposes the NSX REST API, which a cloud management platform can use to simplify management.

Please see the Network Virtualization with Dell Infrastructure and VMware NSX reference architecture document for more details on Dell Networking underlay topologies.

In this demo topology I have a completely converged architecture with an L2 underlay, leveraging Data Center Bridging (DCB) to provide a lossless Ethernet fabric. The Dell MXL blade switches act as top-of-rack or access switches for the blade servers, while Dell S4810s act as top-of-rack switches for the rack servers. Virtual Link Trunking, or Dell's VLT technology, is used to provide active-active links and multipathing from the MXLs and S4810s to the core or spine S6000 switches. VMware active-active teaming options are used from each ESXi host to the top-of-rack or access switch. A few infrastructure VLANs are still needed for management, storage, vMotion, VXLAN or transport, and edge traffic. The gateway for these infrastructure VLANs can be provided via VRRP or proxy ARP in order to achieve gateway redundancy or high availability; in this setup the gateways are implemented on the Dell S6000 spine switches via proxy ARP. All servers have two 10 GbE ports carrying converged traffic, and Data Center Bridging provides for a lossless Ethernet fabric with pause per priority, or class of service. The Dell EqualLogic iSCSI storage array supports DCB and is attached centrally to the S6000 spine switches.

In this setup you can see there are separate compute, management, and edge clusters, each with its own respective VDS switch. This is not a requirement, but it was implemented this way so that only the required configuration or NSX components are installed on each host. NSX components are installed at the cluster and VDS level. For example, VXLAN is configured at the VDS level and defines a transport VLAN for the VXLAN traffic; however, none of the management hosts need the VXLAN configuration or transport VLAN. For this reason, and other similar examples, I chose to implement a separate cluster and VDS switch for compute, management, and edge.

This management cluster here is composed of four rack servers. There are three NSX controllers, and each controller resides on a different physical server. The management cluster is also where I have the vCenter virtual appliance, the NSX Manager virtual appliance, and my DNS server. There are two edge servers in the edge cluster for high availability, and this is where the edge appliances reside, like the perimeter edge and the DLR control VM, which I'll discuss in more detail during the demo.

So with that, let's go ahead and jump to the demo. The first thing I'm going to do is go to my NSX Manager and hook it into vCenter. I come here to Manage vCenter Registration; remember, this is just a virtual appliance I installed on an ESXi host in my management cluster. You can see I've already set this up, but you just put your IP address here where it says vCenter Server to link it to your vCenter Server. It's a one-to-one relationship, and the NSX Manager plugin then gets installed on that vCenter; that's the most critical aspect here. There are some other things in here you can look at; for example, you can set up an NTP server, a syslog server, and some other features, but I'm just going to skip ahead, access vCenter via the vSphere Web Client, and look at the NSX Manager plugin.

OK, so this Networking & Security tab here is new; it isn't there traditionally in vCenter, right? This is the NSX Manager plugin. Now, before we jump to that, let me just show you real quick the hosts and clusters. So here, just like in my diagram, is my edge cluster, my management cluster, and then production site A and production site B

are basically my compute clusters. I just made two separate clusters to show that I can create logical switches spanning more than one cluster. And if I look at my VDS switches, you'll see, again just like the diagram, that I have a separate VDS switch for compute, edge, and management. Now, if you look at my compute VDS, you can see I have those infrastructure VLANs I mentioned before: VLAN 201 for my storage, VLAN 401 for my transport or VXLAN, and VLAN 301 for my vMotion. And then all these virtual wires here are actually logical switches that have been created; I'll explain that in more detail later. Here I have two storage port groups because I'm multipathing. Remember, I have two 10 GbE ports, shown here, so I have a separate port group for each, with each one linked to a separate uplink; my iSCSI traffic is multipathed over those two 10 GbE ports.

So with that, let me go back to the Networking & Security tab. Now, we've already linked the NSX Manager to vCenter and installed the plugin that way, right? That's what we're seeing here. The next thing I would need to do is go to Installation, go under Management, and this is where I would install the controllers. You can see I've already installed three controllers here, and again, each controller is on a separate server in that management cluster, sitting on the management network. You can see you can just click the plus sign and add another controller, but I've already installed three.

Once you've installed the controllers, the next thing you do is go to the Host Preparation tab, and this is where you install those NSX bits, like the distributed logical router, VXLAN, and distributed firewall; these are kernel-level modules within the VDS. So again, as I mentioned earlier, you install the NSX components at the cluster level here, and you can see the bits have already been installed and the distributed firewall is enabled. The VXLAN bits have also been installed, and you can see I've not installed any of that on the management cluster. As I mentioned before, my management hosts do not need any VXLAN modules or any NSX components, because that cluster holds just my NSX Manager, my vCenter, and the controllers; they're not going to be part of any logical switches or need any of that VXLAN configuration.

Next I go to Logical Network Preparation, and now I have to set up what's called a transport zone. The transport zone basically defines the boundaries of your logical switches: a logical switch can never span onto a host that's not part of the transport zone. So what I would do here is click the plus sign, select all the clusters I want to be part of that transport zone, name my transport zone, and then select the control plane mode. I won't go into the control plane mode details here; you can check out the Dell-VMware NSX reference architecture white paper on dell.com and on vmware.com, where I explain more about the multicast, unicast, and hybrid control plane modes. But basically, the VXLAN protocol, even before NSX, required multicast on the physical network, so you still have that mode available here: multicast mode. However, you can also use what's called unicast mode, and unicast mode takes away the requirement to have multicast on your physical network. Basically, the BUM traffic, that is, broadcast, unknown unicast, and multicast traffic, is replicated by the hosts. The host sending out the BUM traffic replicates it, and since it knows which other hosts are part of that logical switch, it sends a unicast copy out to each of the hosts that are part of that logical switch and need to receive that traffic. With this proprietary implementation, actually a very smart way of doing it, VMware has taken away that extra requirement of having multicast implemented on your physical underlay switches. So in unicast mode, you no longer need multicast enabled on the physical underlay switches.
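The host-based replication described above can be sketched as a small model. This is a toy illustration of the idea, not the actual NSX data structure; the VNI number and VTEP addresses are made up:

```python
# Sketch: how unicast control-plane mode replicates BUM (broadcast, unknown
# unicast, multicast) traffic without multicast on the physical underlay.
# Hypothetical mapping of logical switch (VNI) -> VTEPs (host tunnel endpoints)
VTEP_TABLE = {
    5000: ["10.10.40.11", "10.10.40.12", "10.10.40.13"],
}

def replicate_bum(vni, source_vtep):
    """Return the unicast copies a source host sends for one BUM frame.

    Instead of one multicast packet on the underlay, the source host sends a
    separate unicast-encapsulated copy to every other VTEP on the same VNI.
    """
    return [vtep for vtep in VTEP_TABLE[vni] if vtep != source_vtep]

copies = replicate_bum(5000, "10.10.40.11")
# One unicast copy per remote VTEP participating in VNI 5000
```

The trade-off is extra uplink bandwidth on the source host in exchange for a simpler underlay, which is usually a good deal when logical switches span only a handful of hosts.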

OK, so we're going to go ahead and cancel that, because I've already set it up here. Once all of that is done, you can actually start creating your logical switches. Here you can see I've already created five logical switches, and now would probably be a good time to show you the logical design. I've shown you the physical design; let me show you what the logical design looks like. So this is the logical design of my network. Here are the five logical switches I just showed: one, two, three, four, five. They're identified by what's called the VNI, or virtual network identifier.

Let me show you how a logical switch is actually created. All you have to do is come here, hit the plus sign, type in the name of the logical switch, select the transport zone (we already created this transport zone), and click OK, and it creates another logical switch. Then you can add VMs to that logical switch just by clicking this symbol right here, Add Virtual Machine. And if I click on the web tier, and even some of these other tiers, you can see there are already some virtual machines attached; here, for example, you see the web VM.

Now, whenever you create a logical switch, as I showed here with these five logical switches, what's actually happening behind the scenes is that a port group is being created. If I go back to my VDS switch, I can show you the port groups that were created. I'll go to my compute cluster; remember, I showed you those virtual wires before. I created five logical switches, so you should see five virtual wires, and as I scroll down here you can see one, two, three, four, five: exactly five virtual wires, each representing a different logical switch on your logical network. And you can see I've already added VMs to some of these logical switches. Once I add a VM to a logical switch, I'm basically abstracting away, or decoupling, the VM from the underlying hardware. It's no longer attached to network services on the physical network; now you're attaching the VM to network services on the logical network, because you're connecting the VM to the logical switch.

Now, going back to my Networking & Security tab, I can also look at the NSX Edge appliances that I've installed. Here I have a perimeter edge, or perimeter gateway, also known as the NSX services gateway, and it's residing on one of my edge servers. I mentioned I have two edge servers here, and the perimeter edge gateway resides on one of them, but you can install it in high availability mode, which installs an active and a standby perimeter edge; if one fails, it fails over to the other. In NSX 6.1 I'll actually have the ability to do active-active; right now in 6.0 it's active-standby. So you can have two edge servers for high availability and install an active perimeter edge on one server and a standby on the other, and in 6.1 you'll be able to do active-active. There's also what's called the distributed logical router (DLR) control VM, which is installed in the same way, active-standby, and if it fails, it fails over to the other for high availability.

Your perimeter edge, or NSX services gateway, peers with your external networks and communicates those external networks to your DLR control VM via the transit network, and this transit network is just a logical switch we have set up. So it peers with the external network and also peers with the DLR. The DLR control VM learns the external networks and then pushes them down to the DLR kernel-level modules, so you can think of the DLR control VM as basically a management VM for those kernel-level modules. The DLR control VM is also where the L2 bridge functionality can be configured.
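The VNI mentioned above is what identifies each logical switch inside the VXLAN encapsulation on the wire. As a rough illustration of the frame format (following the RFC 7348 header layout, not NSX-specific code), the 24-bit VNI sits in the upper three bytes of the last word of the 8-byte VXLAN header:

```python
# Sketch: packing and unpacking the 24-bit VNI in an 8-byte VXLAN header
# (RFC 7348 layout: flags byte, 3 reserved bytes, 3-byte VNI, 1 reserved byte).
# The VNI value 5001 is an arbitrary example, not one from the demo.
import struct

def build_vxlan_header(vni):
    """Return the 8-byte VXLAN header carrying the given VNI."""
    flags = 0x08  # I flag set: a valid VNI is present
    return struct.pack("!B3xI", flags, vni << 8)  # VNI in the top 24 bits

def parse_vni(header):
    """Extract the VNI from an 8-byte VXLAN header."""
    _, word = struct.unpack("!B3xI", header)
    return word >> 8

header = build_vxlan_header(5001)
# parse_vni(header) recovers 5001
```

This encap/decap is exactly the work the kernel-level VXLAN module does on each ESXi host (or the ASIC does in a hardware L2 gateway like the S6000).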

You go to the DLR control VM and configure your L2 bridge to allow bridging between logical networks and physical networks. So here's a physical server on VLAN 31; I can bridge between one of the logical networks and the physical network using the L2 bridge functionality, which is also implemented within the kernel module, though you configure and set it up through the DLR control VM. The perimeter edge also has edge capabilities such as edge firewall, NAT, and VPN, those network services that you would expect at the edge. But the main thing to recognize here is that the perimeter edge is a VM, whereas the DLR is a kernel-level module. The DLR provides routing within the kernel, so, for example, if I want to route between my web tier and my app tier, I never even have to leave the host: traffic comes up to the kernel-level module, which routes it to the app tier or the DB tier. I could also connect those tiers to my perimeter edge instead of the DLR, but then traffic would go all the way out to my edge server; the edge server has a VM sitting there, not a kernel-level module, and that VM would route it to my other logical switch.

So with that, let's head back over to the vSphere Web Client, and you can see I'm in the perimeter edge, or NSX services gateway. Here's the firewall, DHCP, NAT, routing, load balancer, and VPN; again, those network services you would expect at the edge. And this is the distributed logical router control VM; if I go in here, you can see I can also configure the distributed firewall, the distributed routing, and the L2 bridging functionality that I was mentioning. Here you can see I've set up the L2 bridge; if I go to my diagram, it's right here. I've actually bridged this VLAN 31 with one of the logical switches, this bridged app tier, so that a VM on this logical switch can communicate with my physical server. If I go here and just click Edit, you can see I selected the logical switch, bridged app tier, and I'm bridging it to my distributed virtual port group, bridge edge, which is basically a port group with VLAN 31 assigned to it; VLAN 31 because I'm communicating with this host on VLAN 31.

So with that, let's go back to the vSphere Web Client and do an interesting demo. The first thing I want to do is go to my perimeter edge and delete one of the firewall rules I put in earlier. This is how easy it is to delete a firewall rule: you just select it, click the X, and then hit Publish. What I want to do in this demo is console onto my VM on the web logical switch, which is on subnet 172.16.1.x, and ping my app VM on the app logical switch. What's going to happen is the traffic will go up to this DLR, stay within the host, and the DLR kernel-level module will route it to my app tier. So let's see that work: I go to my web tier, look at my virtual machines, and console onto this guy. OK, so this guy has the IP 172.16.1.1, and I'm going to ping 172.16.2.1, which is the app VM, and we can see we can reach it via the DLR. Now, if we do a traceroute on that, we should see only one hop, because it's a distributed logical router.

So let's give that a shot and see if we get what we expect. And there it is: again, it's what we expected, because it's a distributed logical router, so we only expect to see one hop on a traceroute, right? The traffic just goes from the web VM to the distributed logical router to the app logical switch and the app VM.

Now, I've actually gone ahead and already moved this web logical switch to the perimeter edge. I had it connected to the distributed logical router before; I've disconnected it and connected it to the perimeter edge. By connecting it to the perimeter edge, I now have all these external networks that can access my web VM, and what I'm going to do is create a firewall rule on my perimeter edge so I can no longer ping between my web and app VMs, just to show how I can create kind of a mini DMZ. So here I'm going to go to my perimeter edge and access the firewall, and you can see how diverse the options are. For the source, I'm going to select a logical switch; basically I'm saying traffic from this web logical switch is the source. And for the destination, I'm going to select the app logical switch. So I'm basically saying block traffic from the web logical switch to the app logical switch; I just have to come here, select Deny, and then publish that rule.

Now, you saw we were just able to ping the app VM from the web VM using the distributed logical router. What we did then is simply move the web logical switch to the perimeter edge, and now we're using a perimeter edge firewall rule to block the web VM from communicating with the app VM. We can see this rule is now published, so we should no longer be able to ping that VM, and you can see that indeed we cannot. Now I can simply come back here, delete the rule, publish that, and again I should be able to ping it.

If I go back and look at my logical network diagram, as I mentioned before, my perimeter edge is peering with my external networks, so I should be learning the 180, 190, and 200 networks. Let's go to my perimeter edge VM and confirm that I'm actually learning those networks. I'm going to go back out to Hosts and Clusters, go under the edge cluster, and there's my perimeter edge VM. I'm just going to console in; the syntax is going to be very similar to what you'd see on a standard switch or router: show ip route ospf. And as you can see, I am learning those 180, 190, and 200 networks.

So that's it for this presentation and demo. I hope it's been informative for you, and for more info don't forget to check out the Dell and VMware NSX reference architectures on dell.com.
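The deny rule published and then deleted in the demo can be modeled as a tiny first-match firewall evaluation. The rule fields and switch names here are illustrative, not the NSX rule schema:

```python
# Sketch: a toy model of the edge firewall rule from the demo (deny traffic
# sourced from the web logical switch destined to the app logical switch).
# Names like "web-ls"/"app-ls" and the default-allow behavior are assumptions
# made for illustration only.

RULES = [
    {"source": "web-ls", "destination": "app-ls", "action": "deny"},
]
DEFAULT_ACTION = "allow"  # assumed default for this sketch

def evaluate(source_ls, destination_ls, rules=RULES):
    """Return the action for a flow: first matching rule wins."""
    for rule in rules:
        if rule["source"] == source_ls and rule["destination"] == destination_ls:
            return rule["action"]
    return DEFAULT_ACTION

# With the rule published, web -> app pings are denied; deleting the rule
# (an empty rule list) restores the default, as the demo showed.
```

This mirrors the demo's sequence: publish the deny rule and the ping fails, delete it and republish, and the ping succeeds again.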