Day two of the course turned out to be a pretty interesting day, and we made it through the modules and labs listed below. So far, each student spends nearly all of the lab time working on their own ESXi host. The discussions and interactions among all 6 students are proving to be an added benefit.
Module 4: Network Scalability
This module dealt with the inner workings of the VDS. It was broken up into a few lessons that focused on some of the configurable settings, including Network I/O Control and user-defined resource pools, VLANs and PVLANs, Health Check, NetFlow, port mirroring, and backup and restore of the VDS. If you work with the VDS on a fairly regular basis, a lot of the material ends up being a good refresher, although there was some good detail on the new features that the 5.1 VDS offers.
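As a quick sanity check while working through VDS material like this, a host's view of the distributed switches it participates in can be listed from the ESXi shell. A minimal sketch; output obviously depends on your environment:

```shell
# List the distributed switches this ESXi host is a member of,
# including the VDS name, uplinks, and attached client ports
esxcli network vswitch dvs vmware list
```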
Module 5: Network Optimization: The first lesson reviewed virtualized networking concepts: Network I/O overhead, the vmxnet network adapter (as well as the other available network adapters) and the benefits of using it, TCP checksum offload, jumbo frames, DMA, NetQueue, SR-IOV (Single Root I/O Virtualization), and SplitRX mode. Lesson two dealt with monitoring network I/O activity. There was a brief review of the built-in charts in the vSphere Client for monitoring traffic, but the better material centered on using esxtop. Lesson three reviewed CLI network management. The main tools used in this lab were the vMA and esxtop. The material was good, but the commands reviewed and ultimately used in the labs were long and would only be useful if the vMA was your only way to manage your environment (for whatever reason) or if you were scripting changes across many environments.
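The esxtop piece lends itself to a couple of quick command sketches. Interactively, pressing 'n' inside esxtop switches to the network view; batch mode is handy when you want to capture samples for offline analysis (the 5-second interval and 10-sample count below are arbitrary choices, not recommendations):

```shell
# Interactive: launch esxtop, then press 'n' for the network view
esxtop

# Batch mode: -b batch output, -d delay between samples (seconds),
# -n number of iterations; the CSV can be opened in perfmon or a spreadsheet
esxtop -b -d 5 -n 10 > esxtop-network.csv
```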
Module 6: Storage Scalability: The first lesson went over storage APIs and Profile-Driven Storage. The section on APIs was short and covered VAAI and VASA. The Profile-Driven Storage sections went over configuring storage profiles and how to categorize and apply profiles to your environment. This is a feature I first heard about at VMworld, and I can see where it might have its uses. In summary, for those of you that don't know, you basically build a framework/profile in vSphere and classify your storage into different tiers (e.g. SSD might be gold, SAS might be silver, and SATA might be bronze), at which point you can classify your VMs and run compliance checks. I can definitely see VMware adding some type of automation around profile enforcement in the future, but as of today we don't personally have a use for it. There was also some detailed review of N_Port ID recommendations, configuring software iSCSI port binding, VMFS resignaturing, and the VMware Multipathing Plugin (MPP). Lesson two is covered on Day III, so it will be summarized in that post.
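Of the topics above, software iSCSI port binding and multipathing are the ones with a handy CLI angle. A rough sketch from an ESXi shell, where vmhba33 and vmk1 are placeholders for your software iSCSI adapter and VMkernel port:

```shell
# Bind a VMkernel port to the software iSCSI adapter
# (vmhba33 and vmk1 are placeholders; substitute your own adapter and vmknic)
esxcli iscsi networkportal add --adapter vmhba33 --nic vmk1

# Verify the binding took effect
esxcli iscsi networkportal list --adapter vmhba33

# Review which multipathing plugin owns each device and the
# path selection policy in use
esxcli storage nmp device list
```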
Below are some of the notes from the labs that we completed on Day II, which reflect the modules reviewed above.
Lab: vSphere Distributed Switch: This lab has you doing a variety of simple tasks through the GUI, such as creating a VDS, creating port groups, and migrating VMs to a VDS from a standard vSwitch. It's essentially an introduction to the VDS and the labs that follow.
Lab: Port Mirroring: This lab was pretty interesting, and I am sure it's pretty easy to guess what it covers. In 5.1, you're only able to enable port mirroring from the web interface, but you can do a lot of neat things as far as tapping into the traffic of VMs communicating over the VDS: one-to-one tapping, one-to-many, and a few other interesting things. Overall it was a great lab.
Lab: Monitoring Network Activity: This lab has you digging into network performance with tools like esxtop and the VMware performance charts in the vSphere Client. There was a lot to be learned here, and I found the lecture for this section exceptional. One of the most important points the instructor stressed was to use the vmxnet3 network adapter whenever possible.
Lab: Command Line Network Management: This lab has you doing a variety of things from the vMA via the CLI. I see tons of potential for scripting tools; unfortunately, most of the environments we deal with, although highly critical, aren't large enough to get maximum value from the time we could spend developing scripts. What the lab did show is that the vMA is the preferred way of executing CLI commands against hosts, since it provides a more secure execution environment: you do not need to enable SSH on the hosts for commands executed from the vMA.
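As a hedged illustration of the vMA workflow, with esxi01 standing in for a real host name in your environment:

```shell
# Set the vi-fastpass target once, then run commands against that
# host without re-entering credentials on every call
vifptarget -s esxi01
esxcli network ip interface list

# Alternatively, pass the target host explicitly on each command
esxcli --server esxi01 --username root network nic list
```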
Lab: Manage Datastore Clusters: In this lab you basically create a datastore cluster and enable Storage DRS, move VMs to the cluster, put a datastore into maintenance (evacuation) mode, manually run sDRS and apply its recommendations (which can be based on either space or performance), and work a little with storage alarms. The lecture with this module was OK, but didn't really delve deep into Storage DRS. I took the Install, Configure, Manage class for 4.1, so I am sure they cover this feature better in the 5.1 ICM.