Locked Out of vCenter 5.1

Recently, I was called in to help a client with a vCenter 5.1 install and came across the fairly common issue of being locked out of vCenter (most often seen after an upgrade). After some investigation, it appeared the proper Identity Sources were configured and SSO, in general, looked okay. After scratching our heads a bit, I decided to take a look inside the vCenter DB and verify account/group access. Since this was a clean installation, and not an upgrade, the only account in the vCenter DB was the one specified during the installation wizard. This was a SQL DB, so the access entries can be found in the “VMW.VPX_ACCESS” table within the vCenter DB.

Note: If you are going to attempt this procedure, make sure you have a good/valid backup of the entire DB that you can restore.

To verify/modify access:

1. Stop all vCenter Services
2. Use SQL Management Studio to connect to the DB
3. Expand the vCenter DB (in my case, the name is ‘VCDB’)
4. Expand Tables, right-click the “VMW.VPX_ACCESS” table, and select “Edit Top 200 Rows”.
5. You should see a single row (if this is a new install) with the group/account details that you set up as part of the install wizard, in the “PRINCIPAL” column.

[Screenshot: the VPX_ACCESS table open in SQL Management Studio, showing the PRINCIPAL column]

6. Make any necessary changes to the account details and close the table
7. Restart vCenter services and see if access has been restored

In this particular scenario, it turned out the client had entered incorrect details during the install wizard, which is why no one was able to access vCenter.
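For reference, the check/fix looks something like the query below when run against the vCenter DB. This is only a sketch: ‘VCDB’ and the ‘DOMAIN\vCenter-Admins’ group are placeholders, and you should verify the column names against your own VPX_ACCESS table (and your backup) before changing anything.

-- Review the existing access entries
USE VCDB;
SELECT ID, PRINCIPAL, ROLE_ID, ENTITY_ID, FLAG FROM VMW.VPX_ACCESS;

-- If the PRINCIPAL was mistyped during the install wizard, correct it
UPDATE VMW.VPX_ACCESS
SET PRINCIPAL = 'DOMAIN\vCenter-Admins'
WHERE ID = 1;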

Datastore Heartbeat Setup Inconsistencies – 5.1

After an initial configuration I was doing on a new cluster setup, I noticed some inconsistencies in the options and wording between the web client and the vSphere client [fat client] in 5.1. I have noticed little differences here and there between the two in other areas, but those differences were small enough that it was easy to tell which options mapped to one another between the web and fat client. However, when I was configuring datastore heartbeating, I noticed the wording is quite different; I had to read it over a few times and do a little testing to map the settings out 100%.

Consider the following Visio depiction of my test environment:


[Diagram: vSphereDemo01 – a single datacenter with two clusters, Cluster-A and Cluster-B (two hosts each), whose hosts all have datastores DS02 and DS03 mounted]

What I found was that despite HA being configured and managed at the cluster level, when configuring datastore heartbeating on Cluster-B using the web client, it reported the total number of hosts in the datacenter that had the same datastores mounted; i.e., it reported 4 hosts, since the hosts in Cluster-A also have datastores DS02 & DS03 mounted. In contrast, the fat client reported the ‘proper’ number of hosts in the cluster, which is 2.

I can’t speak to what the negative implications are, if any, but it’s something I felt was worth noting when configuring from the web client.
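If you want to sanity-check what each client is counting, a quick PowerCLI comparison like the one below will show it; the cluster and datastore names are the ones from my diagram, so adjust them for your environment.

# Hosts that are actually members of the cluster (2 in my example)
Get-Cluster 'Cluster-B' | Get-VMHost

# All hosts in the inventory with DS02 mounted (4 in my example, since Cluster-A mounts it too)
Get-VMHost -Datastore (Get-Datastore 'DS02')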

As for the wording differences, the screen snip below shows the two settings panes and how the two sets of options are worded. It also shows the difference in the number of hosts mounting the datastores.

[Screenshot: DSHeartBeatSnip – the datastore heartbeating settings panes from the web client and the vSphere client, showing the wording differences and the differing host counts]

VMware vSphere: Optimize and Scale – Day 3


Day 3 went well, and at this point we are really cruising through the material. The instructor mentioned that because all of the students have a fairly advanced degree of previous knowledge, we were moving through the material faster than most classes. Because of this, we had more time to spend on conversations that weren’t necessarily part of the course material, such as real scenarios and challenges that we have encountered, which was a benefit for everyone.

Module 6: Storage Scalability (Cont): Lesson 2 was about Storage I/O Control (SIOC), and we basically went over how to enable and configure it. Lesson 3 was Datastore Clusters and Storage DRS. We reviewed the relationship of host cluster to datastore cluster, initial disk placement of guests when they are created, cloned, or migrated, the datastore correlation detector, configuring migration thresholds, and sDRS affinity rules. The last section was a bit longer, tied everything together, and reviewed how sDRS and SIOC interact and complement each other. This was a good lesson and really followed the model of most of the modules and lessons thus far: your experience level really dictates how much you gain. There was some good general info that is well worth noting if you are on 4.x (or earlier) and not familiar with some of the enhancements that 5.1 brings.
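As a quick aside, if you want to flip SIOC on outside of the GUI, PowerCLI can do it; this is just a sketch, and the datastore name and threshold value are placeholders.

# Enable Storage I/O Control on a datastore and set the congestion threshold (in ms)
Get-Datastore 'DS02' | Set-Datastore -StorageIOControlEnabled $true -CongestionThresholdMillisecond 30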

Module 7: Storage Optimization: Lesson 1 was Storage Virtualization Concepts. In this lesson we reviewed some basics around storage technologies and protocols that were familiar to everyone. One thing we did notice is that the material in this module was fairly heavy on Fibre Channel. We both use and are familiar with NFS and wished they went into a little more detail about NFS best practices. What they did review for both included SAN configuration, the performance impact of queueing on the storage array, LUN queue depth, what affects VMFS performance, SCSI reservations, VMFS vs. RDM, and virtual disk types. Lesson 2 dealt with monitoring storage activity, and we had a fairly good lab, which is described below. This was fairly straightforward, and we just used the vSphere performance charts for monitoring. Lesson 3 drilled into using the vMA/CLI to manage storage. We reviewed how to examine LUNs, manage storage paths, NAS/iSCSI storage, LUN masking, various vmkfstools commands, and the vscsiStats command. Lesson 4 was troubleshooting storage performance problems. We reviewed a basic troubleshooting flow and talked about overloaded storage and its causes. There were a few scenario-based case examples, and after each one we talked about what the issues and resolutions could be. In summary, this module was excellent. It reviewed what most people look at when troubleshooting storage performance and took it a step further into really understanding the storage I/O stack, the impact it has on other components, and how to quickly identify what may be going on.
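To give a flavor of the Lesson 3 content, most of the LUN and path examination is done with esxcli. Something like the following, run from the vMA (the host name is a placeholder), covers the basics:

# List the storage devices (LUNs) the host sees
esxcli --server esxi01.example.com storage core device list

# List the paths to those devices
esxcli --server esxi01.example.com storage core path list

# Show which multipathing plugin and path selection policy own each device
esxcli --server esxi01.example.com storage nmp device list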

Module 8: CPU Optimization: This module was started on Day 3 and finished on Day 4; you can find the summary there.

Lab: Command-Line Storage Management: In this lab we used a VM to generate storage I/O and then used the vMA to run various commands against our host. The main commands used were vmkfstools and vscsiStats. We used vmkfstools to practice managing volumes and virtual disks. Then we used vscsiStats to generate a histogram displaying disk latency. This was a pretty good lab, and we got into some commands that we had not used before, especially the histogram displays that you can generate from vscsiStats.
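For anyone who hasn’t used vscsiStats before, the general flow looks something like this (the world group ID is a placeholder; use the IDs returned by the first command):

# List running VMs and their world group IDs
vscsiStats -l

# Start collecting stats for a specific VM's world group
vscsiStats -s -w 12345

# Print a latency histogram for that world group, then stop collection
vscsiStats -p latency -w 12345
vscsiStats -x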

Lab: Monitoring Storage Performance: In this lab we used a CentOS guest to run various scripts that generated different types of I/O and then practiced monitoring and identifying the storage performance using vSphere performance charts, on both a local disk and a remote disk. There were 4 test cases meant to measure the following I/O profiles: 1. sequential writes to remote disk; 2. random writes to remote disk; 3. random reads to remote disk; 4. random reads to local disk. This was a good lab that really had you looking more at real-time monitoring and identification of storage I/O performance. We were also asked to write down performance counters and then compare them at the end. The result was a general understanding of where and how to identify performance issues, but student results were not that consistent.


VMware vSphere: Optimize and Scale – Day 2


Day two of the course turned out to be a pretty interesting day, and we made it through the modules and labs listed below. So far, each student works 99% of the time against their own ESXi host. The discussions and interactions among all 6 students are proving to be an added benefit.

Module 4: Network Scalability
This module dealt with the inner workings of the VDS. It was broken up into a few lessons that focused on some of the configurable settings, including Network I/O Control/user-defined resource pools, VLANs and PVLANs, Health Check, NetFlow, Port Mirroring, and backup and restore of the VDS. If you work with the VDS on a fairly regular basis, a lot of the material ended up being a good refresher, although there was some good detail on the new features that the 5.1 VDS offers.
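On the backup/restore piece: the newer PowerCLI 5.1 releases also include VDS cmdlets, so a config export can be scripted. A minimal sketch (the switch name and destination path are placeholders):

# Export the VDS configuration, including its port groups, to a backup file
Get-VDSwitch 'dvSwitch01' | Export-VDSwitch -Destination 'C:\Backups\dvSwitch01.zip'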

Module 5: Network Optimization: The first lesson reviewed virtualized networking concepts. Some of what was covered: network I/O overhead, the vmxnet network adapter (as well as the other available network adapters) and the benefits of using it, TCP checksum offload, jumbo frames, DMA, NetQueue, SR-IOV (Single Root I/O Virtualization), and SplitRX mode. Lesson two dealt with monitoring network I/O activity. There was some brief review of how to use the built-in charts in the vSphere client for monitoring traffic, but the better material centered on using esxtop. Lesson three reviewed CLI network management. The main tools used in this lab were the vMA and esxtop. The material was good, but the commands reviewed, and ultimately used in the labs, were long and would only be useful if the vMA was your only way to manage your environment (for whatever reason) or if you were scripting changes to many environments.

Module 6: Storage Scalability: The first lesson went over storage APIs and Profile-Driven Storage. The section on APIs was short and covered VAAI and VASA. The profile-driven storage sections covered configuration of storage profiles and how to categorize and apply profiles to your environment. This is a feature I first heard about at VMworld, and I can see where it might have its uses. In summary, for those of you that don’t know, you basically build a framework/profile in vSphere and classify your storage into different tiers (e.g., SSD might be gold, SAS might be silver, and SATA might be bronze), at which point you can classify your VMs and run compliance checks. I can definitely see VMware adding some type of automation around profile enforcement in the future, but as of today we don’t personally have a use for it. There was some detailed review of N_Port ID recommendations, configuring software iSCSI port binding, VMFS resignaturing, and the VMware Multipathing Plugin (MPP). Lesson two was covered on Day 3, so it is summarized in that post.

Below are some notes from the labs we completed on Day 2, which reflect the modules reviewed above.

Lab: vSphere Distributed Switch: This lab has you doing a variety of simple tasks through the GUI, such as creating a VDS, creating port groups, and migrating VMs to the VDS from a standard vSwitch. It’s essentially an introduction to the VDS and the coming labs.
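If you have a PowerCLI release with the VDS cmdlets, the same tasks can be scripted; the sketch below is not part of the lab, and all of the names are placeholders.

# Create a VDS in a datacenter, add a host to it, and create a port group
New-VDSwitch -Name 'dvSwitch01' -Location (Get-Datacenter 'DC01')
Add-VDSwitchVMHost -VDSwitch 'dvSwitch01' -VMHost 'esxi01.example.com'
New-VDPortgroup -VDSwitch 'dvSwitch01' -Name 'VM-Data' -VlanId 100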

Lab: Port Mirroring: This lab was pretty interesting, and I am sure it’s pretty easy to guess what it covers. In 5.1, you’re only able to enable port mirroring from the web interface, but you’re able to do a lot of neat things as far as tapping into the traffic of VMs communicating on the VDS; you can do one-to-one tapping, one-to-many, and a few other interesting things. Overall it was a great lab.

Lab: Monitoring Network Activity: This lab has you digging into network performance with things like esxtop and the VMware performance charts in the vSphere Client. There was a lot to be learned here, and I found the lecture for this section exceptional. I definitely learned some good stuff; one of the most important things the instructor stressed was to use the vmxnet3 NIC adapter whenever possible.
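If you want to poke at the esxtop side yourself, the network view is only a keystroke away once you are connected; the host name below is a placeholder.

# Connect resxtop to a host from the vMA, then press 'n' for the network view
resxtop --server esxi01.example.com
# In the network view, keep an eye on %DRPTX and %DRPRX for dropped packets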

Lab: Command-Line Network Management: This lab has you doing a variety of things from the vMA via the CLI. I see tons of potential for scripting tools; unfortunately, most of the environments we deal with, although highly critical, aren’t large enough to get maximum value from the time we could spend developing scripts. What it did show, however, is that this is the preferred way of executing CLI commands against hosts, since it provides a more secure execution environment: you do not need to enable SSH for commands executed from the vMA.
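For a taste of what those commands look like, here is a short sketch from the vMA using fastpass (assuming the host was already added with vifp addserver; the host name is a placeholder):

# Set the fastpass target so credentials aren't needed on every command
vifptarget -s esxi01.example.com

# A few of the network commands used in the lab
esxcli network nic list
esxcli network vswitch standard list
vicfg-vswitch -l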

Lab: Manage Datastore Clusters: In this lab you basically create a datastore cluster and enable Storage DRS, move VMs into the cluster, place a datastore into maintenance mode so it gets evacuated, manually run sDRS and apply recommendations (which can be based on either space or performance), and work a little with storage alarms. The lecture with this module was okay, but it didn’t really delve deep into Storage DRS. I took the Install, Configure, Manage class on 4.1, so I am sure they cover this feature better in the 5.1 ICM.
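If you would rather script it, PowerCLI 5.1 also has datastore cluster cmdlets; a rough sketch (the names are placeholders) of creating a cluster and turning on sDRS looks like this:

# Create a datastore cluster and enable Storage DRS in fully automated mode
New-DatastoreCluster -Name 'DSC01' -Location (Get-Datacenter 'DC01')
Set-DatastoreCluster -DatastoreCluster 'DSC01' -SdrsAutomationLevel FullyAutomated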

VMware vSphere: Optimize and Scale – Day 1


This post, as well as other forthcoming ones, is intended to highlight some of the material covered each day in the VMware vSphere: Optimize and Scale 5.1 course offered through Global Knowledge. Let’s jump right in.

Lab Environment: The course provides each pair of students with a single pod that consists of two vCenter servers and two ESXi hosts. Not very far into the labs, the pod is condensed down to a single vCenter server. The ESXi hosts are all virtual and seem to run pretty well [inception model]. The lab environments are available at all hours during the course and are accessible from the internet. So far it seems that each student completes nearly all lab activities independent of their lab partner; this may change later in the course as the labs become more complex. The lab environments also come pre-populated with VMs and scripts to generate I/O so you’re able to put the environment under load, which gives the student a more realistic experience for the performance monitoring and troubleshooting labs.

Module 1: We reviewed the course objectives and had a bit of open discussion about what each of us wanted to get out of the course. The class size is small [6 people], and the instructor did a pretty good job of individually engaging everyone. The course is really geared towards the VCAP5-DCA certification but does give you eligibility for the VCP5-DCV certification.

Module 2: VMware Management Resources: Module 2 started off with deploying and configuring the vMA appliance [vSphere Management Assistant]. After it was deployed, we went into the authentication options/setup and then explored some of the syntax and supported commands, which are:
> esxcli
> resxtop
> svmotion
> vicfg-*
> esxcfg-* [deprecated & not reviewed]
> vifs
> vihostupdate
> vmkfstools
> vmware-cmd

There are way too many options within each command set to cram in, but they did go into each and review the most commonly used. The lab for the vMA was good; it went through and had you execute various commands on your lab host.
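For context, adding a host target to the vMA and running a command against it looks roughly like this (the host name is a placeholder):

# Add the host to the vMA and set it as the fastpass target
vifp addserver esxi01.example.com
vifp listservers
vifptarget -s esxi01.example.com

# Commands now run against that host without prompting for credentials, e.g.:
esxcli system version get
vicfg-nics -l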

Module 3: Performance in a Virtualized Environment: This module was pretty heavy on lecture and went fairly deep into the workings of the VMM [virtual machine monitor]. The back half of the module and the associated lab were all based around esxtop/resxtop. In the exercise, they had you perform various troubleshooting tasks using esxtop, including exporting performance data using esxtop batch mode, which kicked out a .csv file that we then imported into perfmon for analysis.
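The batch mode capture is worth remembering; it looks something like this (the interval and iteration counts are just example values):

# Capture all counters every 10 seconds for 60 iterations into a CSV you can open in perfmon
esxtop -b -a -d 10 -n 60 > esxtop-capture.csv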

Module 4: Network Scalability: We didn’t get all the way through this one on the first day, so stay tuned for the Day 2 post, which will sum up this module.

Slow Mouse in Server 2008 VM Console

A while back I created a note about how to fix the slow mouse issue that sometimes happens even after installing VMware tools on a Server 2008 guest. The process is as follows:

1. Make sure VMware tools is up to date
2. You may need to update the VM hardware versions if this fix does not work.
3. On the guest, open up the device manager and expand the Display adapters.
4. Right click on Standard VGA Graphics Adapter and select Properties
5. Click the Driver tab.
6. Click the Update Driver button
7. Select Let me pick from a list of device drivers on my computer
8. Click the Have Disk… button
9. Click browse and go to:
C:\Program Files\Common Files\VMware\Drivers\wddm_video
10. Select vm3d and click open.
11. Click OK on the Install From Disk window
12. VMware SVGA 3D (Microsoft Corporation – WDDM) should be selected
13. Click Next and then Close
14. A reboot is required so select Yes.
15. You may also need to adjust your display acceleration to “full”.

Backing Up Host Configuration using PowerCLI

An often overlooked and lower-priority task in vSphere environments is backing up the actual host configuration. I figured it was worth a quick post, given that it’s a fairly simple process via PowerCLI. The command should be the same for all versions of vSphere/PowerCLI, but I have only tested it on 4.1–5.1.

Aside: This is just the basic command; there are plenty of other ways to automate this or use it in conjunction with another PowerCLI script; I’ll leave that up to you.

1. Fire up PowerCLI
2. Connect to your vCenter server with Connect-VIServer
3. Run the backup command shown in the example below. The output will be a gzipped tar file in the location that you specify.
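Here is a sketch of the command itself; the vCenter name and destination path are placeholders. Restoring a config is done with Set-VMHostFirmware, and the host needs to be in maintenance mode for that.

# Connect to vCenter, then back up every host's configuration to the given folder
Connect-VIServer vcenter01.example.com
Get-VMHost | Get-VMHostFirmware -BackupConfiguration -DestinationPath 'C:\HostConfigBackups'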

MBR Align VM – Enabling SSH

I got really sick and tired of the left click getting stuck in my MBR Align guest midway through a VMDK alignment. While I did see some fixes, all I really needed was SSH enabled so I could run the necessary utilities from the command line.

To enable SSH on the MBR Align SUSE VM, do the following:

1. Open the shell
2. Enter the command to start SSH (see the sketch below)

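Assuming the appliance uses the standard SUSE init scripts, starting the SSH daemon and keeping it enabled across reboots looks something like this:

# Start the SSH daemon and enable it at boot
/etc/init.d/sshd start
chkconfig sshd on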
That’s it; done.

Nexus 1000v Max-Ports

I have been meaning to post something about this for a little while. When deploying the Nexus 1000v, don’t forget to set the ‘max-ports’ setting for each of your port-profiles. By default, the 1000v sets each port-profile up with only 32 ports. This presents a problem when, for example, you try to deploy your 33rd guest and get an error saying you are out of virtual ports. Not to worry, the fix is easy and non-disruptive to anything already running in your environment.

The syntax looks like this:
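(The port-profile name and the new max-ports value below are placeholders; substitute your own port-profile and an appropriate port count.)

! From the VSM, raise max-ports on the affected port-profile and save the config
n1000v# configure terminal
n1000v(config)# port-profile type vethernet VM-Data
n1000v(config-port-prof)# max-ports 128
n1000v(config-port-prof)# end
n1000v# copy running-config startup-config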

You will see a reconfiguration task kick off in your vSphere client; that’s about it, the change is instant. See below for a screen snip example.