‘WorkLog’ PowerShell Module

As part of a new initiative, of sorts, I wanted a way to record daily accomplishments. It’s something I had thought about doing for quite some time, but I never had enough motivation to actually do anything about it. That said, I decided to revisit the idea, take action, and come up with some requirements for what a workable solution would look like (for me):

  • It needs to be easy to record entries (if it’s hard or cumbersome, it won’t have sustainability)
  • It needs to fit into my daily workflow
  • The format needs to be somewhat open/easy to move between different platforms (Mainly Windows & Mac OS)
  • If possible, pick a solution that can sharpen a skill-set, in the process

Given each of those, my final solution ended up being quite simple: GitHub and GitHub Flavored Markdown (GFM) files. This was ideal because it meets all of the requirements above:

  • It needs to be easy to record entries
    • Use Sublime Text to create/edit GFM files for each day.
  • It needs to fit into my daily workflow
    • I already use GitHub on a daily basis for source control of various scripts, projects, modules, etc.
  • The format needs to be somewhat open/easy to move between different platforms
    • Again, I already use GitHub as a central source control for all of my code repositories and have that synced between my Windows and Mac systems
  • If possible, pick a solution that can sharpen a skill-set, in the process
    • While familiar with GitHub and GFM files, I can always use some extra practice

Now, after I started building out how I wanted things set up and began creating entries in a log file, I realized that flipping over to a text editor to make entries was working fine, but I wanted deeper integration; why not use PowerShell?

What resulted was a PowerShell module comprised of 3 functions:

  • New-WorkLog
  • Add-WorkLog
  • Get-WorkLog

The main functionality I needed out of this module was:

  • Create a new Work Log file for each day
  • Standard file name format
  • Generate pre-formatted file layout (Title, fully formatted date and an initial bullet point)
  • Ability to add entries right from the PowerShell console
  • Ability to view the contents of the Work Log for the current day, from within the console
  • Basic support for bullet point indentation
  • Basic fail-safes so that files wouldn’t get overwritten.

Before I continue: you can get more information, as well as download, clone, or fork the module, from GitHub.

Let’s take a closer look at each function.



  • Function header

    • Pretty standard material to get things started. The main item worth noting is that I hard-coded the working directory and assigned it to the -Path parameter.
  • BEGIN Block

    • I need to get some date details and convert them to strings so that I can create a custom, standard file name as well as the date detail that will get stored in the actual file.
      • $now – Get the current date information and assign it to the $now variable, so that we can use it to construct our custom file format.
      • $dateFormat – Call the .ToString() method on the $now variable and format the output so it will look like ‘20150205’
      • $dateDay – Call the .ToString() method but specify ‘dddd’ in order to get the day of the week in long format, like ‘Thursday’
      • $fileName – Build out the actual file name string. I’m adding the value stored in $dateFormat; then an underscore ‘_’; then the value stored in $dateDay; then another underscore ‘_’; finally, the last piece, ‘WL.md’ (‘WL’ just stands for Work Log)
      • $filePath – Join the value stored in $Path with the value stored in $fileName, and the end result is the full path of the daily Work log file, which looks something like:

  • PROCESS Block


    • First, we create a conditional statement that tests for the existence of the log file: if it does not exist, we want to create it, but if it does exist, we want to display a message to the console saying that it already exists. Since I only want to take action if it DOES NOT exist, I start by using that criterion as the first validation, via the ‘-not’ operator.
    • I then create the desired file and use Write-Output to add the desired detail to the file. As a general comment, the spacing of the text does matter, to a certain degree, when creating these markdown files, since spaces and special characters actually mean something when the markdown files are read/displayed.
    • If the file does exist, use Write-Warning to display some text in the console letting me know that one has already been created. I used Write-Warning here because I wanted to avoid clouding up the ‘Output/Success’ data stream (there is a great article by June Blender on PowerShell streams). As a best practice, I ALWAYS try to keep informational messages that are only useful to the user running the cmdlet/script/function from going down the pipeline; output from the ‘Write-Output’ cmdlet uses the ‘Output/Success’ stream (stream 1). Write-Host is forbidden and kills kittens, so it should never be used 🙂 (it also sends nothing to any stream, which can be very problematic).
    • We don’t do anything in the END {} block, so we can move on to the next function
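Putting those pieces together, here is a minimal sketch of what New-WorkLog might look like. The working directory, the exact file layout, and the warning text are my assumptions for illustration; see the GitHub repository for the actual implementation.

```powershell
function New-WorkLog {
    [CmdletBinding()]
    param (
        # Hard-coded working directory assigned to the -Path parameter (assumed location)
        [string]$Path = 'C:\WorkLogs'
    )
    BEGIN {
        $now        = Get-Date                      # current date/time
        $dateFormat = $now.ToString('yyyyMMdd')     # e.g. '20150205'
        $dateDay    = $now.ToString('dddd')         # e.g. 'Thursday'
        $fileName   = "${dateFormat}_${dateDay}_WL.md"
        $filePath   = Join-Path -Path $Path -ChildPath $fileName
    }
    PROCESS {
        if (-not (Test-Path -Path $filePath)) {
            # Title, the fully formatted date, and an initial bullet point
            Write-Output '# Work Log'               | Out-File -FilePath $filePath
            Write-Output "## $($now.ToString('D'))" | Out-File -FilePath $filePath -Append
            Write-Output '* '                       | Out-File -FilePath $filePath -Append
        }
        else {
            Write-Warning "A Work Log for today already exists: $filePath"
        }
    }
    END { }
}
```

Note how the fail-safe works: the `-not (Test-Path ...)` check means the file is only ever written when it does not already exist, so an accidental second run can’t overwrite today’s log.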



  • Function Header

    • I needed some parameters to deal with the actual message/update to be entered into the Work Log, as well as an -Indent parameter to specify the level of indentation
      • -Message – This is the string data that will actually be appended into the Work Log file. I set the position at ‘0’ so that I can quickly add entries without having to specify ‘-Message’ before typing the message.
      • -Indent – This is the level of desired indentation and is only required if you want to indent the entry. I only want to offer 4 levels of indentation, so I specify a [ValidateSet()] attribute to the parameter with the only values I want to accept (1,2,3,4). These values get passed to a function that actually reads the value and performs the proper indentation spacing when it appends the entry to the Work Log file. More on that in the PROCESS block
  • BEGIN Block

    • I won’t go over the date variables, since we reviewed those in the ‘New-WorkLog’ function
    • I created the ‘Add-Indent’ function to handle the indentation spacing that results in a properly formatted, indented, markdown file.
      • I accept the -Indent parameter, if it was supplied, in the -Level parameter of this function
      • I use a basic ‘switch’ statement to define the spacing that gets outputted when the function is called. More on this functionality in the PROCESS block
  • PROCESS Block

    • I start with a conditional statement to check and see if the WorkLog file exists and if it does, then check to see if there was a desired indent level.
    • $indentMessage – Stores the properly indented message that will be written to the Work Log file. It merges the desired indent with the provided -Message string
    • If no indent is desired, it simply appends the message to the current Work Log file
    • If the Work Log file DOES NOT exist, we want to go ahead and create it. The main differences in the way the file gets created with ‘Add-WorkLog’ vs. ‘New-WorkLog’ are:
      • There is no asterisk (bullet point) created as part of the initial file creation; if the file doesn’t exist, we want our first entry to be the first bullet point.
      • Along the same lines, if we supplied an indent level and the file doesn’t exist, we don’t want to indent the first entry, so I just ignore indents if the file has to be created using this function
    • There is nothing in the END {} block, so we will move on to the next function
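A sketch of Add-WorkLog along the lines described above. Again, the directory, the file layout, and the exact spacing emitted by Add-Indent are assumptions on my part, not necessarily the module’s real values.

```powershell
function Add-WorkLog {
    [CmdletBinding()]
    param (
        # Position 0 so entries can be added without typing -Message first
        [Parameter(Mandatory = $true, Position = 0)]
        [string]$Message,

        # Only required if you want to indent; only these four values are accepted
        [ValidateSet(1, 2, 3, 4)]
        [int]$Indent,

        # Assumed working directory, matching New-WorkLog
        [string]$Path = 'C:\WorkLogs'
    )
    BEGIN {
        $now      = Get-Date
        $fileName = '{0}_{1}_WL.md' -f $now.ToString('yyyyMMdd'), $now.ToString('dddd')
        $filePath = Join-Path -Path $Path -ChildPath $fileName

        # Translate an indent level into the leading spaces markdown expects
        function Add-Indent {
            param ([int]$Level)
            switch ($Level) {
                1 { '  * ' }
                2 { '    * ' }
                3 { '      * ' }
                4 { '        * ' }
            }
        }
    }
    PROCESS {
        if (Test-Path -Path $filePath) {
            if ($Indent) {
                # Merge the desired indent with the provided -Message string
                $indentMessage = (Add-Indent -Level $Indent) + $Message
                Write-Output $indentMessage | Out-File -FilePath $filePath -Append
            }
            else {
                Write-Output "* $Message" | Out-File -FilePath $filePath -Append
            }
        }
        else {
            # File doesn't exist yet: create it, make this entry the first
            # bullet point, and ignore any requested indent
            Write-Output '# Work Log' | Out-File -FilePath $filePath
            Write-Output "* $Message" | Out-File -FilePath $filePath -Append
        }
    }
    END { }
}
```

With -Message at position 0, a quick entry is just `Add-WorkLog 'Closed ticket 42'`, and an indented one is `Add-WorkLog 'Follow-up note' -Indent 1`.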



  • Function Header

    • This isn’t any different from the start of the New-WorkLog function, so you can reference that if you have any questions
  • BEGIN Block
    • This is also no different than the New-WorkLog function; you can reference more detail above
  • PROCESS Block

    • This is the most straightforward function of the three. We are just using the Get-Content cmdlet to read in and display the contents of the current Work Log file.
    • I use the ‘-ReadCount 0’ parameter so that it reads the entire file contents in at one time, instead of line by line.
    • If the Work Log file has not been created, display a message to the console
    • There is no END {} block
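Since this one is so small, a sketch of the whole function fits in a few lines (the directory and warning text are assumptions, as before):

```powershell
function Get-WorkLog {
    [CmdletBinding()]
    param (
        # Assumed working directory, matching the other two functions
        [string]$Path = 'C:\WorkLogs'
    )
    BEGIN {
        $now      = Get-Date
        $fileName = '{0}_{1}_WL.md' -f $now.ToString('yyyyMMdd'), $now.ToString('dddd')
        $filePath = Join-Path -Path $Path -ChildPath $fileName
    }
    PROCESS {
        if (Test-Path -Path $filePath) {
            # -ReadCount 0 sends the entire file through in one operation
            Get-Content -Path $filePath -ReadCount 0
        }
        else {
            Write-Warning 'No Work Log file has been created for today.'
        }
    }
}
```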



Below are some example commands and the file/file format that they produce

Output file name

Output file content

End result
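The screenshots aren’t reproduced here, but the day-to-day usage flow is roughly this (the entry text is made up for illustration, and this assumes the module is already imported):

```powershell
New-WorkLog                                  # creates e.g. 20150205_Thursday_WL.md
Add-WorkLog 'Reviewed open change requests'  # appends a top-level bullet
Add-WorkLog 'Approved the firewall change' -Indent 1   # appends indented one level
Get-WorkLog                                  # displays today's log in the console
```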



Locked Out of vCenter 5.1

Recently, I was called in to help a client out with a vCenter 5.1 install and came across the somewhat common issue of being locked out of vCenter (which is most common after the upgrade process). After some investigation, it appeared that the proper Identity Sources were configured and SSO, in general, looked okay. After scratching our heads a bit, I decided to take a look inside the vCenter DB and verify account/group access. Since this was a clean installation, and not an upgrade, the only account in the vCenter DB was the one specified during the installation wizard. This was a SQL DB, so the table where this access can be found is “VMW.VPX_ACCESS”, within the vCenter DB.

Note: If you are going to attempt this procedure, make sure you have a good/valid backup of the entire DB that you can restore.

To verify/modify access:

1. Stop all vCenter Services
2. Use SQL Management Studio to connect to the DB
3. Expand the vCenter DB (in my case, the name is ‘VCDB’)
4. Expand the Tables and right click on the “VMW.VPX_ACCESS” table; select “Edit Top 200 Rows”.
5. You should see a single row (if this is a new install) with the group/account details that you setup as part of the install wizard, in the “PRINCIPAL” column.


6. Make any necessary changes to the account details and close the table
7. Restart vCenter services and see if access has been restored
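If you would rather check the table from PowerShell than browse it in Management Studio, a read-only query along these lines works too. The instance name below is a placeholder for your environment, and Invoke-Sqlcmd requires the SQL Server PowerShell tools to be installed.

```powershell
# Read-only look at the accounts/groups granted vCenter access;
# 'SQLSERVER\INSTANCE' is a placeholder for your SQL instance
Invoke-Sqlcmd -ServerInstance 'SQLSERVER\INSTANCE' -Database 'VCDB' `
    -Query 'SELECT * FROM VMW.VPX_ACCESS'
```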

In this particular scenario, it was found that the client entered the incorrect details during the install wizard, which is why no one was able to access vCenter.

PowerShell HTML Disk Space Report

This is a recent HTML Disk Space report that I created, which outputs a generic HTML report containing server disk/partition space details. I schedule it to run weekly, but obviously you can use it as you wish. In short, it reads in a list of servers from a text file, queries WMI for disk space details, uses some calculated expressions to format the space values, and then outputs the results into a report sorted in ascending order by the percent of free space per partition. Sample output, below.
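The core of the approach can be sketched like this. The file paths and property names are illustrative, not necessarily the exact ones from my script.

```powershell
# Read the server list and query each host's fixed disks via WMI
# ('C:\Scripts\servers.txt' is a placeholder path)
$servers = Get-Content -Path 'C:\Scripts\servers.txt'

$report = foreach ($server in $servers) {
    Get-WmiObject -Class Win32_LogicalDisk -ComputerName $server -Filter 'DriveType=3' |
        Select-Object @{n='Server';e={$_.SystemName}},
                      @{n='Drive';e={$_.DeviceID}},
                      @{n='SizeGB';e={[math]::Round($_.Size / 1GB, 2)}},
                      @{n='FreeGB';e={[math]::Round($_.FreeSpace / 1GB, 2)}},
                      @{n='PercentFree';e={[math]::Round(($_.FreeSpace / $_.Size) * 100, 2)}}
}

# Sort ascending by percent free (worst first) and emit a basic HTML report
$report | Sort-Object PercentFree |
    ConvertTo-Html -Title 'Disk Space Report' |
    Out-File -FilePath 'C:\Reports\DiskSpaceReport.html'
```

Sorting ascending on PercentFree puts the partitions closest to full at the top of the report, which is what you want to see first in a weekly email.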



Datastore Heartbeat Setup Inconsistencies – 5.1

After an initial configuration that I was doing on a new cluster setup, I noticed some inconsistencies in the options and wording between the web client and the vSphere client [fat client] in 5.1. Now, I have noticed little differences, here and there, between the two in other areas, but those differences were small enough that I could automatically pick up which options mapped to one another between the web and fat clients. However, when I was configuring datastore heartbeating, I noticed that the wording is much different in comparison; I had to read over it a few times and do a little testing to map the settings out 100%.

Consider the following Visio depiction of my test environment:


What I found was that despite HA being configured and managed at the cluster level, when configuring datastore heartbeating on Cluster-B using the web client, it reported the total number of hosts in the datacenter that had the same datastores mounted; i.e., it reported 4 hosts, since the hosts in Cluster-A also have datastores DS02 & DS03 mounted. In contrast, the fat client reported the ‘proper’ number of hosts in the cluster, which is 2.

I can’t speak to what the negative implications are, if any, but it’s something I felt was worth noting when configuring from the web client.

In regards to the wording being different, see the screen snip below that shows the two settings panes and the wording differences between the two sets of options. This screen snip also shows the differences in the number of hosts mounting the datastores.


VMware vSphere: Optimize and Scale – Day 3


Day 3 went well, and at this point we are really cruising through the material. The instructor did mention that, due to all of the students having a fairly advanced degree of previous knowledge, we were moving through the material faster than most classes. Because of this, we had more time to spend on conversations that weren’t necessarily part of the course material, such as real scenarios and challenges that we have encountered, which was a benefit for everyone.

Module 6: Storage Scalability (Cont.): Lesson 2 was about Storage I/O Control (SIOC), and we basically went over how to enable and configure it. Lesson 3 was Datastore Clusters and Storage DRS. We reviewed the relationship of host cluster to datastore cluster, the initial disk placement of guests when they are created, cloned or migrated, the datastore correlation detector, configuring migration thresholds, and sDRS affinity rules. The last section was a bit longer; it tied everything together and reviewed how sDRS and SIOC interact and complement each other. This was a good lesson, and it really followed the model for most of the modules and lessons thus far: your experience level really dictates how much you gained. There was some good general info that was well worth noting if you are on 4.x (or earlier) and not familiar with some of the enhancements that 5.1 brings.

Module 7: Storage Optimization: Lesson 1 was Storage Virtualization Concepts. In this lesson we reviewed some basics around storage technologies and protocols that were familiar to everyone. One thing we did notice is that the material in this module was fairly heavy on Fibre Channel. We both use and are familiar with NFS, and wished they had gone into a little more detail about best practices for NFS. What they did review for both were topics including SAN configuration, the performance impact of queueing on the storage array, LUN queue depth, what affects VMFS performance, SCSI reservations, VMFS vs. RDM, and virtual disk types. Lesson 2 dealt with monitoring storage activity, and we had a fairly good lab, which is described below. This was fairly straightforward, and we just used the vSphere performance charts for monitoring. Lesson 3 drilled into using the vMA/CLI to manage storage. We reviewed how to examine LUNs, manage storage paths, NAS/iSCSI storage, LUN masking, various vmkfstools commands, and the vscsistats command. Lesson 4 was troubleshooting storage performance problems. We reviewed a basic troubleshooting flow and talked about overloaded storage and what its causes are. There were a few scenario-based case examples, and after each one we talked about what the issues and resolutions could be. In summary, this module was excellent. It reviewed what most people look at when troubleshooting storage performance, and took it a step further into really understanding the storage I/O stack, what kind of impact it has on other components, and how to quickly identify what may be going on.

Module 8: CPU Optimization: This module was started on Day 3 and finished on Day 4; you can find the summary there.

Lab: Command-Line Storage Management: In this lab we used a VM to generate storage I/O and then used the vMA to run various commands against our host. The main commands used were vmkfstools and vscsistats. We used vmkfstools to practice managing volumes and virtual disks. Then, we used vscsistats to generate a histogram to display disk latency. This was a pretty good lab, and we got into some commands that we had not used yet, especially the histogram displays that you can generate from vscsistats.

Lab: Monitoring Storage Performance: In this lab we used a CentOS guest to run scripts that generated various types of I/O, and then practiced monitoring and identifying the storage performance using vSphere performance charts on both a local disk and a remote disk. There were 4 test cases that were meant to measure the following I/O profiles: 1. sequential writes to a remote disk; 2. random writes to a remote disk; 3. random reads from a remote disk; 4. random reads from a local disk. This was a good lab that really had you looking more at real-time monitoring and identification of storage I/O performance. We were also asked to write down performance counters and then compare them at the end. The result was a general understanding of where and how to identify performance issues, but student results were not that consistent.


VMware vSphere: Optimize and Scale – Day 2


Day two of the course turned out to be a pretty interesting day, and we made it through the modules and labs listed below. So far, each student is working through their own ESXi host 99% of the time. The discussions and interactions among all 6 students are proving to be an added benefit.

Module 4: Network Scalability
This module dealt with the inner workings of the VDS. It was broken up into a few lessons that focused on some of the configurable settings, including Network I/O Control/User-Defined Resource Pools, VLANs and PVLANs, Health Check, NetFlow, Port Mirroring, and backup and restore of the VDS. If you work with the VDS on a fairly regular basis, a lot of the material ended up being a good refresher, although there was some good detail on the new features that the 5.1 VDS offers.

Module 5: Network Optimization: The first lesson reviewed virtualized networking concepts. Some of what was reviewed was Network I/O Overhead, the vmxnet network adapter (as well as the other available network adapters) and the benefits of implementing and using it, TCP checksum offload, Jumbo Frames, DMA, NetQueue, SR-IOV (Single Root I/O Virtualization) and SplitRX mode. Lesson two dealt with monitoring network I/O activity. There was some brief review about how to use the built in charts in the vSphere client for monitoring traffic but the better material circled around using esxtop. Lesson three reviewed CLI Network Management. The main tools used in this lab were the vMA and esxtop. The material was good but the commands reviewed and ultimately used in the labs, were long and would only be useful if the vMA was your only way to manage your environment (for whatever reason) or if you were scripting changes to many environments.

Module 6: Storage Scalability: The first lesson went over storage APIs and Profile-Driven Storage. The section on APIs was short and went over VAAI and VASA. The profile-driven storage sections went over configuration of storage profiles and how to categorize and apply profiles to your environment. This is a feature I first heard about at VMworld, and I can see where it might have its uses. In summary, for those of you that don’t know, you basically build a framework/profile in vSphere and classify your storage with different tiers (e.g., SSD might be Gold, SAS might be Silver, and SATA might be Bronze), at which point you can then classify your VMs and run compliance checks. I can definitely see where VMware may add functionality for some type of automation around profile enforcement in the future, but as of today we don’t personally have a use for it. There was some detailed review of N_Port ID recommendations, configuring software iSCSI port binding, VMFS resignaturing, and the VMware Multipathing Plugin (MPP). Lesson two was covered on Day 3, so it will be summarized in that post.

Below are some of the notes from the labs that we completed on Day II which reflected the modules reviewed above.

Lab: vSphere Distributed Switch: This lab has you doing a variety of simple tasks through the GUI, such as creating a VDS, creating port groups, and migrating VMs to a VDS from a standard vSwitch. It’s essentially an introduction to the VDS and the coming labs.

Lab: Port Mirroring: This lab was pretty interesting, and I am sure it’s pretty easy to guess what it covers. In 5.1, you’re only able to enable port mirroring from the web interface, but you’re able to do a lot of neat things as far as tapping into the traffic of VMs communicating out of the VDS; you can do one-to-one tapping, one-to-many, and a few other interesting things. Overall, it was a great lab.

Lab: Monitoring Network Activity: This lab has you digging into network performance with things like esxtop and VMware performance charts through the vSphere Client. There was a lot to be learned here, and I found the lecture for this section exceptional. I definitely learned some good stuff, and one of the most important things the instructor stressed was to use the vmxnet3 NIC adapter whenever possible.

Lab: Command-Line Network Management: This lab has you doing a variety of things from the vMA via the CLI. I see tons of potential for scripting tools; unfortunately, most of the environments we deal with, although highly critical, aren’t large enough to get maximum value from the time we could spend developing scripts. However, what it did show is that this is the preferred way of executing CLI commands against the hosts, due to a more secure execution environment, since you do not need to enable SSH for commands executed from the vMA.

Lab: Manage Datastore Clusters: In this lab you basically create a datastore cluster and enable Storage DRS, move VMs to the cluster, enter the cluster into evacuation mode, manually run sDRS and apply recommendations (these recommendations can be based on either space or performance), and work a little with storage alarms. The lecture with this module was OK, but didn’t really delve deep into Storage DRS. I took the Install, Configure, Manage class on 4.1, so I am sure they cover this feature better in the 5.1 ICM.

VMware vSphere: Optimize and Scale – Day 1


This post, as well as other forthcoming posts, is intended to highlight some of the overhead material covered each day in the VMware Optimize & Scale 5.1 course offered through Global Knowledge. Let’s jump right in.

Lab Environment: The course provides each pair of students with a single pod that consists of two vCenter servers and two ESXi hosts. Not very far into the labs, the pod is condensed down to a single vCenter server. The ESXi hosts are all virtual and seem to run pretty well [inception model]. The lab environments are available at all hours during the course and are accessible from the internet. So far, it seems that each student completes nearly all lab activities independent of their lab partner; this may change later in the course as the labs become more complex. The lab environments also came pre-populated with VMs and scripts to generate I/O, so you’re able to put the environment under load; this allows the student a more realistic experience for the performance monitoring and troubleshooting labs.

Module 1: We reviewed the course objectives and had a bit of open discussion about what each of us wanted to get out of the course. The class size is small [6 people], and the instructor did a pretty good job individually engaging everyone. The course is really geared towards the VCAP5-DCA certification, but does give you eligibility for the VCP5-DCV certification.

Module 2: VMware Management Resources: Module 2 started off with deploying and configuring the vMA appliance [virtual management assistant]. After it was deployed we went into the authentication options/setup and then explored some of the syntax and supported commands, which are:
> esxcli
> resxtop
> svmotion
> vicfg-*
> esxcfg-* [deprecated & not reviewed]
> vifs
> vihostupdate
> vmkfstools
> vmware-cmd

There are way too many options within each command set to cram in, but they did go into each one and review the most commonly used. The lab for the vMA was good; it went through and had you execute various commands on your lab host.

Module 3: Performance in a Virtualized Environment: This module was pretty heavy on lecture and went fairly deep into the workings of the VMM [virtual machine monitor]. The back half of the module and the associated lab were all based around esxtop/resxtop. In the exercise, they had you perform various troubleshooting tasks using esxtop, including exporting performance data using esxtop batch mode, which kicked out a .csv file that we then imported into perfmon for performance analysis.

Module 4: Network Scalability: We didn’t get all the way through this one on the first day, so stay tuned for the Day 2 post, which will sum up this module.


After being away from my blog for some time, I am dusting off some old drafts and getting them posted. My apologies if some of the material seems dated; I just need to get it posted and made available. Thanks for tuning in; more interesting material to come.

Slow Mouse in Server 2008 VM Console

A while back I created a note about how to fix the slow mouse issue that sometimes happens even after installing VMware tools on a Server 2008 guest. The process is as follows:

1. Make sure VMware tools is up to date
2. You may need to update the VM hardware versions if this fix does not work.
3. On the guest, open up the device manager and expand the Display adapters.
4. Right click on Standard VGA Graphics Adapter and select Properties
5. Click the Driver tab.
6. Click the Update Driver button
7. Select Let me pick from a list of device drivers on my computer
8. Click have Disk… button
9. Click browse and go to:
C:\Program Files\Common Files\VMware\Drivers\wddm_video
10. Select vm3d and click open.
11. Click OK on the Install From Disk window
12. VMware SVGA 3D (Microsoft Corporation – WDDM) should be selected
13. Click Next and then Close
14. A reboot is required so select Yes.
15. You may also need to adjust your display acceleration to “full”.

Backing Up Host Configuration using PowerCLI

An often overlooked and less prioritized task in vSphere environments is backing up the actual host configuration. I figured it was worth a quick post, given that it’s a fairly simple process via PowerCLI. This command should be the same for all versions of vSphere/PowerCLI, but I have only tested on 4.1-5.1.

Aside: This is just the basic command; there are plenty of other ways to automate this or use it in conjunction with another PowerCLI script; I’ll leave that up to you.

1. Fire up PowerCLI
2. Connect to your vCenter server:
3. Expand the screen shot below to view the full command. The output will be a gzipped tar file in the location that you specify.
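While the full command is in the screenshot, the standard PowerCLI cmdlet for this task is Get-VMHostFirmware; the server and path names below are placeholders for your environment.

```powershell
# 2. Connect to your vCenter server (placeholder name)
Connect-VIServer -Server 'vcenter.example.local'

# 3. Back up the host configuration; the output is a gzipped tar file
#    written to the destination path you specify
Get-VMHostFirmware -VMHost 'esxi01.example.local' -BackupConfiguration -DestinationPath 'C:\Backups'
```

The resulting bundle can later be restored to a rebuilt host with the same cmdlet family, which is what makes this such a cheap insurance policy.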