PowerCLI: Balance LUN paths for a Cluster

As I’ve been managing more and more infrastructures using fibre channel storage, I’ve found that it’s been somewhat difficult to keep the LUN paths to each host balanced. By balanced, I mean that each LUN has a number of paths to each host, and I want to make sure that, for example, LUN 10 on each of the hosts is using path A as the primary and path B as the standby, LUN 11 uses path B as the primary, LUN 12 goes back to path A, and so on.

It so happens that I’m using a DMX-4 for storage, and our policy is to use the Fixed path selection policy. I realize that Round Robin would make this entire script moot…well, except for making sure that the PSP is correct. I also realize that PowerPath would be the ideal solution for EMC storage, but we don’t use it…that’s a story for another day.

This script is, admittedly, long…longer than I expected it to be. The original inspiration for this script came from Justin Emerson’s very functional and succinct script; however, I was not satisfied with the way LUNs were balanced. His script queries the host for LUNs, sorts them by canonical name, and round-robins the paths based on the number of paths present for the first LUN.

This works well, so long as all the LUNs are present on all the hosts and they all have the same number of paths. I can only presume that he assumes that those cases have already been checked for, and fixed, prior to execution. I wanted to do that all in one script.

Additionally, and it’s rather petty, I wanted the LUNs to be balanced by their LUN identifier rather than their canonical name…they don’t always follow the same order, and in the case of my hosts with two HBAs (and consequently, two paths per LUN), I wanted all odd LUNs to use one path as the primary and all even LUNs to use the other. Justin’s script does an excellent job of ensuring that the paths are evenly distributed, as you will end up with the same number on each, but not in the pretty fashion I desired.
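The odd/even idea can be sketched in a few lines of PowerCLI. This is a minimal illustration under stated assumptions, not the full script: it assumes exactly two paths per LUN, a cluster name of "MyCluster", and that the LUN number can be parsed from the runtime name; adjust all of those for your environment.

```powershell
# Sketch only: set Fixed-policy preferred paths by LUN number parity
foreach ($vmhost in Get-Cluster "MyCluster" | Get-VMHost) {
    foreach ($lun in Get-ScsiLun -VmHost $vmhost -LunType disk) {
        $paths = Get-ScsiLunPath -ScsiLun $lun | Sort-Object -Property Name
        if ($paths.Count -ne 2) { continue }   # skip LUNs that aren't uniformly pathed

        # RuntimeName looks like vmhba1:C0:T0:L10 -- take the number after the final 'L'
        $lunNumber = [int]($lun.RuntimeName -replace '.*L(\d+)$', '$1')

        # Odd LUNs prefer the first path, even LUNs the second
        $preferred = if ($lunNumber % 2) { $paths[0] } else { $paths[1] }
        Set-ScsiLun -ScsiLun $lun -MultipathPolicy Fixed -PreferredPath $preferred
    }
}
```

The parity test is what gives the "pretty" distribution described above: every host ends up preferring the same path for the same LUN, and the load still splits evenly across the two HBAs.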

Also, thank you to Glenn, who helped me “powershellize” this script…my PowerShell looks and reads like Perl, and therefore doesn’t use a lot of the optimizations that PoSH brings…such as automatic parameter handling and other niceties.

So, without further ado…


PowerCLI: Show HBA Path Status

When you have a lot of hosts, with a lot of LUNs, it can be difficult to keep abreast of the status of the paths for them. I have encountered issues where a path was unknowingly marked as dead, plus it’s generally a good idea to ensure that your storage paths are actually available.

Consequently, I searched for a PowerCLI script that would give me a simple report of the status of each of the LUN paths to each of the HBAs on my hosts. I found John Milner’s post to be very helpful, and it gave me exactly the results that I wanted. However, it took forever to execute…almost 30 minutes for just one of my clusters (to be fair, that cluster has 12 hosts with > 100 LUNs and two paths to each).

Using his script as an example, and keeping a good bit of the formatting code, I have modified his script to use views of the host objects and cull the information from there. This makes it significantly faster…what took 28 minutes before now takes about 30 seconds.
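The speed-up comes almost entirely from replacing per-object cmdlet calls with a single `Get-View` query against the host views. A hedged sketch of that approach (the property names follow the vSphere API’s `HostMultipathInfo` structure; the output formatting here is simplified compared to the script described above):

```powershell
# Sketch: report path state per LUN/HBA using host views instead of per-LUN cmdlets
$hostViews = Get-View -ViewType HostSystem -Property Name, Config.StorageDevice
foreach ($hv in $hostViews) {
    foreach ($lun in $hv.Config.StorageDevice.MultipathInfo.Lun) {
        foreach ($path in $lun.Path) {
            [PSCustomObject]@{
                Host  = $hv.Name
                Lun   = $lun.Id
                Path  = $path.Name
                State = $path.PathState   # active / standby / dead / disabled
            }
        }
    }
}
```

Because `Get-View` pulls only the two named properties for every host in one round trip, the runtime scales with the number of hosts rather than the number of LUN paths, which is where the 28-minutes-to-30-seconds difference comes from.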


Perl Toolkit: Check ESX(i) host time

I had an issue recently where a single ESXi host’s clock was incorrect. The administrator had never set the clock initially, so NTP never kept it in sync because it was too far off to begin with.

Since I’ve got a large number of hosts, and the idea of clicking through to each one in the VI Client and checking the Configuration tab didn’t appeal to me, I immediately turned to PowerCLI. Naturally, one of Luc’s scripts was the top search result.

That solved my immediate need to check the hosts, but I also wanted to set up some general monitoring. Since my monitoring infrastructure is comprised primarily of a Linux Nagios host, PowerCLI couldn’t help. So, I did the next best thing and ported Luc’s script to Perl.

Below is the result of that porting. It can also be run from vMA for reporting via email or another mechanism.


PowerCLI: Why you need this book!

PowerCLI Ref 1st Ed

The VMware vSphere PowerCLI Reference will officially go on sale April 12th, and if you’re a vSphere administrator this one is a must-have. We gathered the collective automation experience of four vExperts and an MVP, and then filtered it through a fifth vExpert. The end result was a collection of polished solutions that are not only technically proficient but, more importantly, will work the first time, every time! Having all built and maintained large infrastructures, we combined our collective knowledge and wrote a complete reference, covering the entire life cycle from creating your environment to monitoring it long term. I mean it, we left no stone unturned; where VMware had no PowerCLI solution, we wrote our own. This book includes a PowerShell module with 79 custom advanced functions! We also considered how you would use our book and chose to take a task-driven approach, which enables you to just flip to the answer you need. You don’t have to “read” all 780 pages… instead, think of this book as a fire extinguisher for your virtual infrastructure.

So that’s great, but who needs a book? With blogs and the community forums you can just find the answer, right? Yes, yes you can, eventually, but if you do that you’re shopping at a fishery. In this book we WILL teach you to fish! As a commitment to that end, we created a dedicated web site to support our work. If you find a bug, or if we missed something, you can post a question in our forum!

Admittedly this is my first published work, but I couldn’t be prouder of it. In my opinion, the Onyx and vSphere SDK chapters alone are worth the cover price. I look forward to the release, and to your honest opinions of our work!


PowerCLI: Force NetApp Virtual Storage Console (VSC) to use a FQDN

First let me say, I love VSC; it took all of the complexity out of using NetApp storage in a vSphere environment. I have been tolerating one annoyance for quite some time now, and this morning said annoyance broke VSC at a customer site. What’s wrong with VSC? Well, for some reason it forces you to register the plugin with vCenter using an IP address. At this site, an over-restrictive proxy configuration meant that only fully qualified domain names (FQDNs) worked; any IP address was redirected to a web page that explained said over-restrictive policy. Because VSC is mainly a web page, the use of an IP address broke everything.

I searched around a little and found William Lam’s post on removing plug-ins with the MOB. Once I found the pivot point for plug-ins, I searched the API Reference and found the ExtensionManager object. Now that I had the object in hand, I fired up PowerCLI and in less than 10 minutes figured out how to manually adjust the URL VSC used. It was so easy that I think I’m going to try to slap together a quick module to manage plug-ins via PowerCLI, but in the meantime, if you, like me, have been frustrated by VSC’s use of an IP address… try this.
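The ExtensionManager flow is roughly the following. Treat this as a hedged sketch: the `*netapp*` key match and the `vsc.example.com` FQDN are placeholders I’ve made up for illustration, so check the actual extension key and URL in your own ExtensionList before updating anything.

```powershell
# Sketch: rewrite a plug-in's registered URL(s) via the ExtensionManager
$extMgr = Get-View ExtensionManager
$ext = $extMgr.ExtensionList | Where-Object { $_.Key -like '*netapp*' }  # find VSC's key

# Replace the IP in each registered URL with the FQDN (placeholder name)
foreach ($info in @($ext.Server) + @($ext.Client)) {
    if ($info) {
        $info.Url = $info.Url -replace '\d{1,3}(\.\d{1,3}){3}', 'vsc.example.com'
    }
}
$extMgr.UpdateExtension($ext)
```

`UpdateExtension` takes the whole modified Extension object back, which is why the sketch edits the `Server` and `Client` info in place rather than registering a new extension.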

ESXi: Kickstart installs using CD-ROM or USB media

Kickstart and PXE are almost always used in conjunction; however, you don’t have to use PXE to kickstart your installs. Fortunately, you can provide the location of a kickstart file even if you are booting from USB and/or CD-ROM media.

The process I’m going to describe here involves putting the kickstart(s) in a place accessible on the CD/USB media. This is particularly useful if you have an isolated network, a network that has limited resources, or if you simply want to eliminate any questions during a manual install process. For example, I use kickstart to do the basic network configuration and the like; however, there are very few things that cannot be set via the command line, so you could, if desired, use kickstart as a method of configuration management. Or you could simply have it do the install exactly as if you had answered the questions when the default media is booted, but without actually answering the questions for each host.

You can also provide kickstart files from a web or NFS server even if you are using media. This can prove especially beneficial if you have frequently updated kickstarts (or a large number of them) and you don’t want to have to update the media when kickstarts are changed and/or added. VMware’s documentation describes how to provide the location if you are using web (http) or NFS as the method for providing the configuration.
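For reference, pointing the installer at a web-hosted kickstart is just a different `ks=` argument on the same boot line that the bootable-media method uses; the hostname and path here are placeholders.

```
mboot.c32 vmkboot.gz ks=http://ks.example.com/esxi/ks.cfg --- vmkernel.gz --- sys.vgz --- cim.vgz --- ienviron.vgz --- install.vgz
```

The trade-off is exactly as described above: a URL means the media never has to be remastered when a kickstart changes, at the cost of needing the web (or NFS) server reachable at install time.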

The example I have here was done from a Linux host (RHEL 5, to be exact). All of this is reproducible in Windows; however, I am not providing any documentation on how to do that.


VMworld Labs: Hands-On

First off, Happy VMworld everyone! It’s finally here, so let the socialization, learning and exhaustion begin.

I had a bit of free time this morning, so I decided to do a lab. I didn’t arrive early enough to get in on the preview like some others did; however, even after seeing their posts about it, the labs floor is quite impressive. From the seats arranged so that everyone can see the projectors to the apparent “control center” in the middle, it’s an impressive setup.

I chose lab 20…logging in was painless and provisioning of the virtual machines was extremely quick. Access was also very good, initially anyway…more on that in a bit. Progress through the lab was quick, directions were good, and the screenshots were accurate and quickly identified the key fields that needed data entered.

Overall, I really enjoyed the lab. In the last couple of years I haven’t done very many of them, simply because of the inability to get in. This year I had zero wait, and there were a few free seats around me. The dual-monitor thin clients work well, and PCoIP is amazing…

On that not-so-subtle transition: apparently the section I was in was running off the DC cloud. For the first 75% of the lab, I couldn’t tell…in fact, if the WAN (I assume it was the WAN) hadn’t had some issues, I would never have known. Even when things started to go south and I could tell that latency was through the roof, the session was still usable; slow, but usable. At one point the client lost connectivity, but it quickly regained it and resumed the session exactly where I had left off.

I’m incredibly impressed with the labs…it’s my first real work done on a thin client, over PCoIP using a “cloud” infrastructure. If the provisioning of virtual machines works in the private cloud like it did for me today on VMware’s lab cloud, then we have a lot to look forward to in the future!

Outstanding work as always Labs Team, thank you!

VMware vSphere Hypervisor (ESXi) 4.1 kickstart – A.K.A. official “touchfree” ESXi installs

VMware is a facilitator. I know, you’re thinking “yeah, they facilitate my power/space/cooling savings, they facilitate infrastructure consolidation, IT agility, high availability, etc.”, but really, they facilitate me being lazy (which for sysadmins is a good thing…a lazy admin will only want to do a task once, then automate the sh*t out of it).

I’ve already documented how I hacked the ESXi 4.0 installer to have it do the installation without interaction. However, VMware has one-upped me and integrated kickstart into their installer. This makes things VASTLY easier, requires no tomfoolery with the ISO, and is significantly more capable.

This blog post will be just a short one to demonstrate how easy it is now to have the install be “touch-free”. I am working on some more complex examples in the coming days.

So without further blithering from me, on with the install! Put the CD in the drive (or mount the iso remotely), boot the server. When it reaches the boot options menu:

ESXi 4.1 Boot options screen

Press Tab to append options to the boot line. Append the following after vmkboot.gz, but before the --- that follows it.


It is VERY IMPORTANT that you place the kickstart file location after vmkboot.gz, but before the next boot module. It should not be at the end.

mboot.c32 vmkboot.gz ks=file://etc/vmware/weasel/ks.cfg --- vmkernel.gz --- sys.vgz --- cim.vgz --- ienviron.vgz --- install.vgz

Here is an example:

ESXi 4.1 Boot Options with KS

When you’re done, press enter. It will begin to load the data off the CD, and when the different install modules are done it should simply begin to install ESXi just like how I had hacked it together previously…

ESXi 4.1 KS Install

The only thing left will be to press “Enter” when it’s done (why?!).

A word of caution…the kickstart that VMware has provided will automatically select and format the first disk that it finds, regardless of whether it is local or “remote” (i.e. a SAN LUN). I would assume that the vast majority of the time it’s going to find the local disks first, however…

Hopefully in the next few days I’ll have some more time to play with the new kickstart features and post some more examples. VMware has really done some great things with this process and it is now possible to have the entire process be automated…
1) use DHCP to provide the “permanent” IP
2) use a network PXE boot for the media and to provide a KS file
3) use the %post section of the kickstart file to have the server reach out and touch a vCLI or PowerCLI configuration host and retrieve its permanent configuration.

The reason that step one should provide the actual IP is that it provides an easy way of having your configuration host (vCLI or PowerCLI) know what IP (and potentially hostname) to assign to the host.
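Steps one through three can be sketched as a minimal ks.cfg. This is an illustration only: the hostnames, URLs, and register-host script are placeholders I’ve invented, and you should verify the directive set against the ESXi 4.1 scripted-install documentation before using it.

```
# Sketch of an ESXi 4.1 ks.cfg -- all hostnames/URLs are placeholders
vmaccepteula
rootpw changeme
autopart --firstdisk --overwritevmfs
install url http://ks.example.com/esxi41/
network --bootproto=dhcp --device=vmnic0

%post --interpreter=busybox
# phone home so the vCLI/PowerCLI configuration host knows this IP is installed
wget -q http://ks.example.com/cgi-bin/register-host
```

The %post phone-home is what ties the DHCP-assigned “permanent” IP from step one to the configuration host in step three: the web hit tells it which host just finished installing and is ready for its full configuration.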

Good luck, and thanks to VMware for (finally) integrating kickstart with ESXi!

vSphere: Console… we don’t need no stink’in console

I won’t attempt to provide a feature rundown or tell you why vSphere 4.1 is the greatest thing since sliced bread. It appears to be a solid release, but I’ll leave that analysis to the experts… Instead, I want to talk about the vSphere hypervisor (previously ESXi).

Why the name change? Simple: what was previously mis-branded as a separate technology is really the hypervisor’s core. In ESX 3.5, ESXi was a separate technology, but as of vSphere 4 they have had a unified core. In fact, the product we like to think of as vSphere 4.0/4.1 is really just the vSphere hypervisor with a special management VM! This is important: the only difference is the console, which is nothing more than a VM!

So why the distinction, and why now? VMware is showing its hand this round because that special VM is going bye-bye. The next release of vSphere will not have a service console… PANIC… RUN IN CIRCLES, THE ZOMBIES ARE COMING!!!

Don’t panic; personally, I applaud the move. Over the past year and a half I’ve heard every argument against the console-less hypervisor, but honestly, I chalk it all up to people fearing change. There are a couple thousand admins who have invested a lot of time mastering vSphere, and VMware is about to change the whole game on them. These guys/gals bring up several arguments against the console-less hypervisor; I’ll attempt to offer my counterargument to each of these points.

Q. No 3rd-party agents.

A. It has been public knowledge that the console was going away, and as of vSphere 4.0 VMware shipped a new management appliance, the vMA. One of the intended uses of this appliance is to host 3rd-party agents. So you see, we do still have 3rd-party agents; they just need to be rewritten. In most cases this will result in a better product; unfortunately, the vast majority of 3rd-party software could better be described as a really complex Perl script running over SSH!

Q. Hardware monitors/plug-ins.

A. Part of the original ESXi 3.5 release was the introduction of a rudimentary CIM provider. This provider has since been fully expanded and made extensible. While it is a change from the traditional agent-based monitors, CIM does fill this gap.

Q. Automating common tasks.

A. As of vSphere 4.1, Tech Support Mode supports SSH, but you should really be using either PowerCLI or the vCLI! While it is true that there are still a couple of things that can only be done via the console, I’m confident VMware will fix those gaps before putting the console out to pasture.

Q. Security.

A. So this is the big one, and my personal pet peeve. I’ve heard security experts bash the vSphere hypervisor, claiming it was insecure. I just don’t understand this stance; admittedly, I’m no security expert. I only work with the federal government in some of the most secure data centers in the world, but what do I know…

Let’s break this down, shall we… The only difference is a VM. Admittedly this VM has special connections into the vmkernel, but it’s still just a VM. How exactly does the inclusion of a VM make the hypervisor more secure? In my opinion, the exclusion of this VM instantly increases the security posture of most organizations. The reason for this is simple: it was hard to properly harden the console. Conversely, it was all too easy to open a critical security hole and expose one’s infrastructure with the console.

Yes, you still have to do several things to really lock down the console-less hypervisor, but it’s not nearly the feat the console once was. In fact, it’s simple:

1. Modify the proxy.xml (turning off all unneeded web services, and making everything use HTTPS).
2. Enable Lockdown mode.
3. Physical security.
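For step one, the endpoint entries in proxy.xml look roughly like the fragment below. This is from memory and illustrative only; the exact element names, ports, and namespaces vary by build, so check the file on your own host before editing, and note that `httpsOnly` (versus `httpAndHttps` or `httpsWithRedirect`) is the setting that forces a service onto HTTPS.

```xml
<!-- Illustrative proxy.xml endpoint entry: force this namespace to HTTPS only -->
<e id="0">
  <serverNamespace>/</serverNamespace>
  <accessMode>httpsOnly</accessMode>
</e>
```

Endpoints for services you don’t need can be removed entirely rather than restricted, which is the “turning off all unneeded web services” half of the step.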

That’s it, folks; that’s all it takes to secure the hypervisor. There are a couple hundred other little things necessary to design a secure infrastructure, but as you can see, the hypervisor itself is easy! In fact, I’m so confident in this that I’m willing to hold a Bobby Flay-style throwdown. If you have the means to provide a pair of internet-facing vSphere hosts, I’ll secure the console-less hypervisor, we’ll get Texiwill to harden the legacy console-based hypervisor, and then we’ll release the IPs to the world. Have at it, folks; I bet the console-less hypervisor holds up at least as long as the legacy hypervisor!

Why so brash? Well, it will take an exploit to get into the console-less hypervisor, and any such exploit will also be present in the legacy hypervisor. Without access to the physical host or vCenter, there is simply no other way into the console-less vSphere hypervisor. Remember, this isn’t Linux or BSD or UNIX… it’s vSphere. It’s practically firmware, and the whole point was to remove all that crap that weakened the security and stability to begin with!

I really want to put this to bed! Let’s develop the to-do list for VMware: the 10-20 things they need to fix before they can finally kill the console. Then let’s collectively shut up about it. It’s going to happen, and complaining with arbitrary little gripes… or demanding NDA meetings with engineers… isn’t going to stop any of it. The task at hand is simple: weed out the crap and focus on what needs to be fixed in vSphere v.Next.

If we missed something let us know in the comments.

PowerCLI: Reconnect VMhosts after changing vCenter certificates

If you have ever changed the vCenter server certificates, you’ve experienced having all your hosts disconnected from vCenter. I couldn’t imagine reconnecting them one at a time… You could do this all natively in PowerCLI, but that would require you to fully remove and then re-add the hosts. That is very inconvenient, and almost as much trouble as doing it by hand… In this case it is both faster and easier to just use the native vSphere API.
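The shape of the API approach is roughly the following. This is a hedged sketch, not the full script: it assumes a reconnect after a certificate change needs no new credentials (so an empty HostConnectSpec suffices), and the ReconnectHost_Task parameter list differs between API versions, so check the method signature for your PowerCLI build.

```powershell
# Sketch: reconnect every disconnected host via the vSphere API
$disconnected = Get-View -ViewType HostSystem `
    -Filter @{ 'Runtime.ConnectionState' = 'disconnected' }

foreach ($hv in $disconnected) {
    # After a certificate change the saved credentials are still valid,
    # so an empty connect spec is typically enough
    $spec = New-Object VMware.Vim.HostConnectSpec
    $hv.ReconnectHost_Task($spec) | Out-Null
    Write-Host "Reconnect requested for $($hv.Name)"
}
```

Because each ReconnectHost_Task call returns immediately, all the hosts reconnect in parallel, which is what makes this so much faster than removing and re-adding them one at a time.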