Running a Command During ESXi Boot to Start a VM
One way to automatically start VMs on a host after a reboot is to use the VM Startup/Shutdown feature. This feature works well on standalone ESXi hosts and ensures VMs start automatically, in a configured order, after a host reboot. The image below shows how this is configured on an ESXi host managed by vCenter using the Web Client.
From the vSphere Client managing a host directly, it can be found on the Configuration tab under Virtual Machine Startup/Shutdown.
An important thing to note here: if the host is part of a vSphere HA cluster, automatic startup and shutdown of virtual machines is disabled. Even if HA is not enabled, vMotioning a VM from one host to another and then back sets the machine back to manual startup.
What if the VM is not protected by HA (on local storage), or if for some reason HA does not work (for example, all hosts in a cluster fail)? What if… What if… What if…
Now, I won't get into a design discussion about mitigating the risks of all hosts failing due to a power outage, an issue with a blade chassis, or whatever else. It can still happen. I wanted to come up with a fairly bulletproof solution that would start a VM, in this case a Domain Controller running in my home lab, when ESXi boots.
This article from @lamw on Executing Commands During Boot Up in ESXi 5.1 pointed me to one potential solution.
The vim-cmd utility can be used to find and power on a VM from the ESXi command line. The following command pipeline finds the vmid of a virtual machine in the host inventory and powers it on:
vim-cmd vmsvc/getallvms | grep "INVENTORYNAMEOFVM" | cut -d " " -f 1 | xargs vim-cmd vmsvc/power.on
Replace INVENTORYNAMEOFVM with the inventory name of the virtual machine you want to start, in this case LABDC1. To run the command as part of the ESXi boot, simply add the line to /etc/rc.local.d/local.sh before the "exit 0" line.
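To see how the pipeline extracts the vmid, here is a minimal sketch run against a hypothetical line of getallvms output (the vmid 5 and the .vmx path are made up for illustration):

```shell
# Hypothetical one-line sample of `vim-cmd vmsvc/getallvms` output;
# the real command prints a header row plus one line per registered VM.
sample='5      LABDC1     [datastore1] LABDC1/LABDC1.vmx     windows8Server64Guest     vmx-09'

# grep selects the line matching the inventory name; cut takes the first
# space-delimited field, which is the vmid.
vmid=$(printf '%s\n' "$sample" | grep "LABDC1" | cut -d " " -f 1)
echo "$vmid"    # prints 5 for this sample
```

That vmid is then handed to vim-cmd vmsvc/power.on by xargs.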
The local.sh will look something like this:
#!/bin/sh

# local configuration options

# Note: modify at your own risk!  If you do/use anything in this
# script that is not part of a stable API (relying on files to be in
# specific places, specific tools, specific output, etc) there is a
# possibility you will end up with a broken system after patching or
# upgrading.  Changes are not supported unless under direction of
# VMware support.

vim-cmd vmsvc/getallvms | grep "LABDC1" | cut -d " " -f 1 | xargs vim-cmd vmsvc/power.on

exit 0
See VMware KB 2043564 or @lamw's Executing Commands During Boot Up in ESXi 5.1 article for more details.
Since the hosts are in a DRS cluster and the VM could be located on any host in the cluster, I just need to add the line to local.sh on each host where the VM could run. If the command executes and the VM is not found or is already powered on, it simply errors out and all is well.
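If you would rather avoid that harmless error, a slightly more defensive sketch could check the power state first. This assumes the output of vim-cmd vmsvc/power.getstate contains the string "Powered off" when the VM is off:

```shell
# Hypothetical guarded version: skip power.on if the VM is not registered
# on this host or is already running. Assumes power.getstate output
# contains "Powered off" when the VM is off.
vmid=$(vim-cmd vmsvc/getallvms | grep "LABDC1" | cut -d " " -f 1)
if [ -n "$vmid" ]; then
    if vim-cmd vmsvc/power.getstate "$vmid" | grep -q "Powered off"; then
        vim-cmd vmsvc/power.on "$vmid"
    fi
fi
```

The one-liner works fine without the guard; this just keeps noise out of the boot log.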
If I wanted to add other VMs, I would simply add another line to local.sh for each (probably with a sleep between them), for example:
#!/bin/sh

# local configuration options

# Note: modify at your own risk!  If you do/use anything in this
# script that is not part of a stable API (relying on files to be in
# specific places, specific tools, specific output, etc) there is a
# possibility you will end up with a broken system after patching or
# upgrading.  Changes are not supported unless under direction of
# VMware support.

vim-cmd vmsvc/getallvms | grep "LABDC1" | cut -d " " -f 1 | xargs vim-cmd vmsvc/power.on

sleep 120

vim-cmd vmsvc/getallvms | grep "LABFILE01" | cut -d " " -f 1 | xargs vim-cmd vmsvc/power.on

exit 0
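Rather than repeating the pipeline per VM, the same idea can be written as a loop. This is just a sketch; the VM names and the 120-second delay are carried over from the example above and would need adjusting for your environment:

```shell
#!/bin/sh
# Sketch: start several VMs in order with a delay between each.
# The names LABDC1 and LABFILE01 and the 120s sleep come from the
# example above; adjust both for your environment.
for vm in LABDC1 LABFILE01; do
    vim-cmd vmsvc/getallvms | grep "$vm" | cut -d " " -f 1 \
        | xargs vim-cmd vmsvc/power.on
    sleep 120   # give the previous VM time to boot
done
```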
Not the most elegant (or manageable) solution for sure, but one that should work to make sure a critical VM is powered on automatically as the hosts are brought back up.
Comments and thoughts are always welcome.
Have a great day!
Interesting post. However, I'm pretty sure that if an HA cluster is configured, then all VMs will be started by HA automatically. So why use this hack?
David,
Thanks for stopping by and for the comment. If the cluster is configured for HA, then you are correct: all VMs will be started by HA automatically if there are surviving hosts in the HA cluster. What if the entire cluster goes down (extended power outage or some other failure)?
It is really more useful if you have a VM on local storage; these VMs are not/cannot be protected by HA.
Not really a “hack” since using the local.sh to run commands at boot is supported (even if it is not really recommended).
Thanks again for stopping by.
Hersey
Let me say it again in other words …
If the entire HA cluster goes down, for example because of a power outage, all HA-protected VMs will be started automatically when power is back. The HA cluster knows that these VMs should be in the Powered On state, and because they are not, it will start all HA-protected VMs. I know that because in my work lab there is a periodically scheduled power shutdown, and all VMs in the HA cluster are started automatically when power is back. The same is true in my home lab. My home lab HA cluster is even designed for power failures. As I use only a distributed virtual switch (aka DVS) in my home lab, my vCenter VM must be on an ephemeral portgroup to successfully start vCenter after a power failure. In addition, vCenter is required for other VMs to be connected to a network backed by the DVS, so the vCenter VM must have the highest restart priority and all other VMs must have lower priority.
Now, when you have standalone ESXi you can use VM Startup/Shutdown feature as you already mentioned in your post.
That’s why I don’t see any value in modifying /etc/rc.local.d/local.sh.
I call it a hack because it is an advanced configuration which is IMHO not necessary.
But your mileage may vary 😉
Thanks for this post.
Is there a way to run a script during host shutdown in ESXi 6.7?
I really need to force my ESXi host to apply a 120s delay when it receives a host shutdown from the web UI.
Please note, I’m not asking for a CLI command that shuts down the host… I just need to put a hardcoded delay in place each time the ESXi host shuts down.
Thanks, appreciated.