Issue Re-Commissioning a host in vCF

I wanted to decommission an ESXi server from VMware Cloud Foundation and recommission it to test the procedure, but I ran into an issue. Here are the steps that I followed:

  1. Decommissioned the host from SDDC Manager. Once decommissioned, the ESXi password defaults back to EvoSddc!2016


2. Re-imaged the host using the VIA, selecting the device type “ESXI_Server”


3. Once the host was installed, I assigned it an IP of (has to be in the range of –

4. Checked that SSH was enabled and the firewall rules were set correctly (connections restricted to the subnet)

5. SSH’d to the VRM (SDDC Manager) VM and edited the following file to reflect the BMC username and password


6. Then attempted to run the recommission script

sudo /home/vrack/bin/

I could see that the host was being picked up correctly, but the recommission procedure was failing:


Then I looked in vrack-vrm.log, which is located in /home/vrack/vrm/logs, and was able to get more information on what was causing the commissioning to fail. It seemed that the host was trying to mount an NFS datastore and the procedure was failing.
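The log check can be sketched like this (the grep pattern is just an illustration of what I searched for; the log path is the one given above):

```shell
# Scan the VRM log for NFS-related errors; handle the file being absent
LOG=/home/vrack/vrm/logs/vrack-vrm.log
if [ -f "$LOG" ]; then
  grep -iE 'error|nfs' "$LOG" | tail -n 20
else
  echo "log not found: $LOG"
fi
```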


Then I logged into the host using the vSphere Client and could see that the host was trying to mount an NFS datastore from the LCM repository VM.


I then examined this VM and could see that it didn’t have any IP address and the NICs were disconnected.


I connected the NICs, rebooted the VM, and it then showed valid IP addresses. I reran the host commission procedure and this time it succeeded.


Once this had succeeded, I was able to see the host in the SDDC Manager physical inventory and continue with the remaining steps to commission the host (step 8 onwards).


vExpert 2017


I am delighted to have been added to the vExpert team for 2017. The vExpert community is a great forum with some great benefits, which include:

vExpert Program Benefits

  • Invite to our private #Slack channel
  • vExpert certificate signed by our CEO Pat Gelsinger.
  • Private forums on
  • Permission to use the vExpert logo on cards, website, etc for one year
  • Access to a private directory for networking, etc.
  • Exclusive gifts from various VMware partners.
  • Private webinars with VMware partners as well as NFR’s.
  • Access to private betas (subject to admission by beta teams).
  • 365-day eval licenses for most products for home lab / cloud providers.
  • Private pre-launch briefings via our blogger briefing pre-VMworld (subject to admission by product teams)
  • Blogger early access program for vSphere and some other products.
  • Opportunity to receive a free blogger pass to VMworld US or VMworld Europe (limited to 50 for US and 35 for EU).
  • Featured in a public vExpert online directory.
  • Access to vetted VMware & Virtualization content for your social channels.
  • Yearly vExpert parties at both VMworld US and VMworld Europe events.
  • Identification as a vExpert at both VMworld US and VMworld EU

Here is a full list of the 2017 H2 vExperts:



VxRack SDDC – NSX Automation

You may have been wondering how much of the NSX installation and configuration process is handled by the automation workflows present in SDDC Manager. Below is a bulleted list of the steps currently undertaken:

  • Physical switch setup to support NSX (MTU / dedicated VLAN etc)
  • Deploy NSX Manager
  • Deploy NSX Controllers
  • Create IP Pools
  • Create a Transport Zone
  • Create a Logical Switch
  • Integrate with vROPS and Log Insight

These steps are performed for both the management domain and any workload domain(s) that are created. Having this work handled by an automated workflow saves a lot of setup time and is another reason why VxRack SDDC is a fantastic platform that will enable your IT department to bring up environments more quickly and easily.

Once the above is completed, you are free to progress with further customization as required. You can deploy Edge Services Gateways, logical routers, load balancers, distributed firewall rules, etc., and really start to use the powerful features of VMware NSX.


VxRack SDDC workload domain deletion failed

I came across an issue recently where I was trying to delete a workload domain and the procedure failed.



After performing some troubleshooting and log analysis, I found that the HMS process was not running on the management switch.


To resolve the issue, I followed this procedure to restart the HMS process on the Dell Cumulus management switch and then restarted the vrm-tcserver service on the SDDC Manager VM:

To get the credentials to log in to any of the components, run the following command from /home/vrack/bin on the VRM VM. This command will display all the component usernames and passwords:

./lookup-password

To restart the HMS process, SSH into the management switch using the credentials provided.


Then run the following commands:

for PID in $(ps -ef | grep HmsApp | grep -v grep | awk '{print $2}')
do
  kill $PID
done
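Where procps is available, the same ps/grep/awk loop can be wrapped into a small helper using pkill (a sketch; I have not confirmed pkill is present on the Cumulus switch):

```shell
# Same effect as the ps/grep/awk loop: terminate every process whose
# command line matches a pattern (pkill/pgrep are part of procps)
kill_by_name() {
  pkill -f "$1" || echo "no process matching $1"
}

# usage on the management switch:
#   kill_by_name HmsApp
```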



Once this is done, SSH to the VRM VM and restart the vrm-tcserver service:

service vrm-watchdogserver stop 

service vrm-tcserver restart 

service vrm-watchdogserver start

Once this was complete, I attempted to delete the workload domain again, and this time it succeeded.


VxRack SDDC Overview (Part 3)

Creating a Virtual Infrastructure Workload Domain

Workload domains are logical units that are used to carve up the VxRack’s hardware resources. There are two types of workload domains pre-packaged in the environment:

  • Virtual Infrastructure workload domain. This consists of a set of ESXi hosts, a vCSA, an NSX Manager, and three NSX Controllers. Three hosts is the minimum configuration and 64 is the maximum (the vSphere cluster maximum).
  • VDI workload domain (addressed in a future post)

The bring-up of these workload domains is an automated procedure that calls a particular set of workflows based on the options you select. The software automatically calculates the number of hosts needed to satisfy the capacity requirements you specify in terms of resources (CPU / memory / storage), performance, and availability.
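SDDC Manager’s actual sizing logic isn’t published, but the idea can be sketched roughly as follows. All per-host capacity figures here are hypothetical, and the spare-host handling is my own simplification of the availability option:

```shell
# Hypothetical per-host capacities, for illustration only
HOST_CPU_GHZ=50
HOST_MEM_GB=512
HOST_STORAGE_GB=8000

# hosts_needed CPU_GHZ MEM_GB STORAGE_GB SPARES
# Takes the largest per-resource host count (ceiling division),
# adds spare hosts for the chosen availability level, and enforces
# the three-host minimum of a VI workload domain.
hosts_needed() {
  by_cpu=$(( ($1 + HOST_CPU_GHZ - 1) / HOST_CPU_GHZ ))
  by_mem=$(( ($2 + HOST_MEM_GB - 1) / HOST_MEM_GB ))
  by_sto=$(( ($3 + HOST_STORAGE_GB - 1) / HOST_STORAGE_GB ))
  n=$by_cpu
  if [ "$by_mem" -gt "$n" ]; then n=$by_mem; fi
  if [ "$by_sto" -gt "$n" ]; then n=$by_sto; fi
  n=$(( n + $4 ))
  if [ "$n" -lt 3 ]; then n=3; fi
  echo "$n"
}
```

Under these made-up figures, a request for 100 GHz, 2048 GB of memory, and 16,000 GB of storage with one spare host would come out at five hosts (memory is the binding resource).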

To create a VI workload domain, just select Add Workload Domain in SDDC Manager and follow the wizard.


Once you get to the workload configuration page, you can select the resources / performance and availability you require for the workload domain.


Each performance option determines capacity-related settings in the workload domain’s vSAN storage policy.

Each availability option determines the number of drive failures that the workload domain’s environment can tolerate.

Note: Select “Use all default networks” unless you want to manually set values for the VLAN ID / subnet / gateway, etc.

At the end of the wizard you can review the configuration and click finish, or you can go back and change some of the configuration


While the workflows are running, it’s possible to monitor the progress using the status window.


Note: The new workload domain will be part of the same SSO domain as the management domain and will share the PSCs deployed in the management domain.

Once the workload domain has been created, we can see in the following screenshots that the workflows add the new vCSA to vROps and Log Insight, which are located in the management domain.

Log Insight 




As the management domain and workload domains share the same set of PSCs, the vCSAs are configured to use Enhanced Linked Mode, and it’s possible to see both vCSAs when you log in to either vCenter. Both vCSAs are accessed using the same credentials.
Note: The workload domain vCenter and NSX Manager VMs live in the management cluster.


If we expand the workload domain vCenter, we can see the number of newly deployed hosts needed to satisfy the performance options selected previously. We can also see the NSX Controllers for the workload domain.


Once the workload domain has been successfully created and you have verified the configuration, you are ready to provision VMs.

In the next post we will look at creating a VDI workload domain…



VxRack SDDC Overview (Part 2)

Bringing up the System 

There are a few acronyms that we need to be familiar with before we begin to build out the system.

VIA: The VMware Imaging Appliance is used for imaging the physical racks. It images the physical switches and ESXi hosts, and deploys the VMs needed to complete the system build-out, including SDDC Manager. The VIA is installed via an OVA template and runs services such as DHCP and PXE to discover and identify devices and perform the imaging.

SDDC Manager: SDDC Manager is responsible for provisioning and managing the logical and physical resources. Once the rack is imaged by VIA, SDDC Manager completes the build-out procedure.

Imaging the Rack

Once you have the VIA deployed, you need to upload the vCF software bundle ISO to the VM and activate it.

This ISO contains all the software and scripts needed to image the rack; it includes the following components:

  • vSAN
  • vSphere
  • NSX
  • vRealize Log Insight
  • SDDC Manager
  • vRealize Operations
  • PSC
  • VMware Horizon View


Once the bundle is activated, you provide a name for the imaging run and select the deployment type: either full or individual component.


Note: Make sure to select “Add-on Rack” if imaging an additional rack.

Once imaging has completed successfully, you will see the following screen.


The imaging procedure configures vSAN on the first host and deploys the VMs to this host and datastore. It also copies the software bundle up to the vSAN datastore and connects the ISO to the SDDC Manager VM so that this VM can continue the build-out process.

Note: Once imaging has completed, you need to get the root password that has been assigned to the SDDC Manager VM. To do this, type the following into a browser:

Cloud Foundation Bring up Process

At this stage you are ready to log into the SDDC Manager VM to continue the process


After setting the system time, the system will perform a power-on self-validation in which all the physical components in the rack are verified as operational.

You can then step through the rest of the wizard and fill in the necessary details. There are separate pages for general config / management / vMotion / vSAN / VXLAN / and the data centre connections.

Make note of the component IP allocations as these are required later in the bring up process



Note: All components are configured to use the SDDC Manager VM for DNS and NTP services. SDDC Manager uses a piece of software called Unbound for name resolution and uses a forwarder address for external name resolution.
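For reference, a forwarder in Unbound is configured with a forward-zone stanza along these lines. This is an illustrative fragment, not taken from the appliance, and the forwarder address is hypothetical:

```
# unbound.conf fragment: send all external lookups to an upstream DNS server
forward-zone:
    name: "."                   # apply to every zone not served locally
    forward-addr: 192.0.2.53    # hypothetical upstream forwarder address
```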

Once the procedure is completed, you will be prompted to perform password rotation, which will change the passwords on all components.


To view all the component passwords, you can SSH to the SDDC Manager VM and type the following commands:

cd /home/vrack/bin/

./lookup-password

At this stage the system should be built and you can log in to vCenter.

All Screenshots taken from VMware Cloud Foundation Overview and Bring-Up Guide

Further Reading

VxRack SDDC Overview (Part 1)

I have been doing a lot of investigative work recently with VxRack SDDC, and I would like to give you an overview of the platform architecture and how it all fits together.

VxRack SDDC is a hyper-converged platform from Dell EMC that provides scalability and flexibility and is aimed at large and extra-large customers with large numbers of VMs and users. The solution is co-engineered by Dell EMC and VMware and is made up of pre-loaded software with compute, storage, and networking components.

The solution is powered by vCF (VMware cloud foundation) and provides an easy path to building out a VMware software-defined data center.

VMware Cloud Foundation includes : 

VMware vSphere

VMware NSX

VMware vSAN

VMware SDDC Manager

At the center of the solution are SDDC Manager and lifecycle management (LCM). These components enable fully automated deployment, configuration, and patching and upgrading of components.

Current Minimum Config

8 nodes

Current Maximum Config

24 nodes in each rack (either all flash or all hybrid)

8 cabinets max (total of 192 nodes)


If you are purchasing just one rack, there are two Cisco 9372s, which function as ToR switches, and a Dell S3048 switch running Cumulus Linux, which is the management switch. The management switch is used for OOB connections, and the HMS (hardware management service) is also installed on it.

For multiple racks, you must have Cisco 9332 spine switches in the second rack


The first four servers on each physical rack are used for the management domain. The SDDC manager creates a single vSAN volume spanning all hosts in the cluster.


Screenshot taken from VMware Cloud Foundation Overview and Bring-Up Guide

The management domain contains the following VMs:

  • SDDC Manager
  • NSX Manager
  • vRealize Operations
  • vRealize Log Insight
  • vCenter Appliance
  • Two external PSCs

If you have multiple racks, there is a management domain, including an SDDC Manager, on each rack. The SDDC Manager on the first rack is the primary node and the others are secondary nodes. If the primary SDDC Manager node fails, one of the secondary nodes will take over.

All racks are part of the same SSO domain and utilize the PSCs located on the first rack; PSCs are not deployed on the second and subsequent racks.

vRealize Operations is deployed on each rack but vRealize Log Insight is only deployed on the first rack.

Next we will take a look at the VIA (VMware Imaging Appliance) and the system bring-up procedure…

Further Reading on VxRack SDDC




EHC available on VxRail

Support for EHC on VxRail was introduced as part of EHC release 4.1.1. This marks a watershed moment in EHC’s evolution, as it is now available to customers of all sizes and also features an automated installation (automated installation is only available on VxRail for now).

Supported services in this release include the following:

  • BaaS (backup as a service)
  • Encryption as a service
  • Single site protection

There is a four-host minimum configuration, and the traditional EHC pods (Core, NEI, and Automation) are created as resource pools in the VxRail cluster. The reason for requiring four hosts instead of three is to ensure that you can suffer the loss of a storage node without affecting storage integrity.
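The four-host figure follows from the standard vSAN failures-to-tolerate rule, sketched here as simple arithmetic (the 2×FTT+1 formula is standard vSAN behaviour; the extra host for rebuild headroom is the reasoning given above):

```shell
# vSAN needs 2*FTT+1 hosts to keep data available when FTT failures occur
FTT=1                              # failures to tolerate
MIN_HOSTS=$(( 2 * FTT + 1 ))       # 3 hosts: 2 data replicas plus a witness
WITH_REBUILD=$(( MIN_HOSTS + 1 ))  # 1 spare so data can be rebuilt after a host loss
echo "$MIN_HOSTS hosts minimum, $WITH_REBUILD with rebuild headroom"
```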

EHC on VxRail allows you to start small and grow your hybrid cloud environment as your needs increase. It is simple, scalable and efficient.

The diagram below depicts the EHC architecture on a VxRail appliance.


If you want to see how hybrid cloud can benefit you, there is further documentation available here:

Click to access h15928r-ehc-4-1-1-concepts-architecture-sg.pdf

Click to access h15927r-ehc-4-1-1-reference-architecture-ra.pdf