VxRack SDDC workload domain deletion failed

Came across an issue recently where I was trying to delete a workload domain and the procedure failed.

[Screenshots: workload domain deletion failure errors]

After some troubleshooting and log analysis, I found that the HMS process was not running on the management switch:

[Screenshot: HMS process not running on the management switch]

To resolve the issue, I used the following procedure to restart the HMS process on the Dell Cumulus management switch and then restart the vrm-tcserver service on the SDDC Manager VM.

To get the credentials to log in to any of the components, run the following command from the vrm VM. It will display all of the component usernames and passwords:

./vrm-cli.sh lookup-password

To restart the HMS process, SSH into the management switch using the credentials retrieved above:

[Screenshot: SSH session to the management switch]
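For reference, the login is a plain SSH session; the username and address below are placeholders for the values returned by vrm-cli.sh and your switch's management IP:

ssh <username>@<management-switch-address>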

Then run the following commands to kill the HMS process and start it again:

# Kill any running HMS (HmsApp) processes
for PID in $(ps -ef | grep HmsApp | grep -v grep | awk '{print $2}')
do
    kill $PID
done

# Start HMS again
service starthms.sh
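To confirm HMS came back up, check for the HmsApp process again; it should now show up in the process list:

ps -ef | grep HmsApp | grep -v grep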

Once this is done, SSH to the vrm VM and restart the vrm-tcserver service (the watchdog is stopped first so that it does not interfere with the restart):

service vrm-watchdogserver stop 

service vrm-tcserver restart 

service vrm-watchdogserver start
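Optionally, verify the services afterwards. This assumes the init scripts implement a status action, which I haven't confirmed on every build:

# Assumption: these init scripts support 'status'
service vrm-tcserver status
service vrm-watchdogserver status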

Once this was complete, I attempted to delete the workload domain again and it was successful:

[Screenshot: workload domain deletion completing successfully]

VxRack SDDC Overview (Part 3)

Creating a Virtual Infrastructure Workload Domain

Workload domains are logical units that are used to carve up the VxRack’s hardware resources. There are two types of workload domain pre-packaged in the environment:

  • Virtual Infrastructure (VI) workload domain. This consists of a set of ESXi hosts, a vCSA, an NSX Manager and three NSX Controllers. Three hosts is the minimum configuration and 64 is the maximum (the vSphere cluster maximum).
  • VDI workload domain (addressed in a future post)

The bring-up of these workload domains is an automated procedure that calls a particular set of workflows based on the options you select. The software automatically calculates the number of hosts needed to satisfy the capacity requirements you specify in terms of resources (CPU / memory / storage), performance and availability.
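To give a feel for the sizing logic, here is a purely illustrative sketch. The numbers and the ceiling-division rule are my own assumptions for illustration, not the product’s actual algorithm:

# Hypothetical figures, not values from SDDC Manager
REQUESTED_GHZ=120   # CPU capacity requested in the wizard
PER_HOST_GHZ=50     # usable CPU capacity per host

# Ceiling division: smallest host count covering the request
HOSTS=$(( (REQUESTED_GHZ + PER_HOST_GHZ - 1) / PER_HOST_GHZ ))
echo "$HOSTS hosts required"   # prints: 3 hosts required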

To create a VI workload domain, just select Add Workload Domain via SDDC Manager and follow the wizard.

[Screenshot: Add Workload Domain option in SDDC Manager]

Once you get to the workload configuration page, you can select the resources, performance and availability you require for the workload domain.

[Screenshot: workload configuration page]

Each performance option determines capacity-related settings in the workload domain’s vSAN storage policy.

Each availability option determines the number of drive failures that the workload domain’s environment can tolerate.
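As a quick illustration, you can see the sort of values these options map to by looking at the default vSAN policy on one of the hosts. Note this shows host-level defaults (e.g. hostFailuresToTolerate) rather than the exact policy SDDC Manager creates, so treat it as a rough cross-check only:

# Run on an ESXi host in the workload domain; read-only
esxcli vsan policy getdefault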

Note: Select “Use all default networks” unless you want to manually set values for the VLAN ID / subnet / gateway etc.

At the end of the wizard you can review the configuration and click Finish, or you can go back and change some of the configuration.

[Screenshot: configuration review page]

While the workflows are running, it’s possible to monitor the progress using the status window.

[Screenshot: workflow status window]

Note: The new workload domain will be part of the same SSO domain as the management domain and will share the PSCs deployed in the management domain.

Once the workload domain has been created, we can see in the following screenshots that the workflows have added the new vCSA to the vROps and Log Insight instances located in the management domain.

Log Insight 

[Screenshot: workload domain vCSA added to Log Insight]

vROps

[Screenshot: workload domain vCSA added to vROps]

As the management domain and workload domains share the same set of PSCs, the vCSAs are configured to use enhanced linked mode, and it’s possible to see both vCSAs when you log into either vCenter. Both vCSAs are accessed using the same credentials.
Note: The workload domain vCenter and NSX Manager VMs live in the management cluster.

[Screenshot: both vCSAs visible in a single inventory]

If we expand the workload domain vCenter, we can see the newly deployed hosts needed to satisfy the performance options selected previously, and we can also see the NSX Controllers for the workload domain.

[Screenshot: workload domain inventory with hosts and NSX Controllers]

Once the workload domain has been successfully created and you have verified the configuration, you are ready to provision VMs.

In the next post we will look at creating a VDI workload domain…