VxRack SDDC Overview (Part 3)

Creating a Virtual Infrastructure Workload Domain

Workload domains are logical units used to carve up the VxRack’s hardware resources. There are two types of workload domain pre-packaged in the environment:

  • Virtual Infrastructure (VI) workload domain. This consists of a set of ESXi hosts, a vCSA, an NSX Manager, and three NSX Controllers. Three hosts is the minimum configuration and 64 is the maximum (the vSphere cluster maximum).
  • VDI workload domain (addressed in a future post)

The bring-up of these workload domains is an automated procedure that calls a particular set of workflows based on the options you select. The software automatically calculates the number of hosts needed to satisfy the capacity requirements you specify in terms of resources (CPU, memory, and storage), performance, and availability.
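The sizing logic can be illustrated with a simple sketch. This is my own illustration, not the actual SDDC Manager algorithm, and the per-host capacities are hypothetical values: the host count is driven by whichever resource dimension demands the most hosts, bounded below by the minimum cluster size for the chosen availability level.

```python
# Illustrative sketch only -- not the actual SDDC Manager sizing algorithm.
# The per-host capacities below are hypothetical values.
import math

HOST_CPU_GHZ = 50      # assumed usable CPU per host
HOST_MEM_GB = 256      # assumed usable memory per host
HOST_STORAGE_TB = 8    # assumed usable raw storage per host

def hosts_needed(cpu_ghz, mem_gb, storage_tb, failures_to_tolerate=1):
    """Return the host count satisfying the largest resource demand.

    vSAN mirroring (FTT=n) stores n+1 copies of each object, so the
    storage demand is multiplied accordingly, and 2n+1 hosts is the
    minimum cluster size.
    """
    raw_storage = storage_tb * (failures_to_tolerate + 1)  # mirror copies
    by_resource = max(
        math.ceil(cpu_ghz / HOST_CPU_GHZ),
        math.ceil(mem_gb / HOST_MEM_GB),
        math.ceil(raw_storage / HOST_STORAGE_TB),
    )
    return max(by_resource, 2 * failures_to_tolerate + 1)

# Storage is the binding dimension here: 20 TB mirrored needs 5 hosts.
print(hosts_needed(cpu_ghz=120, mem_gb=1024, storage_tb=20))  # -> 5
```

Here a small CPU/memory request still yields three hosts, because the availability floor (2×FTT+1 for FTT=1) overrides the resource-driven count.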

To create a VI workload domain, select Add Workload Domain in SDDC Manager and follow the wizard.

workload

Once you get to the workload configuration page, you can select the resource, performance, and availability requirements for the workload domain.

workload2

Each performance option determines capacity-related settings in the workload domain’s vSAN storage policy.

Each availability option determines the number of drive failures that the workload domain’s environment can tolerate.
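As a rough sketch of how this works (the option names and the mapping here are my own assumptions, not taken from the product), each availability level can be thought of as a vSAN failures-to-tolerate (FTT) value; with RAID-1 mirroring, tolerating n failures requires 2n+1 hosts:

```python
# Hypothetical mapping of availability options to vSAN FTT values.
# The option names are illustrative; the 2*FTT+1 host rule is the
# standard requirement for vSAN RAID-1 (mirroring) storage policies.
AVAILABILITY_TO_FTT = {"normal": 1, "high": 2}

def min_hosts_for(availability):
    ftt = AVAILABILITY_TO_FTT[availability]
    return 2 * ftt + 1  # data copies plus witness components

print(min_hosts_for("normal"))  # 3 hosts tolerate 1 failure
print(min_hosts_for("high"))    # 5 hosts tolerate 2 failures
```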

Note : Select “Use all default networks” unless you want to manually set values such as the VLAN ID, subnet, and gateway.

At the end of the wizard you can review the configuration and click Finish, or go back and change some of the settings.

workload3

While the workflows are running, it’s possible to monitor their progress using the status window.

workload4

Note : The new workload domain will be part of the same SSO domain as the management domain and will share the PSCs deployed in the management domain.

Once the workload domain has been created, we can see in the following screenshots that the workflows add the new vCSA to the vROPS and Log Insight instances located in the Management Domain.

Log Insight 

workload7

vROPS

workload5

As the management domain and workload domains share the same set of PSCs, the vCSAs are configured in Enhanced Linked Mode, and it’s possible to see both vCSAs when you log into either vCenter. Both vCSAs are accessed using the same credentials.
Note : The workload domain vCenter and NSX Manager VMs live in the management cluster.

vc2

If we expand the workload domain vCenter, we can see the number of hosts that were deployed to satisfy the performance options selected previously. We can also see the NSX Controllers for the workload domain.

vc

Once the workload domain has been successfully created and you have verified the configuration, you are ready to provision VMs.

In the next post we will look at creating a VDI workload domain…

 

 

VxRack SDDC Overview (Part 2)

Bringing up the System 

There are a few acronyms that we need to be familiar with before we begin to build out the system.

VIA : The VMware Imaging Appliance is used for imaging the physical racks. It images the physical switches and ESXi hosts, and deploys the VMs needed to complete the system build-out, including SDDC Manager. The VIA is installed via an OVA template and runs services such as DHCP and PXE to discover and identify devices and perform the imaging.

SDDC Manager : SDDC Manager is responsible for provisioning and managing the logical and physical resources. Once the rack is imaged by VIA, SDDC Manager completes the build-out procedure.

Imaging the Rack

Once you have the VIA deployed, you need to upload the vCF software bundle ISO to the VM and activate it.

This ISO contains all the software and scripts needed to image the rack. It includes the following components:

  • vSAN
  • vSphere
  • NSX
  • vRealize Log Insight
  • SDDC Manager
  • vRealize Operations
  • PSC
  • VMware Horizon View

1

Once the bundle is activated, you provide a name for the imaging run and select a deployment type of either full or individual component.

3

Note : Make sure to select “Add-on Rack” if imaging an additional rack

Once imaging has completed successfully, you will see the following screen.

4

The imaging procedure configures vSAN on the first host and deploys the VMs to this host and datastore. It also copies the software bundle up to the vSAN datastore and connects the ISO to the SDDC Manager VM so this VM can continue the build-out process.

Note : Once imaging has completed, you need to retrieve the root password that has been assigned to the SDDC Manager VM. To do this, enter the following into a browser, where runid is the ID of the imaging run:

192.168.100.2:8080/via/ipsecThumbprint/runid
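If you want to script this step, a hypothetical helper could build the lookup URL from the run ID (the address and path come from the note above; the http scheme is my assumption):

```python
# Hypothetical helper that builds the VIA password-lookup URL described
# above. Host, port, and path are from the note; the scheme is assumed.
def via_password_url(run_id, via_host="192.168.100.2", port=8080):
    return f"http://{via_host}:{port}/via/ipsecThumbprint/{run_id}"

print(via_password_url(1))  # -> http://192.168.100.2:8080/via/ipsecThumbprint/1
```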

Cloud Foundation Bring up Process

At this stage you are ready to log into the SDDC Manager VM to continue the process

https://192.168.100.40:8443/vrm-ui

5

After setting the system time, the system performs a power-on self-validation in which all the physical components in the rack are verified as operational.

You can then step through the rest of the wizard and fill in the necessary details. There are separate pages for the general configuration, management, vMotion, vSAN, VXLAN, and data centre connection settings.

Make note of the component IP allocations, as these are required later in the bring-up process.

6

7

Note : All components are configured to use the SDDC Manager VM for DNS and NTP services. SDDC Manager uses a piece of software called Unbound for name resolution and uses a forwarder address for external name resolution.
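For context, an Unbound forwarder is configured roughly as follows. This is an illustrative fragment, not the actual configuration shipped on the SDDC Manager VM, and the forwarder address is a placeholder:

```
# /etc/unbound/unbound.conf -- illustrative fragment only
server:
    interface: 0.0.0.0
    access-control: 192.168.0.0/16 allow

forward-zone:
    name: "."                 # forward anything not answered locally
    forward-addr: 10.0.0.53   # placeholder external DNS forwarder
```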

Once the procedure has completed, you will be prompted to perform password rotation, which changes the passwords on all components.

9

To view all the component passwords, you can SSH to the SDDC Manager VM and run the following commands:

cd /home/vrack/bin/

./vrm-cli.sh lookup-password

At this stage the system should be built and you can log into vCenter.

All screenshots taken from the VMware Cloud Foundation Overview and Bring-Up Guide.

Further Reading

http://pubs.vmware.com/sddc-mgr-12/index.jsp#com.vmware.evosddc.via.doc_211/GUID-71BE2329-4B96-4B18-9FF4-1BC458446DB2.html

VxRack SDDC Overview (Part 1)

I have been doing a lot of investigative work recently with VxRack SDDC and I would like to give you an overview of the platform architecture and how it all fits together.

VxRack SDDC is a hyper-converged platform from Dell EMC that offers scalability and flexibility and is aimed at large and extra-large customers with large numbers of VMs and users. The solution is co-engineered by Dell EMC and VMware and is made up of pre-loaded software with compute, storage and networking components.

The solution is powered by VMware Cloud Foundation (vCF) and provides an easy path to building out a VMware software-defined data center.

VMware Cloud Foundation includes :

  • VMware vSphere
  • VMware NSX
  • VMware vSAN
  • VMware SDDC Manager

At the center of the solution are SDDC Manager and lifecycle management (LCM). These components enable fully automated deployment, configuration, and patching and upgrades of components.

Current Minimum Config

8 nodes

Current Maximum Config

24 nodes in each rack (either all-flash or all-hybrid)

8 cabinets max (total of 192 nodes)

Networking 

If you are purchasing just one rack, there are two Cisco Nexus 9372 switches that function as the TOR switches and a Dell S3048 switch running Cumulus Linux that functions as the management switch. The management switch is used for OOB connections, and the HMS (hardware management service) is also installed on the management switch.

For multiple racks, you must have Cisco Nexus 9332 spine switches in the second rack.

Management

The first four servers in each physical rack are used for the management domain. SDDC Manager creates a single vSAN datastore spanning all hosts in the cluster.

mgt.png

Screenshot taken from VMware Cloud Foundation Overview and Bring-Up Guide

The management domain contains the following VMs :

  • SDDC Manager
  • NSX Manager
  • vRealize Operations
  • vRealize Log Insight
  • vCenter Appliance
  • Two external PSCs

If you have multiple racks, there is a management domain, including an SDDC Manager, on each rack. The SDDC Manager on the first rack is the primary node and the others are secondary nodes. If the primary SDDC Manager node fails, one of the secondary nodes will take over.

All racks are part of the same SSO domain and utilize the PSCs located on the first rack; PSCs are not deployed on the second and subsequent racks.

vRealize Operations is deployed on each rack but vRealize Log Insight is only deployed on the first rack.

Next we will take a look at the VIA (VMware Imaging Appliance) and the system bring-up procedure….

Further Reading on VxRack SDDC

https://www.emc.com/en-us/converged-infrastructure/vxrack-system-1000/index.htm

http://pubs.vmware.com/sddc-mgr-12/index.jsp#com.vmware.evosddc.via.doc_211/GUID-71BE2329-4B96-4B18-9FF4-1BC458446DB2.html

 

 

 

EHC available on VxRail

Support for EHC on VxRail was introduced as part of EHC release 4.1.1. This marks a watershed moment in EHC’s evolution, as it is now available to customers of all sizes and also features an automated installation (currently available only on VxRail).

Supported services in this release include the following :

  • BaaS (backup as a service)
  • Encryption as a service
  • Single site protection

There is a four-host minimum configuration, and the traditional EHC pods (Core, NEI and Automation) are created as resource pools in the VxRail cluster. The reason for requiring four hosts instead of three is to ensure that you can suffer the loss of a storage node without affecting storage integrity: the fourth host gives vSAN somewhere to rebuild the affected components after a failure.

EHC on VxRail allows you to start small and grow your hybrid cloud environment as your needs increase. It is simple, scalable and efficient.

The diagram below depicts the EHC architecture on a VxRail appliance.

Capture

If you want to see how hybrid cloud can be of benefit to you, there is further documentation available here :

https://www.emc.com/en-us/solutions/cloud/enterprise-hybrid-cloud.htm

https://community.emc.com/community/connect/everything-cloud

https://www.emc.com/collateral/technical-documentation/h15928r-ehc-4-1-1-concepts-architecture-sg.pdf

https://www.emc.com/collateral/technical-documentation/h15927r-ehc-4-1-1-reference-architecture-ra.pdf