Cloud Foundation 50 Concepts - Part IV
And finally, this is the last part of the VCF 50 Concepts series, but not the end, because much more is coming.
If you missed any of the first three posts, it is better to look at Parts I, II, and III first.
Let's get started!
Token ID: Each SDDC Manager workflow is based on several tasks and subtasks, and every task has its own Token ID. When it comes to troubleshooting, it is critical to identify the Token ID of the failed subtask because it makes it much easier to locate the source of the problem in the relevant log files.
You can find the Token ID in the Tasks view of the SDDC Manager UI by expanding the workflow task and identifying the failed subtask.
opID: This is the Operation ID for an entire task. You can find the opID associated with a particular task in the logs, and each Token ID (subtask) is linked to the opID of its parent task.
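For example, once you have both identifiers, you can search the SDDC Manager logs for them. A minimal sketch, assuming the default SDDC Manager log locations (paths and file names can vary between VCF versions):

# Search the Domain Manager log for the failed subtask's Token ID (placeholder value)
grep -i "<token-id>" /var/log/vmware/vcf/domainmanager/domainmanager.log

# Search the Operations Manager log for the parent task's opID (placeholder value)
grep -i "<opid>" /var/log/vmware/vcf/operationsmanager/operationsmanager.log

Both <token-id> and <opid> are placeholders for the values you copied from the Tasks view.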
SOS Tool: The SOS Tool is a collection of command-line tools included in the SDDC Manager. You need to SSH to the SDDC Manager virtual appliance using the vcf user. Once you're connected, you run the following command: sudo /opt/vmware/sddc-support/sos --<option-name>
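For example, to list the available options or run a health check of the environment (a quick sketch; option names can differ slightly between VCF versions, so verify them with --help first):

# Show all supported SOS options for your VCF version
sudo /opt/vmware/sddc-support/sos --help

# Run a health check across the VCF components
sudo /opt/vmware/sddc-support/sos --health-check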
VCF Support Bundle: Thanks to the SDDC Manager SOS Tool (via SSH), it is possible to create a single super bundle of logs that includes all the critical logs you want to attach to your VMware Support ticket.
When using the SOS Tool, you can select a Workload Domain and components such as vCenter Server, ESXi, and NSX Manager, and their logs are collected into a single file. This is an extremely useful feature that makes troubleshooting even simpler.
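As a sketch, a log collection run scoped to a single Workload Domain could look like this (the domain name is a placeholder, and the available log options depend on your VCF version, so check sos --help first):

# Collect logs for one Workload Domain into a custom output directory
sudo /opt/vmware/sddc-support/sos --log-dir /tmp/vcf-logs --domain-name <workload-domain-name>

The bundle created under /tmp/vcf-logs is the single file you upload to the support ticket.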
API Explorer: Like any modern application, SDDC Manager has its own API, which makes VCF even more efficient by letting you automate many tasks. Depending on the VCF version, some tasks can only be performed via the API.
The API Explorer is an API client embedded in the SDDC Manager UI, and it is very useful because it's extremely easy to use; on top of that, it includes a series of examples and JSON validation that make our lives easier.
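The same API can also be consumed outside the API Explorer. A minimal sketch using curl (the hostname and credentials are placeholders; the /v1/tokens and /v1/domains endpoints are part of the public VCF API, but always check the API reference for your version):

# Request an access token from SDDC Manager (placeholder host and credentials)
curl -k -X POST https://sddc-manager.example.local/v1/tokens \
  -H "Content-Type: application/json" \
  -d '{"username": "administrator@vsphere.local", "password": "<password>"}'

# Use the returned access token to list the Workload Domains
curl -k https://sddc-manager.example.local/v1/domains \
  -H "Authorization: Bearer <access-token>"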
OSA: Stands for Original Storage Architecture, the historical vSAN architecture. OSA is available in all VCF versions, and the easiest way to recognize it is that the vSAN infrastructure is based on Disk Groups: a two-tier storage layout composed of a Cache tier and a Capacity tier.
ESA: Stands for Express Storage Architecture. Starting with vSphere 8 and VCF 5.1, it is the new, modern vSAN storage architecture based exclusively on NVMe disks and using a single tier called the Storage Pool.
vSAN ESA brings a long list of performance and efficiency improvements.
HCI Mesh: Allows a vSAN datastore to be shared and mounted as a remote vSAN datastore by a "client" cluster. An HCI Mesh datastore can be configured as the Principal Storage of a VI Workload Domain using compute-only hosts.
In VCF, HCI Mesh is not supported on Stretched Clusters, but remote vSAN datastores can be configured as Secondary Storage for any cluster in any Workload Domain.
AZ: Availability Zone is the VCF term for the equivalent of a Data Site in a vSAN Stretched Cluster. It defines a physical site with at least 4 ESXi hosts that is part of a domain running in stretched mode (with vSAN stretching enabled). VCF requires a minimum of 4 hosts per Data Site, and before you can stretch any VI Workload Domain, the Management Domain must be stretched first.
VCF Stretched Cluster - VLANs detailed
Witness Site: Apart from the AZs, a VCF stretched scenario requires a Witness Site that is independent of both Availability Zones. The Witness virtual appliance is deployed at the Witness Site.
The Witness Appliance virtual machine must run on top of an ESXi host, so running it on platforms such as AWS EC2 or Azure VM instances is not supported.
Witness Host: This virtual appliance acts as the quorum, or tie-breaker, in stretched architectures. The requirements for the Witness Host in VCF are the same as for vSAN outside of VCF. If you deploy an ESXi host at a third site to run the Witness, that ESXi host is associated with the Management Domain, not with the vSAN Stretched Cluster.
Remote Clusters: It is possible to deploy a VI Workload Domain with vSphere clusters backed by 3 to 16 ESXi hosts located at a remote site. The connectivity requirements are a minimum of 10 Mbps of bandwidth and a maximum of 100 ms of latency, and a single VI Workload Domain can contain local clusters or remote clusters, but not both. The vCenter Server instance runs in the Management Domain, as usual.
Image provided by VMware by Broadcom
Multiple or Isolated SSO Domains: Since VCF 5.0, it is possible to create a new, independent SSO domain when creating a VI Workload Domain. In older VCF versions, all Workload Domains in VCF (Management and VIs) shared, by default, a single SSO domain. This feature is especially useful in Cloud Director scenarios because it helps isolate environments and provides a more realistic multi-tenant model.
Dirty Host: After we remove a host from a vSphere cluster, for whatever reason, that host appears in the SDDC Manager as an unassigned host, but it is not available for creating a new cluster or expanding an existing one. The host shows a "Cleaning needed" message in the inventory, and the best way to resolve this is to decommission the host first, reinstall the ESXi hypervisor, and then commission the host again.
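If you prefer checking this via the API, here is a hedged sketch: the VCF API's hosts endpoint accepts a status filter (the hostname and token are placeholders, and the status values below come from the public VCF API, so verify them in the API Explorer for your version):

# List unassigned hosts that are ready to be used
curl -k "https://sddc-manager.example.local/v1/hosts?status=UNASSIGNED_USEABLE" \
  -H "Authorization: Bearer <access-token>"

# List unassigned hosts that still need cleanup before they can be reused
curl -k "https://sddc-manager.example.local/v1/hosts?status=UNASSIGNED_UNUSEABLE" \
  -H "Authorization: Bearer <access-token>"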
Cloud Inventory: When working with multiple VCF instances, the Cloud Inventory in the VMware Cloud Console shows the details of the connected VCF instances and their resources, such as the number of clusters, hosts, cores, and VMs, plus CPU, memory, and storage capacity.
The Cloud Inventory replaced the old, deprecated VCF Multi-Site Federation for multi-instance management.
Cloud Foundation Deployment Specialist: This is not a concept but an exam, and I highly recommend you prepare for it and go for it. It is challenging but not rocket science, and it will open plenty of doors to new opportunities. Attending the official course is not mandatory, but you must hold a current version of the VCP-DCV certification.
And that's it! We're done with the 50 VCF Concepts, but stay tuned, because in the next post we will cover some VCF design recommendations and deployment tips.