Dell EMC VMware Cloud Foundation 4.0 for PowerEdge MX7000 Deployment Guide January 2021 Rev.
Notes, cautions, and warnings
NOTE: A NOTE indicates important information that helps you make better use of your product.
CAUTION: A CAUTION indicates either potential damage to hardware or loss of data and tells you how to avoid the problem.
WARNING: A WARNING indicates a potential for property damage, personal injury, or death.
© 2019-2020 Dell Inc. or its subsidiaries. All rights reserved. Dell, EMC, and other trademarks are trademarks of Dell Inc. or its subsidiaries.
1 Audience and scope
This deployment guide provides step-by-step instructions for deploying VMware Cloud Foundation on the Dell EMC PowerEdge MX7000 modular platform. Any deviation from the listed configurations may negatively impact functionality. This guide assumes that deployment personnel have prerequisite knowledge of the products being deployed.
2 Overview
Deployment of VMware Cloud Foundation on the PowerEdge MX7000 modular platform provides a hyperconverged infrastructure solution that combines best-in-class Dell EMC hardware with core VMware products, including vSphere, vSAN, NSX, vRealize Log Insight, and SDDC Manager. Virtualization of compute, storage, and networking is delivered in a single package with VMware Cloud Foundation on PowerEdge MX7000.
3 Pre-deployment requirements
Topics:
● Management host
● Network services

Management host
The deployment of VMware Cloud Foundation is executed by a Cloud Builder VM that is deployed using an Open Virtualization Appliance (OVA). The virtual machine must be deployed on an ESXi host or cluster that is not a part of the Cloud Foundation cluster.
NOTE: Misconfiguration or lack of one of these services causes the validation portion of the installation to fail. The information pertaining to the network services is entered into the Cloud Builder Deployment Parameter Sheet. The parameter sheet is a spreadsheet that contains the details of the deployment and information specific to these prerequisites.

Domain Name Service
Domain Name Service (DNS) is required to provide both forward and reverse name resolution.
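Before starting the deployment, it is worth confirming forward and reverse resolution for each planned hostname. A quick sketch using nslookup follows; the hostname and address are illustrative values taken from the configuration table later in this guide:

nslookup sddc17-m01-nsx04.osevcf17.local    # forward lookup: name to IP
nslookup 100.71.101.134                     # reverse lookup: IP to name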
4 Validated components
VMware no longer maintains a VMware Compatibility Guide for Cloud Foundation. Because vSAN is an underlying requirement of Cloud Foundation, any hardware specified as a vSAN Ready Node is approved for Cloud Foundation.
Topics:
● Hardware components
● Software and firmware
● Software

Hardware components
The following hardware components were used in the validation of this solution.
Software and firmware
NOTE: The VMware Compatibility Guide (VCG) is the system of record for versions of certain types of firmware and drivers that are certified to be compatible with vSphere and vSAN. These include the server platform, vSAN disk controllers, and network interface cards. For more information on other components, see https://www.dell.com/support.

Software
This document is written for Cloud Foundation 4.0 running on VMware ESXi 7.0.
5 Hardware overview
This section provides additional information about the hardware platform used in the development of this deployment guide.
Figure 3. PowerEdge MX7000 chassis—front view

Back view of the PowerEdge MX7000 chassis
The back of the PowerEdge MX7000 chassis provides access to network and storage fabrics, management modules, fans, and power connections.
Figure 5. Logical view of the PowerEdge MX7000 chassis

Dell EMC PowerEdge MX740c compute sled
Dell EMC PowerEdge MX740c is a two-socket, full-height, single-width compute sled that offers high performance and scalability. It is ideal for dense virtualization environments and can serve as a foundation for collaborative workloads.
Figure 6. Dell EMC PowerEdge MX740c compute sled

Dell EMC PowerEdge MX5016s storage sled
The PowerEdge MX5016s storage sled delivers scale-out, shared storage within the PowerEdge MX architecture. The PowerEdge MX5016s sled provides customizable 12 Gb/s direct-attached SAS storage with up to 16 SAS hard drives or SSDs. Both the PowerEdge MX740c and the PowerEdge MX840c compute sleds can share drives with the PowerEdge MX5016s sled using the PowerEdge MX5000s SAS module.
Figure 7. Dell EMC PowerEdge MX5016s storage sled

Dell EMC PowerEdge MX9002m management module
The Dell EMC PowerEdge MX9002m management module controls overall chassis power and cooling, and hosts the OpenManage Enterprise-Modular (OME-M) console. Two external 1G-BaseT Ethernet ports enable management connectivity and allow additional PowerEdge MX7000 chassis to be connected into a single logical chassis. The PowerEdge MX7000 chassis supports two PowerEdge MX9002m management modules for redundancy.
Dell EMC Networking MX9116n Fabric Switching Engine
The Dell EMC Networking MX9116n Fabric Switching Engine (FSE) is a scalable, high-performance, low-latency 25 GbE switch purpose-built for the PowerEdge MX platform. The MX9116n FSE provides enhanced capabilities and cost-effectiveness for enterprise, mid-market, Tier 2 cloud, and Network Functions Virtualization (NFV) service providers with demanding compute and storage traffic environments.
NOTE: The MX7116n FEM cannot act as a stand-alone switch and must be connected to the MX9116n FSE to function.

Dell EMC Networking MX5108n Ethernet switch
The Dell EMC Networking MX5108n Ethernet switch is targeted at small PowerEdge MX7000 deployments of one or two chassis. Although not a scalable switch, it still provides high performance and low latency with a non-blocking switching architecture.
6 Physical layout
There are multiple configurations of Cloud Foundation on the PowerEdge MX7000 chassis described in this document. The Cloud Foundation software addresses the host servers by their IP addresses. Deploying compute sleds across multiple PowerEdge MX7000 chassis has no impact on the software as long as the networking is configured properly on the networking IO modules and the switches to which the PowerEdge MX7000 chassis connect.
Figure 14. Single PowerEdge MX7000 with MX5016s storage sled

Option 3—two PowerEdge MX7000 enclosures
● Two Dell EMC PowerEdge MX7000 enclosures
● Four Dell EMC PowerEdge MX740c compute sleds
● Four Dell EMC Networking MX5108n Ethernet switches

Figure 15.
Figure 17. Two PowerEdge MX7000 enclosures using Fabric Switching Engine

Cabling
PowerEdge MX5016s storage sleds are internally cabled, and the PowerEdge MX5000s SAS IOM has no impact on external cabling.

Cabling for a dual PowerEdge MX7000 enclosure configuration using Fabric Switching Engines
The following figures show the external cabling for a multiple PowerEdge MX7000 enclosure configuration when the MX9116n Fabric Switching Engines and MX7116n Fabric Expansion Modules are used.
Figures 18-20. External cabling for the dual PowerEdge MX7000 enclosure configuration with MX9116n Fabric Switching Engines and MX7116n Fabric Expansion Modules
7 Cloud Foundation and SDDC design considerations
VMware Cloud Foundation relies on a set of key infrastructure services that must be made available externally. You must configure these external services before you begin deployment.
NOTE: This section is universal for Cloud Foundation deployments regardless of hardware platform. The content in this section is also available in the VMware Cloud Foundation Planning and Preparation Guide and is included here for reference.
NOTE: If you plan to deploy vRealize Automation, Active Directory services must be available. For more information on AD configuration, see the vRealize Automation documentation.

Dynamic Host Configuration Protocol
Cloud Foundation uses Dynamic Host Configuration Protocol (DHCP) to automatically configure each VMkernel port of an ESXi host that is used as a tunnel endpoint (TEP) with an IPv4 address. One DHCP scope must be defined and made available for this purpose.
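For illustration, a minimal dnsmasq configuration that serves such a scope is sketched below. The subnet, range, and gateway are assumptions for this sketch, not values from this guide; size the range for at least one address per host TEP.

# /etc/dnsmasq.d/nsx-tep.conf - example DHCP scope for NSX-T host TEPs
dhcp-range=172.16.254.100,172.16.254.199,255.255.255.0,12h
# Default gateway for the (hypothetical) TEP VLAN
dhcp-option=option:router,172.16.254.253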
Network pools
Cloud Foundation uses a construct called a network pool to automatically configure VMkernel ports for vSAN, NFS, and vMotion. Cloud Foundation uses an Internet Protocol Address Management (IPAM) solution to automate the IP configuration of VMkernel ports for vMotion, vSAN, and NFS (depending on the storage type being used). When a server is added to the inventory of Cloud Foundation, it goes through a process called host commissioning.
● Virtual infrastructure layer—components that provide for the basic foundation of the Cloud Foundation solution.
● Operations management layer—components used for day-to-day management of the environment, for example, vRealize Operations.
● Cloud management layer—services that use the infrastructure layer resources, for example, vRealize Automation.
Table 5. Configuration for the virtual infrastructure layer (continued)

Hostname           DNS Zone         IP Address        Description
sddc17-m01-nsx04   osevcf17.local   100.71.101.134    NSX-T Virtual Appliance 3
vcfmgmthost01      osevcf17.local   100.71.101.171    Management Host 1
vcfmgmthost02      osevcf17.local   100.71.101.172    Management Host 2
vcfmgmthost03      osevcf17.local   100.71.101.173    Management Host 3
vcfmgmthost04      osevcf17.local   100.71.101.       Management Host 4
8 Networking requirements
This section covers the networking requirements from both the Cloud Foundation software perspective and the networking hardware connectivity perspective. It also briefly describes the options for configuring networks on a Dell EMC PowerEdge MX7000 chassis. The actual networking configuration procedures are described in later sections.
assigning tagging rules, QoS, and other settings typical of this type of deployment. The other approach is to use Dell EMC SmartFabric, which provides a simplified, reusable way to configure the switches in the PowerEdge MX7000 chassis and assign those configurations to the compute resources in the chassis. The advantage of the manual configuration method is that every aspect of the switch configuration is available.
separate network connections that are managed by the Cloud Foundation stack. Because of the limited number of uplink ports, the MX5108n should not be used where NSX-T Edge node capabilities are desired. Deploying Dell EMC Networking MX9116n Fabric Switching Engines and Dell EMC Networking MX7116n Fabric Expansion Modules is a different process. The FSEs are installed into the A1 fabric of the first chassis and the A2 fabric of the second chassis.
9 Manual switch configuration
This section describes the configuration of the MX9116n Fabric Switching Engine (FSE) switches. Each PowerEdge MX7000 has one MX9116n and one MX7116n in the A fabric. The MX9116n in chassis 1 should be placed in the A1 slot and the MX9116n in chassis 2 should be placed in the A2 slot. This distributes the fabric's switching engines across both chassis so that, if one MX9116n is lost, only half of the fabric is impacted.
interface vlan2712
 description 2712-Uplink2
 no shutdown
 mtu 9216
interface vlan2713
 description Edge-Overlay
 no shutdown
 mtu 9216

Uplink and VLTi ports
VLT synchronizes Layer 2 table information between two switches and enables them to appear as a single logical unit from outside the VLT domain. The VLT interconnect (VLTi) between two Dell EMC Networking MX9116n switches is a port group that is generated by configuring a VLT domain and specifying the discovery interfaces.
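For reference, a minimal OS10 sketch of the VLT domain definition follows. The domain ID, backup destination address, and discovery-interface range are assumptions for illustration and must match your management addressing and VLTi cabling:

! VLT domain on the first FSE; the peer switch gets the mirrored configuration
vlt-domain 1
 backup destination 100.71.101.62
 discovery-interface ethernet1/1/37-1/1/40

Once both peers are configured, the VLT state can be checked with the show vlt command, which produces output similar to the following: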
Role priority                : 32768
VLT MAC address              : aa:bb:cc:11:11:11
IP address                   : fda5:74c8:b79e:1::2
Delay-Restore timer          : 90 seconds
Peer-Routing                 : Disabled
Peer-Routing-Timeout timer   : 0 seconds
VLTi Link Status
    port-channel1000         : up

VLT Peer Unit ID   System MAC Address   Status   IP Address            Version
--------------------------------------------------------------------------------
1                  3c:2c:30:80:aa:80    up       fda5:74c8:b79e:1::1   2.
Here are the port channels that are created on switch two:

MX9116-A1# show running-configuration interface port-channel 1
!
interface port-channel1
 description "Uplink to DataCenter"
 no shutdown
 switchport mode trunk
 switchport access vlan 1
 switchport trunk allowed vlan 96,1711-1714,2711-2713
 mtu 9216
 vlt-port-channel 1

Configure the host facing ports
To support multiple VLANs, you must place the server facing ports in trunk mode. A sketch of such a port configuration follows.
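The following is a minimal sketch of a server-facing interface; the port number is an assumption, and the VLAN set mirrors the uplink configuration above. Adjust both to your deployment:

! Server-facing port for an ESXi compute sled (ethernet1/1/1 is assumed)
interface ethernet1/1/1
 description "ESXi-sled-1"
 no shutdown
 switchport mode trunk
 switchport access vlan 1
 switchport trunk allowed vlan 96,1711-1714,2711-2713
 mtu 9216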
10 SmartFabric network configuration
The PowerEdge MX9002m management module hosts the OpenManage Enterprise-Modular (OME-M) console. Creation and deployment of SmartFabric topologies is facilitated through the OME-M console in conjunction with the MX9116n switch operating system. SmartFabric is a web-based mechanism for creating a reusable networking template that can be applied to a PowerEdge MX7000 chassis, its IO modules (switches), and its compute sleds.
Steps
1. After the PowerEdge MX9002m modules have been cabled together, log in to the OME-Modular web interface of the chassis that will be the lead chassis of the new chassis group.
2. From the Chassis Overview menu, click Configure, and then select Create Chassis Group.
3. Enter the group name and group description.
NOTE: The group name must be one word without any spaces.
4. Select chassis onboarding permissions that propagate to each chassis that is added to the group, and then click Next.
e. Click Finish.
5. Repeat steps 1-4 to create the remaining six VLANs and any other VLANs required. A sample completed configuration is shown in the following figure:

Figure 24. VLAN configuration

Create SmartFabric
Creation of the SmartFabric depends on the IOM selected and the number of PowerEdge MX7000 chassis to be installed.
Figure 25. Create SmartFabric using MX9116n Fabric Switching Engine IOMs
6. From the Chassis-X list, select the first PowerEdge MX7000 chassis containing an MX9116n FSE.
7. From the Switch-A list, select Slot-IOM-A1.
8. From the Switch-B list, select Slot-IOM-A2.
9. Click Next.
10. On the Summary page, verify the proposed configuration, and then click Finish.
The fabric displays a health error, which is resolved in the next section by adding uplinks to your fabric.
Figure 26. Topology of the SmartFabric using MX9116n Fabric Switching Engine IOMs

Configure jumbo frames
About this task
Cloud Foundation requires jumbo frames on all links.
NOTE: By default, SmartFabric does not configure the jumbo MTU (frame size) on switch ports. To configure jumbo frames, set the MTU (frame size) using the following procedure:
Steps
1. From the Devices menu, click I/O Modules.
2. Select the IO Module.
3. From the IOM banner menu, click Hardware.
4. Click Port Information.
5.
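After the MTU is set, jumbo connectivity can be verified end to end from the ESXi Shell with vmkping. The VMkernel interface and peer address below are placeholders:

# -d: do not fragment; -s 8972: 9000-byte MTU minus 28 bytes of IP/ICMP headers
vmkping -I vmk1 -d -s 8972 172.16.254.102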
Steps
1. Open the OME-M console.
2. From the Configuration menu, click Deploy.
3. In the Center pane, click Create Template, and then click From Reference Device.
4. In the Create Template window, complete the following steps:
a. In the Template Name box, enter MX740c with Intel mezzanine.
b. Optionally, enter a description in the Description box.
c. Click Next.
d. In the Device Selection pane, click Select Device.
e. In the Select Devices window, select Sled-1 from Chassis-1.
f. Click Finish.
Results
The interfaces on the switches are updated automatically. SmartFabric configures each interface with an untagged VLAN and tagged VLANs.
11 Deploy ESXi to cluster nodes
Only perform the steps in this section if the compute sleds were not pre-installed with ESXi 7.0. If the compute sleds were pre-installed with ESXi 7.0, skip ahead to Configure ESXi settings—using DCUI. The following are the steps to install VMware ESXi on each of the PowerEdge MX740c hosts that are part of the management cluster. This guide covers the steps to install VMware ESXi remotely using the iDRAC Virtual Console with Virtual Media.
NOTE: When the virtual console is launched for the first time, repeating this step may be necessary due to a browser pop-up blocker.
6. The mapping screen for the virtual media is displayed on the Virtual Media menu.
7. In the Map CD/DVD section, click Choose File.
8. Browse and select the required Dell EMC customized ESXi image (ISO image) file.
9. Click Map Device, and then click Close.
10. From the Virtual Console menu, click Boot, and then click Virtual CD/DVD/ISO.
11. Click Yes.
12.
Results
VMware ESXi is installed on the server.

Configure ESXi settings—using DCUI
About this task
The Direct Console User Interface (DCUI) is a menu-based interface that is accessed from the host console and used to configure ESXi running on vSphere hosts.
Steps
1. After the server reboots and fully loads ESXi, press F2 to log in to the DCUI.
2. Enter the credentials that were created during the ESXi installation, and then press Enter.
3.
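The remaining DCUI steps set the management network, DNS, and hostname. The same settings can also be applied from the ESXi Shell; the following is a minimal sketch in which every address and name is a placeholder, not a value from this guide:

# Static IPv4 address on the management VMkernel port
esxcli network ip interface ipv4 set --interface-name=vmk0 --type=static --ipv4=100.71.101.101 --netmask=255.255.255.0
# Default gateway, DNS server, and fully qualified hostname
esxcli network ip route ipv4 add --network=default --gateway=100.71.101.253
esxcli network ip dns server add --server=100.71.101.11
esxcli system hostname set --host=sddc17-m01-esx01 --domain=osevcf17.local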
Configure ESXi settings using web interface
Prerequisites
Before configuring ESXi settings using the web interface, you must configure the ESXi settings using the DCUI. For more information, see Configure ESXi settings—using DCUI.
Steps
1. Using a web browser, go to the ESXi host-level management web interface at https://<ESXi_host_IP>/ui.
2. Enter the credentials that were created during the ESXi installation, and then click Log in.
3. In the Navigator pane, click Networking.
Figure 30. ESXi web interface—Edit time configuration page
11. In the Manage pane, select the Services tab. The resulting page is shown in the following figure:

Figure 31. ESXi settings web interface—Manage pane
12. Right-click the ntpd service and set the policy to Start and stop with the host.
Next steps
Once the policy is set, start the ntpd service. If the ntpd service is already running, restart it.
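If you prefer the ESXi Shell over the web interface, the same NTP setup can be sketched as follows. This assumes an NTP server at a placeholder address and relies on the host's standard ntp.conf and init script:

# Point ntpd at the NTP server (placeholder address)
echo "server 100.71.101.12" >> /etc/ntp.conf
# Start ntpd with the host, then (re)start the service
chkconfig ntpd on
/etc/init.d/ntpd restart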
12 Cloud Builder and SDDC deployment
The primary software installation tool for Cloud Foundation 4.x is Cloud Builder. It is delivered as a virtual appliance in the standard OVA format. This section describes the steps to deploy the OVA. The Cloud Builder VM is a temporary tool that facilitates the deployment of Cloud Foundation; it can be discarded after the deployment.
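As an alternative to the vSphere Client wizard used in the steps below, the appliance can be deployed from the command line with VMware ovftool. The sketch below is illustrative only: the file name, network name, target locator, and especially the guestinfo property names are assumptions; list the OVA's real properties first by running ovftool against the file.

# Inspect the OVA to confirm its configurable properties (recommended first step)
ovftool VMware-Cloud-Builder-4.0.0.0.ova

# Hypothetical deployment; adjust names, networks, and properties to your environment
ovftool --acceptAllEulas --name=cloudbuilder \
  --datastore=mgmt-datastore --net:"Network 1"=Mgmt-PG \
  --prop:guestinfo.hostname=cloudbuilder.osevcf17.local \
  --prop:guestinfo.ip0=100.71.101.180 \
  VMware-Cloud-Builder-4.0.0.0.ova \
  vi://administrator%40vsphere.local@mgmt-vcenter.osevcf17.local/DC/host/MgmtCluster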
Figure 32. OVF customize template page
11. Review the Ready to Complete final configuration page, and then click Finish.
12. In the Recent Tasks pane, check the OVA deployment status. When the OVA deployment is complete, start the Cloud Builder VM.

Check Time Synchronization
After the Cloud Builder VM is started, it takes some time for all the services to start and for time synchronization to complete.
13 VCF Deployment using Cloud Builder
In the previous section, you deployed the Cloud Builder virtual appliance. In this section, the software within the virtual machine is used to validate the target environment and deploy the entire Cloud Foundation stack.
NOTE: Before proceeding with the Cloud Builder validation process, take a snapshot of your Cloud Builder VM.
Figure 33. Cloud Builder web interface
3. Log in using the credentials that you specified during OVA deployment.
4. Click Check All to review the checklist of pre-bring-up steps, confirm that all the steps are completed, and then click Next.
5. Review the EULA, and if you agree, click Agree to End User License Agreement, and then click Next.
6. If you have not obtained and completed the Cloud Foundation Information Spreadsheet, click Download Deployment Parameter Sheet.
Management Workload tab
License keys are required for the following items:
● ESXi hosts
● vSAN
● vCenter
● NSX-T
● SDDC Manager Appliance

Users and Groups tab
In the Users and Groups tab, you can set the passwords for your initial Cloud Foundation components.
CAUTION: Take care on this page: if any of the passwords do not meet the indicated specifications, you must redeploy your Cloud Builder VM, unless you created a snapshot after you created your VM.
Steps
1. Turn on the Cloud Builder VM.
2. Wait for the VM to finish booting and load the application stack.
3. Using a web browser, go to the Cloud Builder web interface at https://<Cloud_Builder_IP>.
4. Log in using the credentials that you specified during OVA deployment.
5. Review the EULA, and if you agree, click Agree to End User License Agreement, and then click Next.
6. Select VMware Cloud Foundation. Be sure not to select VMware Cloud Foundation on VxRail.
7.
Steps
1. After you have completed the deployment parameter spreadsheet, click Upload, select the file, and then click Open. A message is displayed to acknowledge successful upload of the parameter sheet.
2. Click Validate and monitor progress on the Configuration File Validation page.

Figure 35. Configure Cloud Builder validation
NOTE: Validation may take 15 minutes or more. However, if there are issues, such as the DNS server being down or a wrong IP address in the parameter sheet, validation may take longer.
NOTE: If an unrecoverable failure occurs during SDDC bring-up, you must resolve the root cause, wipe the Cloud Foundation target servers including all disk partitions, and then restart from the Deploy ESXi to cluster nodes section.
3. The SDDC bring-up process is complete when Cloud Builder reports that the SDDC has been successfully created. At that point, Cloud Foundation is successfully deployed.
4.
14 Post-install validation
Topics:
● Cloud Foundation Cluster Verification

Cloud Foundation Cluster Verification
After installing Cloud Foundation, perform the steps in the following sections to verify that the components are installed and available.

SDDC Manager
Log in to SDDC Manager using a web browser at https://<SDDC_Manager_IP>. The SSO user ID is administrator@vsphere.local and the password is the one that you specified during installation.
NOTE: Use the domain vsphere.local.
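Beyond the UI login, the SDDC Manager public API provides a quick scripted check that the deployment is responding. A hedged sketch with curl follows; the endpoint paths are per the VCF 4.x public API, and the host and credentials are placeholders:

# Request an API access token
curl -k https://<SDDC_Manager_IP>/v1/tokens -H "Content-Type: application/json" \
  -d '{"username": "administrator@vsphere.local", "password": "<password>"}'
# Use the returned accessToken to list the commissioned hosts
curl -k https://<SDDC_Manager_IP>/v1/hosts -H "Authorization: Bearer <accessToken>"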
Validate that the VMs created during deployment are all up and running. Validate that vSAN is running and available. Look at the disk groups that are created and ensure that they are consistent with the disks available in the hosts. Look at the virtual networking components that are deployed to vCenter.

Figure 38. vCenter dashboard

NSX Manager
Log in to NSX Manager through a web browser using the admin credentials set in your parameter sheet.