DELL POWEREDGE VRTX AND M-SERIES COMPUTE NODES CONFIGURATION STUDY

When deploying or upgrading IT infrastructure for existing office space or a new remote office, businesses face real challenges in finding a solution that is easy to configure, provides the necessary hardware resources, is highly available, and minimizes unnecessary maintenance time and complexity.
WHY UPGRADE TO THE DELL POWEREDGE VRTX?

The drawbacks of a legacy hardware solution are many. Configuring such a solution is time-consuming and may cause prolonged downtime during the setup phase. Aging hardware components may fail when you try to power them on, you will likely have to reinstall operating systems from scratch, and compatibility issues often arise. If the legacy hardware lacks built-in redundancy or high-availability features, a failed component can cause unexpected downtime.
Our test scenario simulates setting up a high-availability cluster at a remote site with functionality similar to what the Dell PowerEdge VRTX can provide. We compare this with a legacy hardware solution that used a variety of repurposed hardware to meet a similar goal.
An added challenge of repurposing older hardware is that you are more likely to experience hardware component issues from the very beginning. In our testing, the time spent configuring the legacy hardware solution included approximately 1 hour of troubleshooting hardware issues, including memory errors, boot drive errors, and installing an additional NIC into the legacy tower servers. Ongoing reliability is also a concern with older hardware.
EASIER TO MANAGE

As Figure 4 shows, deploying the Dell PowerEdge VRTX used just a single management tool, while the legacy hardware solution required six separate vendor-specific management tools, Web-based GUIs, or direct physical connections.

Figure 4: Comparison of Dell PowerEdge VRTX management vs. management with a legacy hardware solution.

A piecemeal or separate-component solution means dealing with multiple management tools, each potentially from a different vendor.
Dell OpenManage Essentials (OME) management software provides administrators with an intuitive interface that lets them monitor device status, update firmware and drivers, create their own command-line tasks, and handle other management functions, all from a single portal. Dell OME is free to download from the Dell Support Site.
For more information about the Dell PowerEdge VRTX, visit www.dell.com/poweredge.

About the Dell PowerEdge M620 compute nodes

The Dell PowerEdge M620, a half-height compute node, has features optimized for performance, density, and energy efficiency.

Processors. The Dell PowerEdge M620 is powered by two Intel® Xeon® E5-2600-series processors, which incorporate the latest processor technology from Intel. These powerful processors provide the performance you need for your essential mainstream tasks.
To learn more about VMware vSphere 5, visit www.vmware.com/products/vsphere/overview.html.

IN CONCLUSION

When considering whether to upgrade to the new Dell PowerEdge VRTX or repurpose older hardware, the advantages of new hardware are clear. Not only do you get newer hardware that is faster and better equipped to handle the increasing demands of today's business applications and workloads, but you also benefit from advances that make deployment and management easier than ever.
APPENDIX A – DETAILED CONFIGURATION INFORMATION

Figure 5 provides detailed configuration information about the Dell PowerEdge VRTX solution we set up.
Figure 5: Detailed configuration information for the Dell PowerEdge VRTX solution, covering the system power management policy; CPU vendor, model, stepping, socket type, core frequency, and cache sizes; memory modules per node and total RAM; RAID controller and firmware; hard drives; and network adapters.
Figure 6 provides detailed configuration information about the tower systems, the HP ProLiant ML310 G5 and HP ProLiant ML110 G6, in our legacy hardware solution.
Figure 6: Detailed configuration information for the legacy tower servers (HP ProLiant ML310 G5 and HP ProLiant ML110 G6), covering for each server the memory modules (vendor and model, type, speed, timing, size, and count), RAID controller and cache size, installed hard drives, and network adapters.
Figure 7 provides detailed configuration information about the storage and switch in our legacy hardware solution.
APPENDIX B – CONFIGURING THE DELL POWEREDGE VRTX SOLUTION

Configuring the VRTX network
1. Open a Web browser, and enter the address listed for the CMC IP on the front LCD display.
2. Log in with the appropriate credentials.
3. Expand I/O Module Overview.
4. Click Gigabit Ethernet.
5. Click the Properties tab.
6. Click the Launch I/O Module GUI button.
7. Log in with the username root and the password calvin.
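The same chassis information is also reachable from the command line with remote racadm. The following minimal Python sketch is our own illustration rather than part of the tested procedure; the CMC address 192.168.0.120 is a placeholder, the root/calvin credentials are the factory defaults, and the racadm utility must already be installed on the management workstation.

import subprocess

# Placeholder CMC address and default credentials; substitute your own values.
CMC = ["racadm", "-r", "192.168.0.120", "-u", "root", "-p", "calvin"]

def racadm(*args):
    """Run a remote racadm command against the VRTX CMC and return its output."""
    result = subprocess.run(CMC + list(args), capture_output=True,
                            text=True, check=True)
    return result.stdout

print(racadm("getsysinfo"))   # chassis and CMC summary information
print(racadm("getmodinfo"))   # module inventory, including the I/O module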
b. Click OK to confirm the message box indicating Operation Successful. 6. Configure virtual disks: a. Click Storage → Virtual Disks. b. On the Virtual Disks tab, click Create. i. For Choose a virtual disk type, select the appropriate RAID level. For our testing, we selected RAID 10. ii. Scroll down to select the appropriate physical disks (for our testing, 0:0:0 – 0:0:3). iii. Accept the default size, and click Create Virtual Disk. iv. Click OK to confirm the message box indicating Operation Successful. c.
b. Click Apply. c. To confirm server control action, click OK. d. To confirm operation was successful, click OK. 6. Click the Properties tab. 7. Click Launch Remote Console. 8. On new browser page, click Continue to website (not recommended) if prompted. a. If a message appears indicating a pop-up was blocked, select Always allow pop-ups from this site. b. Close the browser tab for the iDRAC. c. Click Launch Remote Console. d.
Configuring VRTX vSphere environment We performed all downloads prior to executing these steps. Download times will vary. 1. Configure Slot 1 host: a. Open a new vSphere client session, and connect to the IP address for the server in Slot 1. i. Log in with root and the password you configured during ESXi installation. b. Add shared storage. i. Click the Host configuration tab. ii. Click the Storage menu. 1. Click Add Storage… 2. Select Disk/LUN, and click Next. 3. Select Local Dell Disk, and click Next. 4.
4. Assign the network label as vMotion, check the box for Use this port group for vMotion, and click Next. 5. Enter the IP address and subnet mask. For our testing, we used 192.168.0.51 for the IP address and 255.255.255.0 for the subnet mask. Click Next. ii. Click Finish to create the network. 3. Install and configure vCenter Server: a. Install the vCenter appliance: i. In the vSphere client, select File → Deploy OVF Template. ii. Browse to the location of the vCenter Server Appliance .
8. Do not check Enable Lockdown Mode, and click Next. 9. Click Next. 10. Click Finish. 11. Repeat steps 1 through 10 for the ESXi host in Slot 2. 4. Set up a high availability cluster: a. Create a vSphere HA cluster: i. Right-click the VRTX-01 Datacenter. ii. Select New Cluster. iii. Name the cluster. For our testing, we used VRTX-01-C1. iv. Check the box for Turn On vSphere HA, and click Next. v. Accept all vSphere HA defaults, and click Next. vi. Accept Virtual Machine Options defaults, and click Next.
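For administrators who prefer scripting, the same HA cluster can be created programmatically with VMware's pyVmomi Python SDK. The sketch below is illustrative only and makes several assumptions: the vCenter address and credentials are placeholders, the VRTX-01 datacenter already exists (as in step i above), and certificate checking is disabled because this is a lab environment.

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only: skip certificate validation
si = SmartConnect(host="vcenter.example.local", user="root",
                  pwd="vcenter-password", sslContext=ctx)  # placeholders
content = si.RetrieveContent()

# Find the existing VRTX-01 datacenter created earlier in this appendix.
dc = next(e for e in content.rootFolder.childEntity
          if isinstance(e, vim.Datacenter) and e.name == "VRTX-01")

# Create the cluster with vSphere HA turned on, accepting the other defaults.
spec = vim.cluster.ConfigSpecEx(
    dasConfig=vim.cluster.DasConfigInfo(enabled=True))
cluster = dc.hostFolder.CreateClusterEx(name="VRTX-01-C1", spec=spec)
print("Created cluster:", cluster.name)

Disconnect(si)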
APPENDIX C – CONFIGURING THE TRADITIONAL ENVIRONMENT

1. Set up network: a. Connect power. b. Connect to the switch Web interface: i. Attach to the switch with a network cable. ii. Assign the workstation NIC the IP address 192.168.0.25. iii. Open a Web browser, and go to 192.168.0.239 (the default address). iv. In the password field, type password and press Enter. c. Configure the vMotion VLAN: i. In the left pane, click VLAN. ii. Select the radio button for IEEE 802.1Q VLAN, and click OK. iii.
f. Connect monitor and keyboard to server. 3. Configure the HP ML110 G6 hardware: a. Power on the server. b. Insert installation media into optical media drive. c. Change the boot disk order in System Setup: i. When prompted during POST, press F10 to enter System Setup. ii. Select Advanced. 1. Select Processor Power Efficiency, and press Enter. 2. Select Performance, and press Enter. iii. Select Boot from the top menu. iv. Change the boot order. 1. CD-ROM Drive 2. Removable Devices 3. Hard Drive a. USB b.
ii. Select Enable SSH, and press Enter. h. Press Esc to log out. i. Open a vSphere client on the management workstation and connect to the IP address assigned to the ESX host. i. Log in with the appropriate credentials. ii. Check the box for install this certificate… and click Ignore. iii. Click OK to dismiss the license message. iv. Click the configuration tab for the ESX host. j. Configure vMotion network: i. Connect an unused network adapter port on the host to an available port on the switch.
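The vMotion port group and VMkernel interface in step j can also be created through the vSphere API instead of the client GUI. The following pyVmomi sketch is an illustration only: it assumes host is a vim.HostSystem object already retrieved from the inventory, that the port group hangs off the default vSwitch0, and that the caller supplies the IP address and netmask.

from pyVmomi import vim

def add_vmotion_vmkernel(host, ip, netmask, vswitch="vSwitch0"):
    """Create a vMotion port group and VMkernel NIC on an ESXi host (sketch)."""
    net_sys = host.configManager.networkSystem

    # Create the vMotion port group on the existing standard vSwitch.
    pg_spec = vim.host.PortGroup.Specification(
        name="vMotion", vswitchName=vswitch, vlanId=0,
        policy=vim.host.NetworkPolicy())
    net_sys.AddPortGroup(portgrp=pg_spec)

    # Add a VMkernel NIC with a static IP on that port group.
    nic_spec = vim.host.VirtualNic.Specification(
        ip=vim.host.IpConfig(dhcp=False, ipAddress=ip, subnetMask=netmask))
    vmk_device = net_sys.AddVirtualNic("vMotion", nic_spec)

    # Tag the new VMkernel NIC for vMotion traffic (the GUI checkbox).
    host.configManager.virtualNicManager.SelectVnicForNicType(
        "vmotion", vmk_device)

# Example usage with the vMotion address we used later for the ML310 G5 host:
# add_vmotion_vmkernel(host, "192.168.0.153", "255.255.255.0")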
c. Close case. d. Connect power. e. Connect an unused network adapter port on the server to an unused port on the switch. Use only ports numbered 1 through 12. f. Connect monitor and keyboard to server. 7. Begin ESXi host configuration: a. Power on the server. b. Insert installation media into optical media drive. c. Change the boot disk order in System Setup. i. When prompted during POST, press F9 to enter System Setup. ii. Select System Options, and press Enter. 1.
1. Select the network adapter with the connected status, and press Enter. 2. Select DNS configuration. a. Select Use the following DNS server address and hostname: b. Type a new hostname (example: HP-ESX02), and press Enter. 3. Press Esc. 4. Press Y to apply changes and restart the management network. 5. Note the IP address assigned to the ESX host. iv. Select Troubleshooting Options. 1. Select Enable ESXi Shell, and press Enter. 2. Select Enable SSH, and press Enter. v. Press Esc to log out. f.
iii. Click the Storage Adapters menu. 1. Click Add. a. Click OK to Add Software iSCSI Adapter. b. Click OK to confirm. 2. Select vmhba36 (iSCSI). Copy the iSCSI name (WWN number) from the details pane and paste into a notepad file for later retrieval. 8. Prepare the HP ML310 G5 ESXi host: a. Open case of tower. b. Locate internal USB port and insert 4GB USB drive. c. Close case. d. Connect power. e. Connect an unused network adapter port on the server to an unused port on the switch.
iv. Select US Default Keyboard Layout, and press Enter to confirm selection. v. Type the new root password and confirm it by typing it again. Press Enter. vi. Press F11 to begin installation. vii. After completion, press Enter to reboot, remove the installation media, and wait for return to service. e. When ESXi finishes loading, press F2. i. Log in with appropriate credentials. ii. Select Configure Management Network. 1. Select the network adapter with the connected status, and press Enter. 2.
1. Click Add Networking… 2. Select VMkernel, and click Next. 3. Select the unused connected network adapter, and click Next. 4. Assign the network label as Storage. Leave all checkboxes unchecked. Click Next. 5. Enter an appropriate IP address and subnet mask. For our testing, we used 192.168.1.152 for the IP address with a subnet mask of 255.255.255.0. Click Next. 6. Click Finish to create the network. iii. Click the Storage Adapters menu. 1. Click Add. a. Click OK to Add Software iSCSI Adapter. b.
2. Select Intel® Virtualization Technology, and press Enter. 3. Select Enabled, and press Enter. v. Press Esc four times to exit the setup utility. vi. Press F10 to confirm and exit the utility. The system will restart and boot to the ESXi installer media. d. Install ESXi: i. Press Enter to begin the install when prompted. ii. Press F11 to accept the license agreement. iii. Select the USB drive as the installation target, and press Enter to confirm the selection. iv.
4. Assign the network label as vMotion, check the box for Use this port group for vMotion, and click Next. 5. Enter the appropriate IP address and subnet mask. For our testing, we used 192.168.0.153 for the IP address with a subnet mask of 255.255.255.0. Click Next. 6. Click Finish to create the network. h. Configure the storage network: i. Connect an unused network adapter port to an available port on the switch. Use only switch ports numbered 19 through 24. ii. Click the Networking menu. 1.
d. Leave the defaults for Enable system-management services checked, and click Next. e. Provide system information, or accept the defaults, and click Next. f. Accept the defaults for event notification, and click Next. g. Provide IP addresses for the iSCSI ports. For our testing, we used 192.168.1.250, 192.168.1.251, 192.168.1.252, and 192.168.1.253 with a subnet mask of 255.255.255.0 and 0.0.0.0 for the gateway. h.
6. In the right pane, select Provisioning → Manage Host Mappings. a. Select the entry for vd01. i. Check the box for Map. Click Apply. b. Click OK to confirm the mapping. 14. Complete the ESXi host configuration: a. Switch to the open vSphere client session. b. Configure the iSCSI storage adapter: i. In the Storage Adapter details setting, click Properties. 1. Click the Dynamic Discovery tab. a. Click Add. b. Enter the address of an iSCSI adapter on the storage array. For our testing, we used 192.168.1.250.
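The software iSCSI setup in steps 7 and 14 can likewise be scripted. This pyVmomi sketch is illustrative only; it assumes host is a vim.HostSystem already retrieved from the inventory, and the default target address matches the 192.168.1.250 portal we used. It enables the software iSCSI adapter, reports the initiator name needed for the host mapping, adds the dynamic discovery target, and rescans.

from pyVmomi import vim

def configure_software_iscsi(host, target_ip="192.168.1.250"):
    """Enable the software iSCSI adapter and add a send (dynamic discovery) target."""
    storage = host.configManager.storageSystem

    # Equivalent of clicking Add Software iSCSI Adapter.
    storage.UpdateSoftwareInternetScsiEnabled(True)

    # Locate the software iSCSI HBA (vmhba3x) that the enable call created.
    hba = next(a for a in storage.storageDeviceInfo.hostBusAdapter
               if isinstance(a, vim.host.InternetScsiHba))
    print("iSCSI initiator name:", hba.iScsiName)  # the name noted for host mappings

    # Equivalent of the Dynamic Discovery tab: add the storage array's portal.
    target = vim.host.InternetScsiHba.SendTarget(address=target_ip, port=3260)
    storage.AddInternetScsiSendTargets(iScsiHbaDevice=hba.device,
                                       targets=[target])

    # Rescan so the newly mapped virtual disk (vd01) appears.
    storage.RescanAllHba()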
vi. Select the radio button for Configure with default settings, and click Next. vii. Click Start. Setup will complete and a new database will be configured automatically. viii. Click the Admin Tab. 1. Enter vmware in the current administrator password section. 2. Enter a new password into both password fields. 3. Click Change Password. c. Add all hosts to the vCenter: i. Open a new vSphere client session, and connect to the IP address assigned to the vCenter Server Appliance during installation. ii.
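Adding the hosts in step c can also be done through the API. This pyVmomi sketch is an assumption-laden illustration: the vCenter Server Appliance address, the host management addresses, and the passwords are all placeholders, and it simply uses the first datacenter in the inventory.

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only: skip certificate validation
si = SmartConnect(host="vcsa.example.local", user="root",
                  pwd="new-password", sslContext=ctx)  # placeholders
content = si.RetrieveContent()
dc = content.rootFolder.childEntity[0]  # the datacenter created for this environment

# Add each legacy ESXi host as a standalone host in the datacenter.
for esx_ip in ("192.168.10.11", "192.168.10.12"):  # placeholder management IPs
    spec = vim.host.ConnectSpec(hostName=esx_ip, userName="root",
                                password="esxi-password", force=True)
    # If the call raises an SSL verification fault, re-issue it with the
    # thumbprint reported in the fault assigned to spec.sslThumbprint.
    dc.hostFolder.AddStandaloneHost_Task(spec=spec, addConnected=True)

Disconnect(si)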
ABOUT PRINCIPLED TECHNOLOGIES

We provide industry-leading technology assessment and fact-based marketing services. We bring to every assignment extensive experience with and expertise in all aspects of technology testing and analysis, from researching new technologies, to developing new methodologies, to testing with existing and new tools.

Principled Technologies, Inc.
1007 Slater Road, Suite 300
Durham, NC 27703
www.principledtechnologies.com