Proven Infrastructure
EMC® VSPEX™ with Brocade Networking Solutions for END-USER COMPUTING
VMware Horizon View 5.3 and VMware vSphere for up to 2,000 Virtual Desktops
Enabled by Brocade VCS® Fabrics, EMC VNX, and EMC Next-Generation Backup
EMC VSPEX

Abstract
This document describes the EMC VSPEX with Brocade Networking Solutions for End-User Computing solution, validated for VMware vSphere with EMC VNX for up to 2,000 virtual desktops.
Copyright © 2014 EMC Corporation. All rights reserved. Published in the USA. Published February 2014. EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice. The information in this publication is provided "as is."
Contents Chapter 1 Executive Summary .......................................................... 15 Introduction ........................................................................................................... 16 Audience ............................................................................................................... 16 Document purpose .............................................................................................. 16 Business needs.................................
EMC Virtual Storage Integrator for VMware ................................................ 36 VNX VMware vStorage API for Array Integration Support ......................... 36 Compute layer ...................................................................................................... 36 Network .................................................................................................................. 38 File Storage Network with Brocade VDX Ethernet Fabric switches ..........
Server configuration guidelines .......................................................................... 73 Overview ........................................................................................................... 73 vSphere memory virtualization for VSPEX ..................................................... 74 Memory configuration guidelines ................................................................. 75 Brocade Network configuration guidelines..................................
CPU resources ................................................................................................ 100 Memory resources ......................................................................................... 100 Network resources ......................................................................................... 101 Storage resources .......................................................................................... 102 Backup resources ..................................
Step 7: Create the vLAG for VNX ports ...................................................... 134 Step 8: Connecting the VCS Fabric to existing Infrastructure through Uplinks ....................................................................................................... 136 Step 9: Configure MTU and Jumbo Frames (for NFS) ................................. 138 Step 10: Enable Flow Control Support ........................................................ 138 Step 11: Auto QoS for NAS ..........
Install the EMC VSI plug-in ............................................................................ 174 Set Up VMware View Connection Server ....................................................... 174 Overview ......................................................................................................... 174 Install the VMware Horizon View Connection Server ............................... 176 Configure the View Event Log Database connection.............................
Deploy and test a single virtual desktop ......................................................... 187 Verify the redundancy of the solution components ..................................... 187 Provision remaining virtual desktops ................................................................ 188 Appendix A Bills of Materials ................................................................ 191 Bill of Materials for 500 virtual desktops .........................................................
Figures Figure 1. Next-Generation VNX with multicore optimization .................... 23 Figure 2. Active/active processors increase performance, resiliency, and efficiency .................................................................................. 25 Figure 3. New Unisphere Management Suite .............................................. 27 Figure 4. Solution components ...................................................................... 30 Figure 5. Compute layer flexibility ...........
Figure 25. Core storage layout for 1,000 virtual desktops using VNX5400 . 85 Figure 26. Optional storage layout for 1,000 virtual desktops using VNX5400 ............................................................................................ 86 Figure 27. Core storage layout for 2,000 virtual desktops using VNX5600 . 88 Figure 28. Optional storage layout for 2,000 virtual desktops using VNX5600 ..........................................................................................
Tables Table 1. VNX thresholds and settings .......................................................... 49 Table 2. Minimum hardware resources to support SecurID ...................... 52 Table 3. OVA virtual applications ................................................................. 54 Table 4. Minimum hardware resources to support VMware Horizon data ....... 57 Table 5. Recommended EMC VNX storage needed for the Horizon Data NFS share.......................................................................
Table 28. Brocade 6510 FC switch Configuration Steps ........................... 141 Table 29. Brocade switch default settings .................................................. 142 Table 30. Tasks for storage configuration .................................................... 152 Table 31. Tasks for server installation ............................................................ 162 Table 32. Tasks for SQL Server database setup ..........................................
Chapter 1 Executive Summary This chapter presents the following topics: Introduction 16 Audience 16 Document purpose ................................................................................... 16 Business needs ............................................................................................ 17
Introduction VSPEX™ with Brocade networking solutions are validated and modular architectures built with proven best-of-breed technologies to create complete virtualization solutions, enabling you to make an informed decision in the hypervisor, compute, and networking layers. VSPEX eliminates server virtualization planning and configuration burdens.
redundant Brocade network switches and the VNX storage family, and sufficiently powerful to handle the processing and data needs of a large virtual machine environment. The 500, 1,000, and 2,000 virtual desktop environments discussed are based on a defined desktop workload. While not every virtual desktop has the same requirements, this document contains methods and guidance to adjust your system to be cost-effective when deployed.
Chapter 2 Solution Overview This chapter presents the following topics: Solution overview ....................................................................................... 19 Desktop broker ........................................................................................... 19 Virtualization 19 Compute 20 Network 20 Storage 21
Solution overview The EMC VSPEX End-User Computing with Brocade networking solutions for VMware Horizon View on VMware vSphere provides a complete system architecture capable of supporting up to 2,000 virtual desktops with a redundant server/network topology and highly available storage. The core components that make up this particular solution are desktop broker, virtualization, compute, networking, and storage.
Compute VSPEX offers the flexibility to design and implement your choice of server components.
Brocade 6510 Fibre Channel Fabric is the purpose-built, data center-proven network infrastructure for storage, delivering unmatched reliability, simplicity, and 4/8/16 Gbps performance.
The desktop solutions described in this document are based on the EMC VNX5400™ and EMC VNX5600™ storage arrays respectively. The VNX5400 can support a maximum of 250 drives and the VNX5600 can host up to 500 drives.
optimization and high performance scalability. The EMC Fully Automated Storage Tiering Suite (FAST Cache and FAST VP) tiers both block and file data across heterogeneous drives and boosts the most active data to the cache, ensuring that customers never have to make concessions in cost or performance. FAST VP dynamically absorbs unpredicted spikes in system workloads.
Multicore RAID Another important improvement to the MCx design is how it handles I/O to the permanent back-end storage—hard disk drives (HDDs) and SSDs. The modularization of the back-end data management processing, which enables MCx to seamlessly scale across all processors, greatly increases the performance of the VNX system.
Figure 2. Active/active processors increase performance, resiliency, and efficiency
Virtualization management VMware Virtual Storage Integrator Virtual Storage Integrator (VSI) is a VMware vCenter plug-in that is available at no charge for VMware users with EMC storage. VSPEX customers can use VSI to simplify management of virtualized storage. VMware administrators can manage their VNX storage using the familiar vCenter interface. With VSI, IT administrators can do more work in less time.
Figure 3. New Unisphere Management Suite
Chapter 3 Solution Technology Overview This chapter presents the following topics: The technology solution ............................................................................ 30 Key components ........................................................................................ 31 Desktop virtualization broker .................................................................... 32 Virtualization layer ......................................................................................
The technology solution This solution uses EMC VNX5400™ (for up to 1,000 virtual desktops) or VNX5600 (for up to 2,000 virtual desktops), Brocade Ethernet Fabric or Connectrix-B Fibre Channel switches, and VMware vSphere to provide the storage and compute resources for a VMware Horizon View environment of Windows 7 virtual desktops provisioned by VMware Horizon View™ Composer.
To provide predictable performance for an end-user computing solution, the storage system must be able to handle the peak I/O load from the clients while keeping response time to a minimum. Designing for this workload involves the deployment of many disks to handle brief periods of extreme I/O pressure, which is expensive to implement. This solution uses EMC VNX FAST Cache to reduce the number of disks required.
Storage: A critical resource for the implementation of the end-user computing environment. Because of the way desktops are used, the storage layer must be able to absorb large bursts of activity as they occur without unduly affecting the user experience. This solution uses EMC VNX FAST Cache to efficiently handle this workload.
Thin provisioning support—VMware Horizon View 5.3 enables efficient allocation of storage resources when virtual desktops are provisioned. This results in better utilization of the storage infrastructure and reduced capital expenditure (CAPEX)/operating expenditure (OPEX). Desktop virtual machine space reclamation—VMware Horizon View 5.3 can reclaim disk space that has been freed up within Windows 7 desktops.
VMware View Persona Management VMware View Persona Management preserves user profiles and dynamically synchronizes them with a remote profile repository. View Persona Management does not require the configuration of Windows roaming profiles, eliminating the need to use Active Directory to manage View user profiles.
Virtualization layer VMware vSphere VMware vSphere is the market-leading virtualization platform that is used across thousands of IT environments around the world. VMware vSphere transforms a computer's physical resources by virtualizing the CPU, memory, storage, and network. This transformation creates fully functional virtual desktops that run isolated and encapsulated operating systems and applications just like physical computers.
EMC Virtual Storage Integrator for VMware EMC Virtual Storage Integrator (VSI) for VMware vSphere is a plug-in to the vSphere client that provides a single management interface that is used for managing EMC storage within the vSphere environment. Features can be added and removed from VSI independently, which provides flexibility for customizing VSI user environments. Use the VSI Feature Manager to manage the features.
One customer might want to implement this by using white-box servers containing 16 processor cores and 64 GB of RAM; a second customer chooses a higher-end server with 20 processor cores and 144 GB of RAM. Figure 5 depicts this example.
Figure 5. Compute layer flexibility
You should observe the following best practices in the compute layer: Use a number of identical or at least compatible servers.
Implement the high-availability features available in the virtualization layer to ensure that the compute layer has sufficient resources to accommodate at least single server failures. This allows you to implement minimal-downtime upgrades and tolerate single unit failures. Within the boundaries of these recommendations and best practices, the compute layer for EMC VSPEX is flexible enough to meet your specific needs.
– active links for all traffic from the virtualized compute servers to the EMC VNX storage arrays. The Brocade VDX provides a network with high availability and redundancy by using link aggregation for the EMC VNX storage array. Figure 6 depicts an example of the Brocade network topology for file-based storage. Figure 6.
Ethernet port, the link fails over to another port. All network traffic is distributed across the active links.
Brocade VDX Ethernet Fabric virtualization automation support
Brocade VDX with VCS Fabric technology offers unique features to support virtualized server and storage environments.
FC Block Storage Network with Brocade 6510 Fibre Channel switch
Figure 7. Example of highly available Brocade network design for FC block storage network
Brocade 6510 Fibre Channel switches provide high availability for the VSPEX SAN infrastructure, with active-active links for all traffic from the virtualized compute servers to the EMC VNX storage arrays.
Storage Overview The storage layer is a key component of any Cloud Infrastructure solution that serves data generated by applications and operating systems in a datacenter storage processing system. In this VSPEX solution, EMC VNX Series storage arrays are used to provide virtualization at the storage layer. This increases storage efficiency and management flexibility, and reduces total cost of ownership.
Application Protection Suite—Automates application copies and proves compliance.
Security and Compliance Suite—Keeps data safe from changes, deletions, and malicious activity.
Software packs available:
Total Efficiency Pack—Includes all five software suites.
Total Protection Pack—Includes local, remote, and application protection suites.
EMC VNX Snapshots VNX Snapshots is a software feature that creates point-in-time data copies.
A checkpoint reflects the state of a PFS at the time the checkpoint is created. SnapSure supports the following checkpoint types:
Read-only checkpoints—Read-only file systems created from a PFS
Writeable checkpoints—Read/write file systems created from a read-only checkpoint
SnapSure can maintain a maximum of 96 read-only checkpoints and 16 writeable checkpoints per PFS, while allowing PFS applications continued access to real-time data.
Figure 8. Storage pool rebalance progress
LUN expansion
Use pool LUN expansion to increase the capacity of existing LUNs. It allows for provisioning larger capacity as business needs grow. The VNX family enables you to expand a pool LUN without disrupting user access. You can expand a pool LUN with a few simple clicks and the expanded capacity is immediately available. However, you cannot expand a pool LUN if it is part of a data-protection or LUN-migration operation.
Alerting the user through the Capacity Threshold setting
You must configure proactive alerts when using file systems or storage pools based on thin pools. Monitor these resources so that storage is available for provisioning when needed and capacity shortages are avoided. Figure 9 explains why provisioning with thin pools requires monitoring. Figure 9.
Figure 10. Examining storage pool space utilization
When storage pool capacity becomes exhausted, any requests for additional space allocation on thin-provisioned LUNs fail. Applications attempting to write data to these LUNs usually fail as well, and an outage is the likely result. To avoid this situation, you should: 1. Monitor pool utilization. 2. Set an alert that notifies you when thresholds are reached. 3.
Figure 11. Defining storage pool utilization thresholds
Figure 12 shows the Unisphere Event Monitor Wizard, where you can view alerts. From this screen, you can also select the option to receive alerts through email, a paging service, or an SNMP trap.
Figure 12. Defining automated notifications for block
Table 1 lists thresholds and their settings for VNX Operating Environment (OE) for Block Release 33. Table 1.
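The threshold behavior summarized in Table 1 can be modeled as a simple classification. Note that the 70 percent user-configurable threshold and 85 percent built-in threshold used as defaults below are illustrative assumptions for this sketch; substitute the actual values from Table 1 for your VNX OE release.

```python
def pool_alert_level(used_gb: float, capacity_gb: float,
                     user_threshold_pct: float = 70.0,
                     builtin_threshold_pct: float = 85.0) -> str:
    """Classify a storage pool's utilization against its alert thresholds.

    The default percentages are illustrative assumptions, not values
    taken from this document.
    """
    pct = 100.0 * used_gb / capacity_gb
    if pct >= builtin_threshold_pct:
        return "critical"   # built-in threshold reached
    if pct >= user_threshold_pct:
        return "warning"    # user-configurable threshold reached
    return "ok"

print(pool_alert_level(600, 1000))  # -> ok
print(pool_alert_level(750, 1000))  # -> warning
print(pool_alert_level(900, 1000))  # -> critical
```

A monitoring script built on a check like this would feed the email, paging, or SNMP notification channels configurable in the Unisphere Event Monitor Wizard.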
better performance and lower latency. In these environments, IT departments need to balance the benefits of local support with the need to maintain central control. Local systems and storage should be easy for local personnel to administer, but also support remote management and flexible aggregation tools that minimize the demands on those local resources. With VSPEX, you can accelerate the deployment of applications at remote offices and branch offices.
SecurID Authentication in the VSPEX End-User Computing for VMware Horizon View Environment SecurID support is built into VMware Horizon View, providing a simple activation process. Users accessing a SecurID-protected View environment are initially authenticated with a SecurID passphrase, followed by normal authentication against Active Directory.
RSA SecurID Authentication Manager (version 8.0)—Used to configure and manage the SecurID environment and assign tokens to users. Authentication Manager 8.0 is available as a virtual appliance running on VMware ESXi. SecurID tokens for all users—SecurID combines something the user knows (a PIN) with a constantly changing code from a "token" in the user's possession.
Other components VMware vShield Endpoint VMware vShield Endpoint offloads virtual desktop antivirus and antimalware scanning operations to a dedicated secure virtual appliance delivered by VMware partners.
Dynamic thresholds and smart alerts that notify administrators early in the process and provide more-specific information about impending performance issues
Automated root-cause analysis, session lookup, and event correlation for faster troubleshooting of end-user problems
Integrated approach to performance, capacity, and configuration management that supports holistic management of VDI operations
Design and optimizations specifically for VMware Horizon View
Availabili
Application: Gateway (gateway-va)
Description: The Gateway appliance enables a single, user-facing domain access to Horizon Workspace. As the central aggregation point for all user connections, the Gateway routes requests to the appropriate destination and proxies requests on behalf of user connections.
Figure 15. Horizon workspace architecture layout
Using Horizon data with VSPEX architectures The VSPEX End-User Computing for VMware View environment with added infrastructure supports Horizon Data as depicted in Figure 16. You specify server capacity in generic terms for minimum CPU and memory requirements. The customer is free to select the server and networking hardware that meets or exceeds the stated minimum requirements. The recommended storage delivers a highly available architecture for your Horizon Data deployment.
Server requirements Table 4 details the minimum supported hardware requirements of each virtual appliance in the VMware Horizon Workspace vApp. Table 4.
Table 5. Recommended EMC VNX storage needed for the Horizon Data NFS share (provided that each user has 10 GB of private storage space)
NFS shares for 500 users: Two Data Movers (active/standby, CIFS variant only); eight 2 TB, 7,200 rpm 3.5-inch NL-SAS disks
NFS shares for 1,000 users: Two Data Movers (active/standby, CIFS variant only); sixteen 2 TB, 7,200 rpm 3.5-inch NL-SAS disks
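As a quick arithmetic check on the sizing in Table 5, the private Horizon Data capacity implied by the 10 GB-per-user assumption can be computed directly. This helper is an illustrative sketch, not part of the VSPEX tooling, and it computes usable capacity required before RAID protection overhead.

```python
def horizon_data_capacity_tb(users: int, gb_per_user: int = 10) -> float:
    """Usable private Horizon Data capacity required, in TB (1 TB = 1024 GB)."""
    return users * gb_per_user / 1024

# 500 users at 10 GB each need roughly 4.9 TB of usable space, which the
# eight 2 TB NL-SAS disks in Table 5 must provide after RAID overhead.
print(round(horizon_data_capacity_tb(500), 1))   # -> 4.9
print(round(horizon_data_capacity_tb(1000), 1))  # -> 9.8
```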
Chapter 4 Solution Architectural Overview This chapter presents the following topics: Solution overview ....................................................................................... 60 Solution architecture ................................................................................. 60 Server configuration guidelines ............................................................... 73 Brocade Network configuration guidelines ...........................................
Solution overview VSPEX Proven Infrastructure solutions with Brocade networking are built with proven best-of-breed technologies to create a complete virtualization solution that enables you to make an informed decision when choosing and sizing the hypervisor and compute layers. VSPEX eliminates many server virtualization planning and configuration burdens by leveraging extensive interoperability, functional, and performance testing by EMC.
Two networks are in use: a Brocade storage network that carries virtual desktop and virtual server operating system (OS) data, and a 10 Gb Ethernet network that carries all other traffic. The Brocade storage network can use 8 or 16 Gb FC, 10 Gb Ethernet with FCoE, or 10 Gb Ethernet with the iSCSI protocol. Figure 17 shows the logical architecture of the block storage implementation. Figure 17.
Figure 18 shows the file storage logical architecture. The 10 GbE IP network carries all traffic. Figure 18. Logical architecture for NFS storage Note: This solution also supports 1 Gb Ethernet if the bandwidth requirements are met. Key components VMware Horizon View Manager Server 5.3—Provides virtual desktop delivery, authenticates users, manages the assembly of users' virtual desktop environments, and brokers connections between users and their virtual desktops.
VMware vSphere—Provides a common virtualization layer to host a server environment that contains the virtual machines. The specifics of the validated environment are listed in Table 6. vSphere provides a highly available infrastructure through such features as: vMotion—Provides live migration of virtual machines within a virtual infrastructure cluster with no virtual machine downtime or service disruption.
DNS Server—Required for the various solution components to perform name resolution. The Microsoft DNS Service running on a Windows Server 2012 server is used for this purpose. Active Directory Server—Services that are required for the various solution components to function properly. The Microsoft AD Directory Service running on a Windows Server 2012 server is used for this purpose.
Storage Network for File: With file-based storage, a 10 Gb Ethernet private network carries the storage traffic. A Brocade 10 Gb Ethernet Fabric network enables the transport of file-based NFS and CIFS storage traffic. Brocade VDX 6740 Ethernet Fabric Switch—Provides efficient, easy-to-configure resiliency that scales from 24 to 64 ports with Ports on Demand (PoD) at 1 GbE or 10 GbE for file-attached VNX5400, VNX5600, and VNX5800 arrays.
Hardware resources Table 6 lists the hardware used in this solution. Table 6. Solution hardware Hardware Configuration Notes Servers for virtual desktops CPU: Add CPU and RAM as needed for the VMware vShield Endpoint and Avamar components. Refer to the vendor documentation for specific details concerning vShield Endpoint and Avamar resource requirements.
Brocade FC-Block storage network infrastructure Two Brocade Connectrix-B Fibre Channel Switches (minimum switching capability): Redundant LAN/SAN configuration Four 4/8 Gb FC ports, or four 10 Gb CEE ports, or four 10 Gb Ethernet ports for VNX backend Two 8/16 Gb FC ports per storage processor (block only) Note: To implement FCoE or iSCSI Block storage network, use Brocade VDX 6740 Ethernet Fabric switches (10GbE only).
Solution Architectural Overview Hardware Configuration Notes For 500 virtual desktops Optional for infrastructure storage 5 x 300 GB, 15k rpm 3.5-inch SAS disks For 1,000 virtual desktops 5 x 300 GB, 15 k rpm 3.5-inch SAS disks For 2,000 virtual desktops 10 x 300 GB, 15k rpm 3.5-inch SAS disks For 500 virtual desktops 5 x 300 GB, 15k rpm 3.5-inch SAS disks For 1,000 virtual desktops Optional for vCenter Operations Manager for View 5 x 300 GB, 15 k rpm 3.
Software resources Table 7 lists the software used in this solution. Table 7. Solution software Software Configuration VNX5400/5600 (shared storage, file systems) VNX OE for file Release 8.0 VNX OE for block Release 33 (05.33) EMC VSI for VMware vSphere: Unified Storage Management VSI 5.6 EMC VSI for VMware vSphere: Storage Viewer VSI 5.6 VMware Horizon View Desktop Virtualization VMware Horizon View Manager Server Version 5.
Virtual Desktops
Note: Aside from the base operating system, this software was used for solution validation and is not required.
Base operating system: Microsoft Windows 7 Enterprise (32-bit) SP1
Microsoft Office: Office Enterprise 2007 Version 12
Internet Explorer: 8.0.7601.17514
Adobe Reader: X (10.1.3)
VMware vShield Endpoint (component of VMware Tools): 9.0.5 build-1065307
Adobe Flash Player: 11
Bullzip PDF Printer: 7.2.0.1304
FreeMind: 0.8.
Table 8 represents a sample server configuration required to support specific desktop solutions. Table 8. Sample server configuration Storage Configuration Number of desktops 500 1000 2000 Number of servers 8 16 32 Processor type Intel Nehalem 2.
Two switch deployment to support redundancy Redundant power supplies Scalable port density for a minimum of forty 1 GbE or eight 10 GbE ports (for 500 virtual desktops), two 1 GbE and sixteen 10 GbE ports (for 1,000 virtual desktops) or two 1 GbE and thirty-two 10 GbE ports (for 2,000 virtual desktops), distributed for high availability.
Server configuration guidelines Overview When designing and ordering the compute/server layer of the VSPEX solution described below, several factors might alter the final purchase. From a virtualization perspective, if a system's workload is well understood, features like Memory Ballooning and Transparent Page Sharing can reduce the aggregate memory requirement.
vSphere memory virtualization for VSPEX VMware vSphere has a number of advanced features that help to maximize performance and overall use of resources. This section describes some of the important features that help to manage memory and considerations for using them in the environment. In general, you can consider virtual machines on a single hypervisor consuming memory as a pool of resources: Figure 19.
the server is being actively used, vSphere might resort to swapping portions of a virtual machine's memory. Non-Uniform Memory Access (NUMA) vSphere uses a NUMA load-balancer to assign a home node to a virtual machine. Memory access is local and provides the best performance possible because memory for the virtual machine is allocated from the home node. Applications that do not directly support NUMA benefit from this feature.
Brocade Network configuration guidelines Overview This section provides guidelines for setting up a redundant, highly available storage network configuration. The guidelines outline compute access to the existing infrastructure, the management network, and the Brocade storage network from compute to EMC unified storage. Administrators use the Management Network as a dedicated way to access the management connections on the storage array, network switches, and hosts.
Notes: The solution may use 1 Gb network infrastructure as long as the underlying requirements around bandwidth and redundancy are fulfilled. This table assumes that the VSPEX implementation is using rack-mounted servers; for implementations based on blade servers, ensure that similar bandwidth and high availability capabilities are available.
Enable jumbo frames (for iSCSI and NFS)
Brocade VDX Series switches support the transport of jumbo frames.
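As a sketch only, enabling jumbo frames on a Brocade VDX interface in Network OS generally follows the pattern below. The interface name is a placeholder and the exact syntax and maximum MTU vary by Network OS release, so verify against the Brocade Network OS command reference before use.

```
switch# configure terminal
switch(config)# interface TenGigabitEthernet 1/0/1
switch(conf-if-te-1/0/1)# mtu 9216
```

The larger MTU must also be configured end to end, including the ESXi vSwitch and VMkernel ports and the VNX network interfaces; otherwise large frames are fragmented or dropped along the path.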
All flows with the same hash traverse the same link, regardless of the total number of links in a LAG. This might result in some links within a LAG, such as those carrying flows to a storage target, being overutilized and packets being dropped, while other links in the LAG remain underutilized. Instead of LAG-based switch interconnects, Brocade VCS Ethernet fabrics automatically form ISL trunks when multiple connections are added between two Brocade VDX® switches.
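The flow-hashing behavior described above, where every flow with the same hash rides the same member link, can be demonstrated with a toy simulation. The CRC32-based hash below is a stand-in for illustration, not Brocade's actual frame-hashing algorithm, and the host and array names are hypothetical.

```python
import zlib

def lag_member(src: str, dst: str, links: int) -> int:
    """Pick a LAG member link for a flow. Stand-in hash, illustrative only."""
    return zlib.crc32(f"{src}->{dst}".encode()) % links

# Four storage flows across a 4-link LAG: because link choice depends only
# on the flow hash, collisions are common, so some links carry multiple
# flows while others may stay idle.
flows = [("esx1", "vnx-a"), ("esx2", "vnx-a"),
         ("esx3", "vnx-b"), ("esx4", "vnx-b")]
usage = [0] * 4
for src, dst in flows:
    usage[lag_member(src, dst, 4)] += 1
print(usage)  # flow count per link; the distribution is rarely uniform
```

This is exactly the imbalance that frame-level ISL trunking avoids, since trunked members share load per frame rather than per flow.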
Figure 20. Required Brocade VDX network
Note: The diagram demonstrates the network connectivity requirements for a VNX array using 10 GbE network connections. A similar topology should be created when using 1 GbE network connections.
The client access network is for users of the system, or clients, to communicate with the infrastructure. The Storage Network is used for communication between the compute layer and the storage layer.
Zoning (FC block storage network only)
Zoning is a mechanism used to specify the devices in the fabric that should be allowed to communicate with each other for storage network traffic between host and storage (block-based only). Zoning is based on either port World Wide Name (pWWN) or Domain, Port (D, P). (See the Secure SAN Zoning Best Practices white paper in Appendix C for details.)
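A pWWN-based zone on a Brocade 6510 running Fabric OS can be sketched with the standard zoning commands. The zone name, configuration name, and WWNs below are placeholders for illustration only; substitute the actual initiator and VNX target pWWNs from your fabric.

```
zonecreate "z_esx01_vnx_spa", "10:00:00:05:1e:xx:xx:xx; 50:06:01:6x:xx:xx:xx:xx"
cfgcreate "cfg_vspex", "z_esx01_vnx_spa"
cfgsave
cfgenable "cfg_vspex"
```

Pairing one host initiator with only the storage target ports it needs (single-initiator zoning) limits fabric-wide disruption when devices are added or removed, consistent with the Secure SAN Zoning Best Practices guidance referenced above.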
Solution Architectural Overview Storage configuration guidelines Overview vSphere allows more than one method of using storage when hosting virtual machines. The solutions described in Table 11 were tested using NFS or FC, and the storage layout described adheres to all current best practices. An educated customer or architect can make modifications based on their understanding of the system’s usage and load if required. Table 11.
Solution Architectural Overview Hardware Configuration Notes
For 500 virtual desktops (optional for infrastructure storage): 5 x 300 GB 15k rpm 3.5-inch SAS disks
For 1,000 virtual desktops: 5 x 300 GB 15k rpm 3.5-inch SAS disks
For 2,000 virtual desktops: 10 x 300 GB 15k rpm 3.5-inch SAS disks
For 500 virtual desktops: 5 x 300 GB 15k rpm 3.5-inch SAS disks
For 1,000 virtual desktops: 5 x 300 GB 15k rpm 3.5-inch SAS disks
Solution Architectural Overview Figure 22. VMware virtual disk types VMFS VMFS is a cluster file system that provides storage virtualization optimized for virtual machines. It can be deployed over any SCSI-based local or network storage. Raw device mapping VMware also provides a mechanism called raw device mapping (RDM), which uses a Fibre Channel or iSCSI protocol and allows a virtual machine to have direct access to a volume on the physical storage.
Solution Architectural Overview Building block for 500 virtual desktops The first building block can contain up to 500 virtual desktops with ten SAS drives in a FAST Cache enabled storage pool, as shown in Figure 23. Figure 23. Storage layout building block for 500 virtual desktops This is the smallest building block qualified for the VSPEX architecture. Building block for 1,000 virtual desktops The second building block can contain up to 1,000 virtual desktops.
Solution Architectural Overview Table 12. Number of disks required for different number of virtual desktops VSPEX end user computing validated maximums Virtual desktops Flash drives (FAST Cache) SAS drives 500 2 10 1,000 2 15 2,000 4 30 VSPEX end user computing configurations are validated on the VNX5400 and VNX5600 platforms. Each platform has different capabilities in terms of processors, memory, and disks.
Solution Architectural Overview For block storage, 8 LUNs of 369 GB each and 2 LUNs of 50 GB each are provisioned from the pool to present to the vSphere servers as 10 VMFS datastores. Note: Two 50 GB datastores are used to save replica disks. Two flash drives (shown as 0_0_4 to 0_0_5) are used for EMC VNX FAST Cache. There are no user-configurable LUNs on these drives. Disks shown as 0_0_8 to 0_0_9 and 1_0_0 to 1_0_4 were not used for testing this solution.
Solution Architectural Overview Five SAS disks (shown as 1_1_0 to 1_1_4) in the RAID 5 Storage Pool 2 are used to store the infrastructure virtual machines. A 1.0 TB LUN or NFS file system is provisioned from the pool to present to the vSphere servers as a VMFS or NFS datastore. Five SAS disks (shown as 1_1_5 to 1_1_9) in the RAID 5 Storage Pool 3 are used to store the vCenter Operations Manager for View virtual machines and databases. A 1.
Solution Architectural Overview Figure 27. Core storage layout for 2,000 virtual desktops using VNX5600 Core storage layout overview The following core configuration is used in the solution: Four SAS disks (shown as 0_0_0 to 0_0_3) are used for the VNX OE. Disks shown as 0_0_6 and 0_0_7 are hot spares. These disks are marked as hot spare in the storage layout diagram. Fifteen SAS disks (shown as 0_0_10 to 0_0_14 and 1_0_5 to 1_0_14) in the RAID 5 Storage Pool 0 are used to store virtual desktops.
Solution Architectural Overview Optional user data storage layout In solution validation testing, storage space for user data was allocated on the VNX array as shown in Figure 28. This storage is in addition to the core storage shown above. If storage for user data exists elsewhere in the production environment, this storage is not required. Figure 28.
Solution Architectural Overview system is provisioned from the pool to present to the vSphere servers as a VMFS or NFS datastore. Thirty-two NL-SAS disks (shown as 0_0_8, 1_0_3, 1_1_0 to 1_1_14, and 0_2_0 to 0_2_14) in the RAID 6 Storage Pool 1 are used to store user data and profiles. FAST Cache is enabled for the entire pool. Ten LUNs of 3 TB each are provisioned from the pool to provide the storage required to create four CIFS file systems.
Solution Architectural Overview Compute layer While the choice of servers to implement in the compute layer is flexible, it is best to use enterprise-class servers designed for data centers. This type of server has redundant power supplies, as shown in Figure 30. You should connect them to separate Power Distribution Units (PDUs) in accordance with your server vendor’s best practices. Figure 30.
Solution Architectural Overview Figure 31. Brocade Network layer High-Availability (VNX) – block storage network variant Figure 32. Brocade Network layer High-Availability (VNX) - file storage network variant By ensuring that there are no single points of failure in the network layer you can ensure that the compute layer is able to access storage and communicate with users even if a component fails.
Solution Architectural Overview Figure 33. VNX series high availability EMC storage arrays are designed to be highly available by default. Use the installation guides to ensure that there are no single unit failures that result in data loss or unavailability.
Solution Architectural Overview Validation test profile Profile characteristics Table 13 shows the solution stacks that we validated with the environment profile. Table 13.
Solution Architectural Overview Antivirus and antimalware platform profile Platform characteristics Table 14 shows how the solution was sized based on the following vShield Endpoint platform requirements. Table 14. Platform characteristics Platform Component Technical Information VMware vShield Manager appliance Manages the vShield Endpoint service installed on each vSphere host 1 vCPU, 3 GB RAM, and 8 GB hard disk space VMware vShield Endpoint service Installed on each desktop vSphere host.
Solution Architectural Overview vCenter Operations Manager for View platform profile desktops Platform characteristics Table 15 shows how this solution stack was sized based on the following vCenter Operations Manager for View platform requirements. Table 15. Platform characteristics Platform Component Technical Information VMware vCenter Operations Manager vApp The vApp consists of a user interface (UI) virtual appliance and an Analytics virtual appliance.
Solution Architectural Overview Platform Component Technical Information VMware vCenter Operations Manager vApp The vApp consists of a user interface (UI) virtual appliance and an Analytics virtual appliance. For 500 virtual desktops UI appliance requirements: 2 vCPU, 5 GB RAM, and 50 GB hard disk space Analytics appliance requirements: 2 vCPU, 7 GB RAM, and 300 GB hard disk space For 1,000 virtual desktops UI appliance requirements: 2 vCPU, 7 GB RAM, and 75 GB hard disk space.
Solution Architectural Overview Backup and recovery configuration guidelines See Design and Implementation Guide: EMC Backup and Recovery Options for VSPEX End User Computing for VMware Horizon View, available on EMC Online Support. Sizing guidelines The following sections define the reference workload used to size and implement the VSPEX architectures discussed in this document, and provide guidance on how to correlate those reference workloads to actual customer workloads.
Solution Architectural Overview Table 16. Virtual desktop characteristics Characteristic Value Virtual desktop operating system Microsoft Windows 7 Enterprise Edition (32-bit) SP1 Virtual processors per virtual desktop 1 RAM per virtual desktop 2 GB Available storage capacity per virtual desktop* 3 GB (vmdk and vswap) Average IOPS per virtual desktop at steady state 10 * This available storage capacity is calculated based on drives used in this solution.
Solution Architectural Overview Implementing the reference architectures Overview Resource types The solution architectures require a set of hardware to be available for the CPU, memory, network, and storage needs of the system. In the solution architectures, these are presented as general requirements that are independent of any particular implementation. This section describes some considerations for implementing the requirements.
Solution Architectural Overview to some degree. The administrator is responsible for proactively monitoring the oversubscription rate so that the bottleneck does not shift from the server to the storage subsystem. If VMware vSphere runs out of memory for the guest operating systems, paging takes place, resulting in extra I/O activity going to the vswap files.
Solution Architectural Overview Regardless of the network traffic requirements, always have at least two physical network connections that are shared for a logical network to ensure a single link failure does not affect the availability of the system. The network should be designed so that the aggregate bandwidth, in the event of a failure, is sufficient to accommodate the full workload. Storage resources The solutions contain layouts for the disks used in the validation of the system.
Solution Architectural Overview Quick assessment Overview An assessment of the customer environment helps to ensure that you implement the correct VSPEX solution. This section provides an easy-to-use worksheet to simplify the sizing calculations, and help assess the customer environment. First, summarize the user types planned for migration into the VSPEX End-User Computing environment.
Solution Architectural Overview Storage performance requirements The storage performance requirements for desktops are usually the least understood aspect of performance. The reference virtual desktop uses a workload generated by an industry-recognized tool to execute a wide variety of office productivity applications that should be representative of the majority of virtual desktop implementations.
Solution Architectural Overview Determining equivalent reference virtual desktops With all of the resources defined, determine an appropriate value for the Equivalent Reference virtual desktops line by using the relationships in Table 18. Round all values up to the closest whole number. Table 18.
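The rounding-and-compare arithmetic described above can be sketched in a few lines of Python. This is an illustrative helper, not part of the validated solution; it assumes the reference virtual desktop definition from Table 16 (1 vCPU, 2 GB RAM, 10 IOPS) and applies the usual VSPEX rule that the most constrained resource determines the equivalent reference desktop count.

```python
import math

# Reference virtual desktop definition (Table 16).
REF_VCPUS = 1
REF_RAM_GB = 2
REF_IOPS = 10

def equivalent_reference_desktops(vcpus, ram_gb, iops):
    """Round each resource up to whole reference desktops, then take the
    largest value, since the most constrained resource drives the sizing."""
    return max(
        math.ceil(vcpus / REF_VCPUS),
        math.ceil(ram_gb / REF_RAM_GB),
        math.ceil(iops / REF_IOPS),
    )

# Example: a user type needing 2 vCPUs, 8 GB RAM, and 25 IOPS at steady
# state is equivalent to 4 reference desktops (memory is the constraint).
print(equivalent_reference_desktops(2, 8, 25))
```

Summing this value across all user types in the worksheet yields the total number of equivalent reference virtual desktops for the environment.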
Solution Architectural Overview Table 20.
Solution Architectural Overview workload separation, purchase additional disk drives for each group that needs workload isolation and add them to a dedicated pool. It is not appropriate to reduce the size of the main storage resource pool in order to support isolation or to reduce the capability of the pool without additional guidance beyond this paper.
Solution Architectural Overview Table 22. Blank customer worksheet
User type | CPU (virtual CPUs) | Memory (GB) | IOPS | Equivalent reference virtual desktops
For each user type, fill in a Resource requirements row and an Equivalent reference virtual desktops row, then sum the equivalent values in the Total row.
Total
VSPEX Configuration Guidelines Chapter 5 VSPEX Configuration Guidelines This chapter presents the following topics: Configuration overview .......................................................................... 110 Pre-deployment tasks .............................................................................. 111 Customer configuration data ................................................................ 114 Prepare, connect, and configure Brocade network switches .........
VSPEX Configuration Guidelines Configuration overview Deployment process The deployment process is divided into the stages shown in Table 23. Upon completion of the deployment, the VSPEX infrastructure will be ready for integration with the existing customer network and server infrastructure. Table 23 lists the main stages in the solution deployment process. The table also includes references to chapters that provide relevant procedures. Table 23.
VSPEX Configuration Guidelines Pre-deployment tasks Overview Pre-deployment tasks, as shown in Table 25, include procedures that do not directly relate to environment installation and configuration, but you will need the results from these tasks at the time of installation. Examples of pre-deployment tasks are collection of host names, IP addresses, VLAN IDs, license keys, installation media, and so on. You should perform these tasks before the customer visit to reduce the amount of time required on site.
VSPEX Configuration Guidelines Deployment prerequisites Complete the VNX Block Configuration Worksheet for FC variant or VNX File and Unified Worksheet for NFS variant, available on EMC Online Support, to provide the most comprehensive array-specific information. Table 25 itemizes the hardware, software, and license requirements to configure the solution. Visit EMC Online Support for more information on these prerequisites. Table 25.
VSPEX Configuration Guidelines Requirement Description Microsoft Windows Server 2008 R2 installation media (suggested OS for VMware vCenter and VMware View Connection Server) Microsoft Windows 7 SP1 installation media Microsoft SQL Server 2008 or later installation media Note: This requirement might be covered in the existing infrastructure. Software–FC variant only EMC PowerPath® Viewer Software–NFS variant only EMC vStorage API for Array Integration plug-in Licenses VMware vCenter 5.
VSPEX Configuration Guidelines Customer configuration data To reduce the onsite time, you should assemble information such as IP addresses and hostnames as part of the planning process. Appendix B provides a table enabling you to maintain a record of relevant information. You can expand or shorten this form as required while adding, modifying, and recording your deployment progress.
VSPEX Configuration Guidelines Prepare Brocade The Brocade network switches deployed with the VSPEX solution provide Network the redundant links for each ESXi host, the storage array, the switch Infrastructure interconnect ports, and the switch uplink ports. This Brocade storage network configuration provides both scalable bandwidth performance and redundancy.
VSPEX Configuration Guidelines Note: Ensure there are adequate switch ports between the file-based storage array and ESXi hosts, and ports to the existing customer infrastructure. Note: Use a minimum of two VLANs for: - Storage networking (NFS) and vMotion. - Virtual machine networking and ESXi management (these are customer-facing networks; separate them if required.)
VSPEX Configuration Guidelines Figure 35. Sample network architecture – Block storage
VSPEX Configuration Guidelines Configure VLANs Ensure adequate switch ports for the storage array and vSphere hosts that are configured with a minimum of three VLANs for: Management traffic. NFS networking (private network). VMware vMotion (private network). Complete network cabling Ensure the following: Connect Brocade switch ports to all servers, storage arrays, inter-switch links (ISLs), and uplinks.
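As an illustrative sketch of the VLAN step above, the three VLANs can be created on the VDX fabric and allowed on a host-facing vLAG. The VLAN IDs (100/200/300) and port-channel number (44) below are placeholders, not values mandated by this solution; substitute the IDs planned for your environment.

```
BRCD6740-RB21# configure terminal
BRCD6740-RB21(config)# interface Vlan 100
! VLAN 100: management traffic
BRCD6740-RB21(config-Vlan-100)# exit
BRCD6740-RB21(config)# interface Vlan 200
! VLAN 200: NFS storage (private network)
BRCD6740-RB21(config-Vlan-200)# exit
BRCD6740-RB21(config)# interface Vlan 300
! VLAN 300: vMotion (private network)
BRCD6740-RB21(config-Vlan-300)# exit
BRCD6740-RB21(config)# interface port-channel 44
BRCD6740-RB21(config-Port-channel-44)# switchport
BRCD6740-RB21(config-Port-channel-44)# switchport mode trunk
BRCD6740-RB21(config-Port-channel-44)# switchport trunk allowed vlan add 100,200,300
BRCD6740-RB21(config-Port-channel-44)# no shutdown
```

Because the VDX switches run in VCS logical chassis mode, this configuration is entered once and distributed across the fabric.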
VSPEX Configuration Guidelines Configure Brocade VDX 6740 Switch (File Storage) This section describes Brocade VDX switch configuration procedures for file storage provisioning with VMware. The Brocade VDX switches provide for infrastructure connectivity between ESXi servers, existing customer network, and NFS attached VNX storage as described in the following sections for this VSPEX solution.
VSPEX Configuration Guidelines All switches in an Ethernet fabric can be managed as if they were a single logical chassis. To the rest of the network, the fabric looks no different from any other Layer 2 switch (Logical Chassis feature). Brocade VDX switches are available in both port-side exhaust and port-side intake configurations. Depending on your hot-aisle/cold-aisle considerations, choose the appropriate airflow model for your deployment.
VSPEX Configuration Guidelines Step 1: Verify and Apply Brocade VDX NOS Licenses Before starting the switch configurations, make sure you have the required licenses for the VDX 6740 switches available. With NOS version 4.1.0 or later, the Brocade VCS Fabric license is built into the code, so you will only require port upgrade licenses, depending on the port density required in the setup.
VSPEX Configuration Guidelines As noted in the switch output, you may have to enable ports for the licenses to take effect. You can do this by issuing the no shutdown command on the interfaces you are using. The 40GbE ports can also be used in breakout mode as four 10GbE ports. For configuration details, refer to the Network OS Administration Guide, v4.1.0. C. Displaying Licenses on the switches You can display installed licenses with the show license command.
VSPEX Configuration Guidelines In Privileged EXEC mode, enter the vcs command with options to set the VCS ID, RBridge ID and enable logical chassis mode for the switch. After you execute the below command you are asked if you want to apply the default configuration and reboot the switch; answer ‘Yes’. sw0# vcs vcsid 1 rbridge-id 21 logical-chassis enable This operation will perform a VCS cluster mode transition for this local node with new parameter settings.
VSPEX Configuration Guidelines BRCD6740-21# show vcs Config Mode : Distributed VCS Mode : Logical Chassis VCS ID : 1 VCS GUID : 34f262b4-e64f-4a18-a986-a767d389803e Total Number of Nodes : 2 Rbridge-Id WWN Management IP VCS Status Fabric Status HostName -----------------------------------------------------------21 >10:00:00:27:F8:BB:94:18* 10.254.5.44 Online Online BRCD6740-21 22 10:00:00:27:F8:BB:7E:85 10.254.5.43 Online Online BRCD6740-22 …. ….
VSPEX Configuration Guidelines Figure 36. Port types Fabric ISLs and Trunks Brocade ISLs connect VDX switches in VCS mode. All ISL ports connected to the same neighbor VDX switch attempt to form a trunk. Trunk formation requires that all ports between the switches are set to the same speed and are part of the same port group.
VSPEX Configuration Guidelines Shown below are the port groups for the VDX 6740 and 6740T platforms. 1 Trunk Group 1 - 1/10 GbE SFP ports 1-16 4 Trunk group 4 - 1/10 GbE SFP ports 41-48 2 Trunk Group 2 - 1/10 GbE SFP ports 17-32 5 Trunk Group 3A - 40 GbE QSFP ports 49-50 3 Trunk Group 3 - 1/10 GbE SFP ports 33-40 6 Trunk Group 4A - 40 GbE QSFP ports 51-52 Figure 37.
VSPEX Configuration Guidelines BRCD6740-21# show fabric isl Rbridge-id: 21 #ISLs: 2 Src Src Nbr Nbr Nbr-WWN BW Trunk Nbr-Name Index Interface Index Interface -----------------------------------------------------------------0 Fo 21/0/49 0 Fo 22/0/49 10:00:00:27:F8:BB:7E:85 40G Yes "BRCD6740-22" 2 Fo 21/0/51 2 Fo 22/0/51 10:00:00:27:F8:BB:7E:85 40G Yes "BRCD6740-22" BRCD6740-21# show fabric trunk Rbridge-id: 21 Trunk Src Source Nbr Nbr Group Index Interface Index Interface Nbr-WWN ----------------------------
VSPEX Configuration Guidelines Configuring Port-channel 44 between Host and VDX switches Configuration on RB21 BRCD6740-RB21(config)# interface port-channel 44 BRCD6740-RB21(config-Port-channel-44)# mtu 9216 BRCD6740-RB21(config-Port-channel-44)# no shutdown BRCD6740-RB21(config-Port-channel-44)# interface TenGigabitEthernet 21/0/21 BRCD6740-RB21(conf-if-gi-21/0/21)# channel-group 44 mode on Note: The mode “on” configures the interface as a static vLAG.
VSPEX Configuration Guidelines Step 6: vCenter Integration for AMPP Brocade AMPP (Automatic Migration of Port Profiles) technology enhances network-side virtual machine migration by allowing VM migration across physical switches, switch ports, and collision domains. In traditional networks, port-migration tasks usually require manual configuration changes as VM migration across physical server and switches can result in non-symmetrical network policies.
VSPEX Configuration Guidelines Figure 40. VM Internal Network Properties VDX 6740 switches support VMware vCenter integration, which provides AMPP automation. NOS v4.1.0 supports vCenter 5.0. Automatically creates AMPP port-profiles from VM port groups. Automatically creates VLANs. Automatically creates association of VMs to port groups. Automatically configures port-profile modes on ports.
VSPEX Configuration Guidelines 4. VCS fabric will automatically configure corresponding objects including: o Port-profiles and VLAN creation o MAC address association to port-profiles o Port, LAGs, and vLAGs automatically put into profile mode based on ESX host connectivity. 5. VCS fabric is ready for VM movements.
VSPEX Configuration Guidelines Note: By default, the vCenter server only accepts https connection requests. Verify vCenter Integration Status BRCD6740-RB21# show vnetwork vcenter status vCenter Start Elapsed (sec) Status -----------------------------------------------------------------production 2014-03-09 06:12:43 17 In progress In progress indicates discovery is taking place. Success will show when it is complete.
VSPEX Configuration Guidelines Vmpolicy - Displays the following network policies on the Brocade VDX switch: associated media access control (MAC) address, virtual machine, (dv) port group, and the associated port profile. vms - Displays discovered virtual machines (VMs). vss - Displays discovered standard virtual switches.
VSPEX Configuration Guidelines BRCD6740-RB21# show port-profile status associated Port-Profile PPID Activated Associated MAC Interface auto-dvPortGroup_4_0 4 Yes 0050.567e.98b0 None auto-dvPortGroup_vlag 5 Yes 0050.5678.eaed None auto-for_nfs 6 Yes 0050.5673.
VSPEX Configuration Guidelines BRCD6740-RB21# configure terminal BRCD6740-RB21(config)# interface TenGigabitEthernet 21/0/24 BRCD6740-RB21(conf-if-te-21/0/24)# description VNX-SPA-fxg-1-1 BRCD6740-RB21(conf-if-te-21/0/24)# channel-group 33 mode active type standard BRCD6740-RB21(conf-if-te-21/0/24)# lacp timeout long BRCD6740-RB21(conf-if-te-21/0/24)# no shutdown 3.
VSPEX Configuration Guidelines Step 8: Connecting the VCS Fabric to existing Infrastructure through Uplinks Brocade VDX 6740 switches can be uplinked to be accessible from customer’s existing network infrastructure. On VDX 6740 platforms, the user will need to use 40GbE or 10GbE uplinks. The uplink should be configured to match whether or not the customer’s network is using tagged or untagged traffic.
VSPEX Configuration Guidelines BRCD6740-RB21(config)# interface port-channel 4 BRCD6740-RB21(config-Port-channel-4)# switchport BRCD6740-RB21(config-Port-channel-4)# switchport mode trunk BRCD6740-RB21(config-Port-channel-4)# switchport trunk allowed vlan all BRCD6740-RB21(config-Port-channel-4)# no shutdown 2. Use the channel-group command to configure interfaces as members of a Port-Channel 4 to the infrastructure switches that interface to the core.
VSPEX Configuration Guidelines Step 9 Configure MTU and Jumbo Frames (for NFS) Brocade VDX Series switches support the transport of jumbo frames. This solution recommends an MTU setting at 9216 (Jumbo frames) for efficient NAS storage and migration traffic. Jumbo frames are enabled by default on the Brocade ISL trunks. However, to accommodate end-to-end jumbo frame support on the network for the edge systems, this feature can be enabled under the vLAG interface.
VSPEX Configuration Guidelines Note: Although this feature was created primarily to benefit Network Attached Storage (NAS) devices, and the associated commands use the term NAS, there is no strict requirement that these nodes be actual NAS devices; Auto QoS will prioritize the traffic for any set of specified IP addresses. There are four steps to enabling and configuring Auto QoS for NAS: 1. Enable Auto QoS. 2. Set the Auto QoS CoS value. 3. Set the Auto QoS DSCP value. 4.
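A sketch of these configuration steps on a VDX switch might look like the following. The keywords are drawn from typical NOS 4.1 Auto QoS for NAS syntax, and the example values (CoS 4, DSCP 46, subnet 192.168.50.0/24, VLAN 200) are placeholders only; verify the exact commands against the Network OS Administration Guide for your release.

```
BRCD6740-RB21# configure terminal
! Step 1: enable Auto QoS for NAS
BRCD6740-RB21(config)# nas auto-qos
! Steps 2 and 3: set the CoS and DSCP values applied to NAS traffic
BRCD6740-RB21(config-nas-auto-qos)# set cos 4
BRCD6740-RB21(config-nas-auto-qos)# set dscp 46
BRCD6740-RB21(config-nas-auto-qos)# exit
! Identify the NAS server (Data Mover) IP addresses to be prioritized
BRCD6740-RB21(config)# nas server-ip 192.168.50.0/24 vlan 200
```

Any host sending traffic to or from the specified addresses will have that traffic prioritized, whether or not the endpoint is an actual NAS device.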
VSPEX Configuration Guidelines Configure Brocade 6510 Switch storage network (Block Storage) The following procedure deploys the Brocade 6510 Fibre Channel (FC) switches in the EMC® VSPEX™ with Brocade Networking Solutions for END-USER COMPUTING VMware Horizon View 5.3 and VMware vSphere for up to 2,000 Virtual Desktops solution with a block storage network. The Brocade 6510 FC switches provide infrastructure connectivity between servers and attached VNX storage of the VSPEX solution.
VSPEX Configuration Guidelines All Brocade Fibre Channel Switches have factory defaults listed in Table 27. Table 27. Brocade switch default settings Setting Factory default Factory Default MGMT IP: 10.77.77.77 Factory Default Subnet: 255.0.0.0 Factory Default Gateway: 0.0.0.
VSPEX Configuration Guidelines Step 1: Initial Switch Configuration Configure Hyper Terminal 1. Connect the serial cable to the serial port on the switch and to an RS-232 serial port on the workstation. 2. Open a terminal emulator application (such as HyperTerminal on a PC) and configure the application as shown in Table 29.
VSPEX Configuration Guidelines SW6510:admin> ipaddrset Ethernet IP Address [10.77.77.77]:10.18.226.172 Ethernet Subnetmask [255.255.255.0]:255.255.255.0 Gateway IP Address [0.0.0.0]:10.18.226.1 DHCP [Off]: off If you are going to use an IPv6 address, enter the network information in semicolon-separated notation as a standalone command. SW6510:admin> ipaddrset -ipv6 --add 1080::8:800:200C:417A/64 IP address is being changed...
VSPEX Configuration Guidelines Since Insistent Domain ID Mode is enabled, please ensure that switches in fabric do not have duplicate domain IDs configured, otherwise this may cause switch to segment, if Insistent domain ID is not obtained when fabric re-configures. BRCD-FC-6510:FID128:admin> switchenable Set Switch Name SW6510:FID128:admin> switchname BRCD-FC-6510 Committing configuration... Done.
VSPEX Configuration Guidelines Time Zone You can set the time zone for the switch by name. You can also set country, city or time zone parameters. BRCD-FC-6510:FID128:admin> tstimezone --interactive Please identify a location so that time zone rules can be set correctly. Please select a continent or ocean.
VSPEX Configuration Guidelines Please select one of the following time zone regions.
VSPEX Configuration Guidelines Setting the date 1. Log into the switch using the default password, which is password. 2. Enter the date command, using the following syntax (the double quotation marks are required): Syntax: date "mmddHHMMyy" The values are: mm is the month; valid values are 01 through 12. dd is the date; valid values are 01 through 31. HH is the hour; valid values are 00 through 23. MM is minutes; valid values are 00 through 59.
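Putting the syntax above together, a date command setting the switch clock to 9:19 p.m. on August 14, 2013 would look like this (the timestamp is illustrative; the output format shown is typical of Fabric OS):

```
BRCD-FC-6510:FID128:admin> date "0814211913"
Wed Aug 14 21:19:00 UTC 2013
```

Here mm=08, dd=14, HH=21, MM=19, and yy=13, matching the "mmddHHMMyy" field order described above.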
VSPEX Configuration Guidelines Verify Switch Component Status BRCD-FC-6510:FID128:admin> switchstatusshow Switch Health Report Report time: 08/14/2013 09:19:56 PM Switch Name: BRCD-FC-6510 IP address: 10.18.226.
VSPEX Configuration Guidelines Step 3: FC Zoning Configuration Zone Objects A zone object is any device in a zone, such as: Physical port number or port index on the switch Node World Wide Name (N-WWN) Port World Wide Name (P-WWN) Zone Schemes You can establish a zone by identifying zone objects using one or more of the following zoning schemes: Domain,Index - All members are specified by a Domain ID and Port Number pair, a Domain,Index number pair, or aliases.
VSPEX Configuration Guidelines Permanent Port Name: 50:06:01:6c:36:60:07:c3 Port Index: 10 Share Area: No Device Shared in Other AD: No Redirect: No Partial: No The Local Name Server has 2 entries } Create Alias SW6510:FID128:admin> alicreate error: Usage: alicreate "arg1", "arg2" SW6510:FID128:admin> alicreate "ESX_Host_HBA1_P0","10:00:00:05:33:64:d6:35" SW6510:FID128:admin> alicreate "VNX_SPA_P0","50:06:01:60:b6:60:07:c3" Create Zone SW6510:FID128:admin> zonecreate error: Usage: zonecreate "arg1", "arg2
VSPEX Configuration Guidelines Verify Zone Configuration SW6510:FID128:admin> cfgshow Defined configuration: cfg: vspex ESX_Host_A zone: ESX_Host_A ESX_Host_HBA1_P0; VNX_SPA_P0 alias: ESX_Host_HBA1_P0 10:00:00:05:33:64:d6:35 alias: VNX_SPA_P0 50:06:01:60:b6:60:07:c3 Effective configuration: cfg: vspex zone: ESX_Host_A 10:00:00:05:33:64:d6:35 50:06:01:60:b6:60:07:c3 SW6510:FID128:admin> cfgactvshow Effective configuration: cfg: vspex zone: ESX_Host_A 10:00:00:05:33:64:d6:35 50:06:01:60:b6:60:07:c3 Follow t
VSPEX Configuration Guidelines Prepare and configure the storage array VNX configuration This section describes how to configure the VNX storage array. In this solution, the VNX series provides NFS or VMware Virtual Machine File System (VMFS) data storage for VMware hosts. Table 30 shows the tasks for the storage configuration. Table 30.
VSPEX Configuration Guidelines Set up the initial VNX configuration After completing the initial VNX setup, configure key information about the existing environment so that the storage array can communicate.
VSPEX Configuration Guidelines desktops). Configure each LUN from the pool to present to the vSphere servers as four VMFS datastores. a. Go to Storage > LUNs. b. In the dialog box, click Create. c. Select the Pool created in Step 1. You will provision LUNs after this operation. 3. Configure a storage group to allow vSphere servers to access the newly created LUNs. a. Go to Hosts > Storage Groups. b. Create a new storage group. c. Select LUNs and ESXi hosts to add to the storage group.
VSPEX Configuration Guidelines g. Choose the 10 LUNs you just created. They appear in the Selected LUNs pane. h. Select A new storage pool for file is ready or manually rescan. i. Click Storage > Storage Pool for File > Rescan Storage System to create multiple file systems. Note: EMC Performance Engineering best practice recommends that you create approximately 1 LUN for every 4 drives in the storage pool and that you create LUNs in even multiples of 10.
VSPEX Configuration Guidelines Figure 43. Set nthread parameter Fast Cache configuration To configure FAST Cache on the storage pool(s) for this solution complete the following steps. 1. Configure flash drives as FAST Cache: a. To create FAST Cache, from the Unisphere dashboard, click Properties or in the left pane of the Unisphere window, select Manage Cache. b. In the Storage System Properties dialog box, shown in Figure 44, select FAST Cache to view FAST Cache information. Figure 44.
VSPEX Configuration Guidelines 2. Click Create to open the Create FAST Cache dialog box, shown in Figure 45. The RAID Type field is displayed as RAID 1 when the FAST Cache has been created. The number of flash drives can also be chosen in the screen. The bottom portion of the screen shows the flash drives that will be used for creating FAST Cache. You can choose the drives manually by selecting the Manual option. 3.
VSPEX Configuration Guidelines Figure 46. Create Storage Pool dialog box Advanced tab If the storage pool has already been created, you can use the Advanced tab in the Storage Pool Properties dialog box to configure FAST Cache as shown in Figure 47. Figure 47. Storage Pool Properties dialog box Advanced tab Note: The FAST Cache feature on the VNX series array does not cause an instantaneous performance improvement.
VSPEX Configuration Guidelines Figure 26 on page 86 depicts the target user data storage layout for 1,000 virtual desktops. Figure 28 on page 89 depicts the target user data storage layout for 2,000 virtual desktops. 2. Provision ten 1 TB (for 500 virtual desktops), 1.5 TB (for 1,000 virtual desktops), or 3 TB (for 2,000 virtual desktops) LUNs each from the pool to present to the Data Mover as dvols that belong to a system-defined NAS pool. 3.
VSPEX Configuration Guidelines Figure 49. Manage Auto-Tiering Window From this status window, you can control the Data Relocation Rate. The default rate is set to Medium to avoid significantly affecting host I/O. Note: FAST VP is a completely automated tool, and you can schedule relocations to occur automatically. EMC recommends that relocations be scheduled during off-hours to minimize any potential performance impact. Configure FAST VP at the LUN level.
Figure 50. LUN Properties window The Tier Details section displays the current distribution of slices within the LUN. Tiering policy can be selected at the LUN level from the Tiering Policy list.
Install and configure vSphere hosts Overview This section provides information about the installation and configuration of vSphere hosts and infrastructure servers required to support the architecture. Table 31 describes the tasks to be completed. Table 31. Tasks for server installation Install vSphere Task Description Reference Install vSphere Install the vSphere hypervisor on the physical servers deployed for the solution.
Configure vSphere networking

During the installation of VMware vSphere, a standard virtual switch (vSwitch) is created. By default, vSphere chooses only one physical NIC as a vSwitch uplink. To maintain redundancy and meet bandwidth requirements, configure an additional NIC, either by using the vSphere console or by connecting to the vSphere host from the vSphere Client.
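As a minimal sketch, the additional uplink can also be added from the vSphere host's ESXi Shell. The vSwitch and adapter names below (vSwitch0, vmnic1) are examples only; substitute the names in use on your hosts.

```shell
# List the physical NICs available on the host
esxcli network nic list

# Add a second uplink to the default standard vSwitch for redundancy
# (vSwitch0 and vmnic1 are example names)
esxcli network vswitch standard uplink add --uplink-name=vmnic1 --vswitch-name=vSwitch0

# Confirm the vSwitch now shows both uplinks
esxcli network vswitch standard list
```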
To enable jumbo frames on the VNX: 1. Navigate to Unisphere > Settings > Network > Settings for File. 2. Select the appropriate network interface under the Interfaces tab. 3. Select Properties. 4. Set the MTU size to 9,000. 5. Click OK to apply the changes. Jumbo frames might also need to be enabled on each network switch. Consult your switch configuration guide for instructions.
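Jumbo frames must be enabled end to end, including on the vSphere hosts. A hedged sketch using the ESXi command line follows; vSwitch0 and vmk1 are example names, so first verify which vSwitch and VMkernel port carry NFS traffic in your environment.

```shell
# Raise the MTU on the standard vSwitch carrying storage traffic
esxcli network vswitch standard set --vswitch-name=vSwitch0 --mtu=9000

# Raise the MTU on the VMkernel interface used for NFS
esxcli network ip interface set --interface-name=vmk1 --mtu=9000

# Verify: list VMkernel interfaces and their MTU values
esxcli network ip interface list
```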
Plan virtual machine memory allocations

Server capacity is required for two purposes in the solution:
To support the new virtualized desktop infrastructure
To support the required infrastructure services such as authentication/authorization, DNS, and database

For information on minimum infrastructure services hosting requirements, refer to Table 6.
Virtual machine memory concepts

Figure 51 shows the memory settings parameters in the virtual machine, including:
Configured memory—Physical memory allocated to the virtual machine at the time of creation
Reserved memory—Memory that is guaranteed to the virtual machine
Touched memory—Memory that is active or in use by the virtual machine
Swappable—Memory that can be de-allocated from the virtual machine if the host is under memory pressure from other virtual machines
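The desktop memory budget scales linearly with desktop count under the 2 GB-per-desktop guideline used throughout this solution. A small sketch of the arithmetic (infrastructure-service and hypervisor overhead are not included here):

```shell
# Desktop RAM required at each scale point, per the 2 GB/desktop guideline
for desktops in 500 1000 2000; do
  total_gb=$(( desktops * 2 ))   # 2 GB of RAM per virtual desktop
  echo "${desktops} desktops -> ${total_gb} GB of desktop RAM"
done
```

At the 1,000-desktop scale point this yields 2,000 GB, which matches the "minimum of 2 TB RAM" figure in the bill of materials.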
Install and configure SQL Server database

Overview
This section and Table 32 describe how to set up and configure a Microsoft SQL Server database for the solution. When the steps in this section have been completed, Microsoft SQL Server will be installed on a virtual machine, with the databases required by VMware vCenter, VMware Update Manager, VMware Horizon View, and VMware View Composer configured for use. Table 32.
Task Description Reference Configure the VMware Horizon View and View Composer database permissions Configure the database server with appropriate permissions for the VMware Horizon View and VMware Horizon View Composer databases. VMware Horizon View 5.3 Installation Configure VMware vCenter database permissions Configure the database server with appropriate permissions for the VMware vCenter database.
Note: For high availability, SQL Server can be installed in a Microsoft failover cluster or on a virtual machine protected by VMHA clustering. Do not combine these technologies.

Configure the database for VMware vCenter
To use VMware vCenter in this solution, create a database for the service to use. The requirements and steps to configure the vCenter Server database correctly are covered in Preparing vCenter Server Databases.
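As an illustrative sketch only, the empty vCenter database can be created with the sqlcmd utility. The server and database names below (SQL01, VIM_VCDB) are invented placeholders; follow your organization's naming standards and the referenced VMware documentation for the full set of required options and permissions.

```shell
# Create an empty database for vCenter Server on the SQL Server instance
# ("SQL01" and "VIM_VCDB" are placeholder names)
sqlcmd -S SQL01 -Q "CREATE DATABASE VIM_VCDB"

# Verify the database now exists
sqlcmd -S SQL01 -Q "SELECT name FROM sys.databases WHERE name = 'VIM_VCDB'"
```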
Configure the VMware Horizon View and View Composer database permissions
At this point, your database administrator must create user accounts that will be used for the View Manager and View Composer databases and assign the appropriate permissions. It is a best practice to create individual login accounts for each service accessing a database on SQL Server. Consult your database administrator for your organization's policy.
VMware vCenter Server Deployment Overview This section provides information on how to configure VMware vCenter. Table 33 describes the tasks to be completed. Table 33. Tasks for vCenter configuration Task Description Reference Create the vCenter host virtual machine Create a virtual machine for the VMware vCenter Server. vSphere Virtual Machine Administration Install vCenter guest OS Install Windows Server 2008 R2 Standard Edition on the vCenter host virtual machine.
Task Description Reference Install the vCenter Update Manager plug-in Install the vCenter Update Manager plug-in on the administration console. Installing and Administering VMware vSphere Update Manager vStorage APIs for Array Integration (VAAI) plug-in Using VMware Update Manager, deploy the vStorage APIs for Array Integration (VAAI) plug-in to all vSphere hosts. EMC VNX VAAI NFS Plug-in – Installation HOWTO video available on www.youtube.
Create the vCenter host virtual machine If the VMware vCenter Server is to be deployed as a virtual machine on a vSphere server installed as part of this solution, connect directly to an Infrastructure vSphere server using the vSphere Client. Create a virtual machine on the vSphere server with the guest OS configuration using the infrastructure server datastore presented from the storage array.
Deploy PowerPath/VE (FC variant)
EMC PowerPath is host-based software that provides automated data path management and load-balancing capabilities for heterogeneous server, network, and storage deployments in physical and virtual environments. PowerPath uses multiple I/O data paths to share the workload, and automated load balancing to ensure the efficient use of data paths. The PowerPath/VE plug-in is installed using the vSphere Update Manager.
Table 34. Tasks for VMware Horizon View Connection Server setup Task Description Reference Create virtual machines for VMware View Connection Servers Create two virtual machines in vSphere Client. These virtual machines will be used as VMware View Connection Servers. VMware Horizon View 5.3 Installation Install guest OS for VMware View Connection Servers Install Windows Server 2008 R2 guest OS.
Install the VMware Horizon View Connection Server Install the View Connection Server software using the instructions from VMware Horizon View 5.3 Installation. Select Standard when prompted for the View Connection Server type. Configure the View Event Log Database connection Configure the VMware Horizon View event log database connection using the database server name, database name, and database log in credentials. Review the VMware Horizon View 5.
Complete the following steps to prepare the master virtual machine: 1. Using the VMware vSphere Web Client, create a virtual machine using the VMware version 9 hardware specification. You cannot create version 9 virtual machines with the software client; you must use the web client. 2. Install Windows 7 guest OS. 3. Install appropriate integration tools such as VMware Tools. 4.
Configure View PCoIP Group Policies You control View PCoIP protocol settings using Active Directory Group Policies that are applied to the OU containing the VMware View Connection Servers. The View Group Policy templates are located in the \Program Files\VMware\VMware View\Server\extras\GroupPolicyFiles directory on the View Connection Server. You should use the group policy template pcoip.
Set up VMware vShield Endpoint Overview This section provides information on how to set up and configure the components of vShield Endpoint. Table 35 describes the tasks to be completed. Table 35. Tasks required to install and configure vShield Endpoint Task Description Reference Verify desktop vShield Endpoint driver installation Verify that the vShield Endpoint driver component of VMware Tools has been installed on the virtual desktop master image.
Verify desktop vShield Endpoint driver installation
The vShield Endpoint driver is a subcomponent of the VMware Tools software package that is installed on the virtual desktop master image. The driver is installed using one of two methods:
Select the Complete option during VMware Tools installation.
Select the Custom option during VMware Tools installation. From the VMware Device Drivers list, select VMCI Driver, and then select vShield Driver.
Set Up VMware vCenter Operations Manager for View Overview This section provides information on how to set up and configure VMware vCOps for View. Table 36 describes the tasks that must be completed. Table 36. Tasks required to install and configure vCOps Task Description Create vSphere IP Pool for vCOps Create an IP pool with two available IPs. Deploy vCOps vSphere Application Services (vApp) Deploy and configure the vCOps vApp.
Task Description Import the vCOps for View PAK file Import the vCenter Operations Manager for View Adapter PAK file using the vCOps main web interface. Verify vCOps for View functionality Verify functionality of vCOps for View using the virtual desktop master image. Reference View Integration Guide Create vSphere IP Pool for vCOps vCOps requires two IP addresses for use by the vCOps analytics and user interface (UI) virtual machines.
Create the virtual machine for the vCOps for View Adapter server The vCOps for View Adapter server is a Windows Server 2008 R2 computer that gathers information from several sources related to View performance. The server is a required component of the vCOps for View platform. The specifications for the server vary based on the number of desktops being monitored.
Chapter 6 Validating the Solution

This chapter presents the following topics:
Overview
Post-install checklist
Deploy and test a single virtual desktop
Verify the redundancy of the solution components
Provision remaining virtual desktops
Overview This chapter provides a list of items that you should review after configuring the solution. The goal of this chapter is to verify the configuration and functionality of specific aspects of the solution and ensure that the configuration supports core availability requirements. Table 37 describes the tasks to be completed. Table 37.
Post-install checklist The following configuration items are critical to the functionality of the solution and should be verified prior to deployment into production. On each vSphere server used as part of this solution, verify that: The vSwitches hosting the client VLANs are configured with sufficient ports to accommodate the maximum number of virtual machines they can host.
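A quick way to spot-check these items on each host is the ESXi command line. This is a hedged sketch only; output formats vary by vSphere release, so confirm the fields against your release's documentation.

```shell
# Show each standard vSwitch with its configured ports, uplinks, and MTU
esxcli network vswitch standard list

# Show VMkernel interfaces (management, vMotion, NFS) and their MTU values
esxcli network ip interface list

# Show physical NIC link state and speed to confirm redundant uplinks are up
esxcli network nic list
```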
3. From the control station $ prompt, execute the command server_cpu movername -reboot, where movername is the name of the Data Mover. 4. To verify that network redundancy features function as expected, disable each of the redundant switching infrastructures in turn. While each of the switching infrastructures is disabled, verify that all the components of the solution maintain connectivity to each other and to any existing client infrastructure as well. 5.
Figure 52. View Composer Disks page 8. Select the Select separate datastores for replica and OS disk option. 9. Select the appropriate parent virtual machine, virtual machine snapshot, folder, vSphere hosts or clusters, vSphere resource pool, and linked clone and replica disk datastores. 10. Enable host caching for the desktop pool and specify cache regeneration blackout times. 11. Specify image customization options as required. 12.
Appendix A Bills of Materials

This appendix presents the following topics:
Bill of Materials for 500 virtual desktops
Bill of Materials for 1,000 virtual desktops
Bill of Materials for 2,000 virtual desktops
Bill of Materials for 500 virtual desktops

VMware vSphere servers (solution for 500 virtual machines):
CPU: 1 x vCPU per virtual machine; 8 x vCPUs per physical core; 500 x vCPUs; minimum of 63 physical cores
Memory: 2 GB RAM per desktop; 2 GB RAM reservation per vSphere host
Network – FC option: 2 x 4/8 GB FC HBAs per server
Network – 1 Gb option: 6 x 1 GbE NICs per server
Note: To implement the VMware vSphere High Availability (HA) feature and to meet the listed minimum require
EMC VNX series storage array (solution for 500 virtual machines), common:
EMC VNX5400
2 x Data Movers (active/standby)
15 x 300 GB 15k rpm 3.5-inch SAS drives – core desktops
3 x 100 GB 3.5-inch flash drives – FAST Cache
9 x 2 TB 3.
Bill of Materials for 1,000 virtual desktops

VMware vSphere servers (solution for 1,000 virtual machines):
CPU: 1 x vCPU per virtual machine; 8 x vCPUs per physical core; 1,000 x vCPUs; minimum of 125 physical cores
Memory: 2 GB RAM per desktop; minimum of 2 TB RAM
Network – FC option: 2 x 4/8 GB FC HBAs per server
Network – 1 Gb option: 6 x 1 GbE NICs per server
Network – 10 Gb option: 3 x 10 GbE NICs per blade chassis
Note: To implement the VMware vSphere High Availability (HA) f
EMC Next-Generation Backup (Avamar), solution for 1,000 virtual machines:
1 x Gen4 utility node
1 x Gen4 3.9 TB spare node
3 x Gen4 3.9 TB storage nodes

EMC VNX series storage array, common:
EMC VNX5400
2 x Data Movers (active/standby)
21 x 300 GB 15k rpm 3.5-inch SAS drives – core desktops
3 x 100 GB 3.5-inch flash drives – FAST Cache
17 x 2 TB 3.
Bill of Materials for 2,000 virtual desktops

VMware vSphere servers (solution for 2,000 virtual machines):
CPU: 1 x vCPU per virtual machine; 8 x vCPUs per physical core; 2,000 x vCPUs; minimum of 250 physical cores
Memory: 2 GB RAM per desktop; minimum of 4 TB RAM
Network – FC option: 2 x 4/8 GB FC HBAs per server
Network – 1 Gb option: 6 x 1 GbE NICs per server
Network – 10 Gb option: 3 x 10 GbE NICs per blade chassis
Note: To implement VMware vSphere High Availability (HA) fun
EMC Next-Generation Backup (Avamar), solution for 2,000 virtual machines: see the Design and Implementation Guide: EMC Backup and Recovery Options for VSPEX End User Computing for VMware Horizon View, available on EMC Online Support.

EMC VNX series storage array, common:
EMC VNX5600
3 x Data Movers (active/standby)
36 x 300 GB 15k rpm 3.5-inch SAS drives – core desktops
5 x 100 GB 3.5-inch flash drives – FAST Cache
34 x 2 TB 3.
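The CPU minimums quoted in these bills of materials follow directly from the stated consolidation ratio of 8 vCPUs per physical core with 1 vCPU per desktop. A quick check of the arithmetic:

```shell
# Minimum physical cores = ceiling(desktops / 8), since each desktop gets
# 1 vCPU and up to 8 vCPUs share one physical core
for desktops in 500 1000 2000; do
  cores=$(( (desktops + 7) / 8 ))   # integer ceiling division
  echo "${desktops} desktops -> minimum ${cores} physical cores"
done
```

This reproduces the 63, 125, and 250 physical-core minimums listed for the three scale points.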
Appendix B Customer Configuration Data Sheet

This appendix presents the following topic:
Overview of customer configuration data sheets
Overview of customer configuration data sheets Before you start the configuration, gather customer-specific network and host configuration information. The following tables provide information on assembling the required network and host address, numbering, and naming information. This worksheet can also be used as a “leave behind” document for future reference. The VNX File and Unified Worksheet should be cross-referenced to confirm customer information. Table 38.
Table 39. vSphere Server information
Server Name | Purpose | Primary IP | Private Net (storage) addresses: VMkernel IP, vMotion IP
vSphere Host 1
vSphere Host 2
…

Table 40. Array information
Array name | Admin account | Management IP | Storage pool name | Datastore name | NFS Server IP

Table 41. Brocade Network infrastructure information
Name | Purpose | IP | Subnet Mask | Default Gateway
Ethernet switch 1
Ethernet switch 2
…
Table 42. VLAN information
Name | Network Purpose | VLAN ID | Allowed Subnets
Virtual Machine Networking
vSphere Management
NFS storage network
vMotion

Table 43.
Appendix C References

This appendix presents the following topic:
References
References

EMC documentation
The following documents, located on EMC Online Support, provide additional and relevant information. Access to these documents depends on your login credentials. If you do not have access to a document, contact your EMC representative.
EMC Avamar 7.0 Administrator Guide
EMC Avamar 7.
VMware View Persona Management, and VMware View Composer 3.0 Proven Solutions Guide EMC Infrastructure for VMware View 5.1 — EMC VNX Series (NFS), VMware vSphere 5.0, VMware View 5.1, VMware View Storage Accelerator, VMware View Persona Management, and VMware View Composer 3.0 Reference Architecture EMC Infrastructure for VMware View 5.1 — EMC VNX Series (NFS), VMware vSphere 5.0, VMware View 5.1, VMware View Storage Accelerator, VMware View Persona Management, and VMware View Composer 3.
Brocade documentation
Brocade VDX switch and VCS fabric documentation is available at the following locations:
Brocade VDX 6740/6740T/6740T-1G Switch Data Sheet
http://www.brocade.com/downloads/documents/data_sheets/product_data_sheets/vdx-6740-ds.pdf
Brocade VDX 6740 Hardware Reference Manual
http://www.brocade.com/downloads/documents/product_manuals/B_VDX/VDX6740_VDX6740T_HardwareManual.
Brocade Fabric OS (FOS) guides
Fabric OS Administrator’s Guide, Supporting Fabric OS v7.2.0
http://www.brocade.com/downloads/documents/product_manuals/B_SAN/FOS_AdminGd_v720.pdf
Fabric OS Command Reference, Supporting Fabric OS v7.2.0
http://www.brocade.com/downloads/documents/product_manuals/B_SAN/FOS_CmdRef_v720.pdf
Brocade 6510 QuickStart Guide
http://www.brocade.com/downloads/documents/product_manuals/B_SAN/B6510_QuickStartGuide.
vShield Quick Start Guide
vSphere Resource Management
vSphere Storage APIs for Array Integration (VAAI) Plug-in
vSphere Installation and Setup Guide
vSphere Networking
vSphere Storage Guide
vSphere Virtual Machine Administration
vSphere Virtual Machine Management

For documentation on Microsoft SQL Server, refer to the following Microsoft websites:
www.microsoft.com
technet.microsoft.com
msdn.microsoft.com
Appendix D About VSPEX

This appendix presents the following topic:
About VSPEX
About VSPEX EMC has joined forces with the industry’s leading providers of IT infrastructure to create a complete virtualization solution that accelerates deployment of cloud infrastructure. Built with best-in-class technologies, VSPEX enables faster deployment, more simplicity, greater choice, higher efficiency, and lower risk.