•They create separate LUNs for each data store and allow VMs to access data directly through N_Port ID Virtualization (NPIV). The advantage of this approach is that each VM accesses its data more or less directly through a virtual HBA. The disadvantage is that there are many more LUNs to provision and manage; the sketch below illustrates that per-VM overhead.
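
The bookkeeping this approach creates can be made concrete with a short Python sketch. It is illustrative only: the WWPN prefix and VM names are invented, and real provisioning is done through hypervisor and array tools, not a script like this.

    from dataclasses import dataclass

    @dataclass
    class VmStoragePlan:
        vm_name: str
        virtual_wwpn: str  # NPIV virtual port WWN presented by the VM's virtual HBA
        lun_id: int        # dedicated LUN masked to that virtual WWPN

    def build_npiv_plan(vm_names, wwpn_prefix="c0:50:76:00:00:00"):
        """Assign each VM its own virtual WWPN and a dedicated LUN."""
        plans = []
        for lun_id, name in enumerate(vm_names):
            wwpn = f"{wwpn_prefix}:{lun_id >> 8:02x}:{lun_id & 0xff:02x}"
            plans.append(VmStoragePlan(name, wwpn, lun_id))
        return plans

    for plan in build_npiv_plan(["vm-web-01", "vm-db-01", "vm-app-01"]):
        print(f"{plan.vm_name}: zone WWPN {plan.virtual_wwpn}, mask LUN {plan.lun_id}")

Every VM adds one more WWPN to zone and one more LUN to provision, mask, and monitor, which is precisely the management cost this approach trades for direct access.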
Most VMs today do very little I/O: typically no more than a few MB/sec per VM, at correspondingly low IOPS. This allows many VMs to be placed on a single hypervisor platform without regard to the amount of I/O that they generate. Storage access is rarely a significant factor when deciding whether to convert a physical server to a virtual one; memory usage and IP network usage typically matter more.
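
A back-of-the-envelope check shows why. The figures below are assumptions for illustration: an 8 Gbps Fibre Channel port carries roughly 800 MB/sec of payload, and the "few MB/sec per VM" above is taken as 3 MB/sec.

    def vms_per_hypervisor(port_mb_s, per_vm_mb_s, headroom=0.5):
        """How many typical VMs fit on one hypervisor before its storage
        port becomes the limit, keeping a fraction of capacity in reserve."""
        usable = port_mb_s * (1.0 - headroom)
        return int(usable // per_vm_mb_s)

    # 8 Gbps FC is roughly 800 MB/sec of payload; assume 3 MB/sec per VM
    print(vms_per_hypervisor(port_mb_s=800, per_vm_mb_s=3))  # -> 133

Even with half the port held in reserve, well over a hundred such VMs fit before storage becomes the constraint, so memory and the IP network fill up first.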
The main storage-related issue when deploying virtualized PC applications is VM migration. If VMs share a LUN and a VM is migrated from one hypervisor to another, the integrity of the LUN must be maintained, which means that both hypervisors must serialize access to the same LUN. Normally this is done through mechanisms such as SCSI reservations. The more often VMs migrate, the larger the potential serialization problem becomes. SCSI reservations can contribute to frame congestion and generally slow down VMs that are accessing the same LUN from several different hypervisor platforms.
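
The serialization effect can be modeled with a toy Python sketch. This is not real SCSI behavior; a shared lock merely stands in for the reservation, to show how hosts sharing one LUN spend time waiting on each other instead of doing I/O.

    import threading
    import time

    lun_reservation = threading.Lock()  # stand-in for a SCSI reservation on one shared LUN
    wait_lock = threading.Lock()
    wait_total = 0.0

    def hypervisor_io(operations):
        global wait_total
        for _ in range(operations):
            t0 = time.monotonic()
            with lun_reservation:        # "reserve" the LUN
                waited = time.monotonic() - t0
                time.sleep(0.001)        # metadata update while reserved
            with wait_lock:
                wait_total += waited

    hosts = [threading.Thread(target=hypervisor_io, args=(50,)) for _ in range(4)]
    for h in hosts:
        h.start()
    for h in hosts:
        h.join()
    print(f"time spent waiting on the shared LUN: {wait_total:.3f}s")

Adding hosts that share the LUN increases the waiting; with dedicated LUNs the contention disappears.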
Design Guidelines
•If you are using shared LUNs, deploy VMs so as to minimize VM migrations.
•Use individual LUNs for any I/O-intensive applications such as SQL Server, Oracle databases, and
Microsoft Exchange.
Monitoring
•Use Advanced Performance Monitoring and Brocade Fabric Watch to alert you to excessive levels of SCSI reservations. These notifications can save you a lot of time by identifying VMs and hypervisors that are vying for access to the same LUN; a stand-in polling sketch follows this list.
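
Fabric Watch thresholds are configured on the switch itself; the Python sketch below only illustrates the alerting logic those notifications automate, and get_reservation_conflicts() is a hypothetical collection function you would wire to your own statistics source.

    import time

    THRESHOLD = 100  # reservation conflicts per interval; tune to your baseline

    def get_reservation_conflicts(lun):
        """Hypothetical: return SCSI reservation conflicts seen on `lun`
        since the last poll (e.g., scraped from host or array statistics)."""
        raise NotImplementedError("wire up to your monitoring source")

    def watch(luns, interval_s=60):
        while True:
            for lun in luns:
                conflicts = get_reservation_conflicts(lun)
                if conflicts > THRESHOLD:
                    print(f"ALERT: {lun} saw {conflicts} reservation conflicts; "
                          "check which hypervisors share it")
            time.sleep(interval_s)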
Unix Virtualization
Virtualized Unix environments differ from virtualized Windows deployments in a few significant ways.
First, the Unix VMs and hypervisor platforms tend to be more carefully architected than equivalent Windows environments, because more mission-critical applications have traditionally run on Unix. Frequently the performance and resource capacity requirements of the applications are well understood because of their history of running on discrete platforms. Historical performance and capacity data will likely be available from the Unix performance management systems, allowing application architects and administrators to size the hypervisor platforms for organic growth and for headroom during peak processing periods.
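
If that historical data is available, sizing reduces to simple arithmetic. The sketch below shows one plausible method, with invented sample values: aggregate each VM's peak-period percentile rather than its average, then add growth headroom.

    def required_iops(vm_histories, percentile=0.95, growth=0.30):
        """Aggregate each VM's high-percentile IOPS and add organic-growth
        headroom to get the platform's required storage capability."""
        total = 0.0
        for samples in vm_histories.values():
            ranked = sorted(samples)
            idx = min(int(len(ranked) * percentile), len(ranked) - 1)
            total += ranked[idx]
        return int(total * (1.0 + growth))

    history = {                  # hourly IOPS samples, illustrative only
        "erp-db":  [900, 1200, 4800, 5200, 1100],
        "erp-app": [300, 350, 900, 1000, 320],
    }
    print(required_iops(history))  # -> 8060

Sizing to the mean instead of a peak percentile is what leaves a platform without headroom for peak processing periods.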
Second, VM mobility is not commonly used for workload management in Unix deployments; VMs are moved for maintenance or recovery reasons only. IBM, for example, explicitly limits VM movement to maintenance operations. Carefully architected hypervisor/application deployments contain a mix of I/O-intensive, memory-intensive, and processor-intensive workloads. Moving these workloads around disturbs that balance and can lead to performance problems. Problem determination also becomes more difficult once VM migrations have to be tracked.
Third, virtualized mission-critical Unix applications, such as large database engines, typically do much more block I/O than their Windows counterparts, both in volume and in transaction rates. Each hypervisor platform now produces the aggregate I/O of all those mission-critical applications. Backups, especially if they are host-based through backup clients, are also a serious architectural concern.
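
The backup concern is easy to quantify. Here is a worked example with assumed figures: ten 2 TB database VMs backed up through host-based clients inside a six-hour window.

    def backup_throughput_mb_s(total_dataset_gb, window_hours):
        """Sustained MB/sec the hypervisor must deliver to finish a
        host-based backup of all its VMs inside the window."""
        return (total_dataset_gb * 1024) / (window_hours * 3600)

    # Ten VMs at 2 TB each, backed up in a 6-hour window
    print(f"{backup_throughput_mb_s(10 * 2048, 6):.0f} MB/sec")  # -> 971

Roughly 970 MB/sec sustained is more than a single 8 Gbps FC port delivers, so the backup path has to be designed alongside the application I/O, not after it.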
Recent Changes
Two technical advances have profoundly changed storage deployments for mission-critical Unix applications: NPIV and storage virtualization.