Installation guide

The first step in installing the VMA drivers is to ensure that all required VMA packages are installed on the host
server before you install and configure the VMA driver.
The VMA Linux driver (vtms-linux-driver) must be installed on the host machine if the Memory Array is to be directly
attached to a Linux host.
The VMA Windows Storport driver must be installed on the host machine if the VMA is to be directly attached to a
Windows host.
The VMA arrays are pre-formatted at 65% of the available storage for improved write performance.
Slots capable of bi-directional x8 PCIe provide the best performance. Use the perf_test utility from the VMA software
installation to baseline the system, testing read bandwidth, write bandwidth, and a mix of reads and writes at the
block size your application is expected to use.
Enable asynchronous I/O on the server.
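As a rough stand-in for the kind of baseline perf_test produces (the utility's exact invocation is not documented here), a generic sequential-write check with dd looks like the following; the scratch path is an assumption and should point at a file system on the array:

```shell
# Generic write-bandwidth sketch using dd, NOT the VMA perf_test utility.
# Substitute a path on the array's file system for the scratch file.
SCRATCH=/tmp/vma_scratch.bin

# Write 100 MiB in 4KB blocks; dd's final status line reports bandwidth.
dd if=/dev/zero of="$SCRATCH" bs=4k count=25600 2>&1 | tail -n 1

rm -f "$SCRATCH"
```

Repeat with `if=$SCRATCH of=/dev/null` for a read pass, and vary `bs=` to match the application's block size.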
For Linux installations use the 2.6.32 kernel when possible, which provides:
- Improved NUMA support
- Improved IRQ management
- Improved processor/process affinity
- Flash I/O awareness
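A quick way to confirm the running kernel meets that minimum is a version comparison with `sort -V`; this is a generic sketch, not a VMA tool:

```shell
# Check whether the running kernel is at least 2.6.32.
REQUIRED=2.6.32
# Strip any distro suffix (e.g. "2.6.32-358.el6") before comparing.
RUNNING=$(uname -r | cut -d- -f1)

# sort -V orders version strings; if REQUIRED sorts first (or equal),
# the running kernel is new enough.
if [ "$(printf '%s\n' "$REQUIRED" "$RUNNING" | sort -V | head -n 1)" = "$REQUIRED" ]; then
    echo "kernel $RUNNING meets the $REQUIRED minimum"
else
    echo "kernel $RUNNING is older than $REQUIRED"
fi
```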
Flash-based memory arrays are designed for 4KB block access, or any multiple of 4KB. Smaller block sizes (for
example, 512 bytes) significantly reduce performance, particularly for writes. It is therefore important to verify that
the file system and operating system are 4KB aligned. Partitions can easily fall out of 4KB alignment because most
operating systems, when creating a file system, assume the legacy geometry of traditional drives; that is, 63 sectors
per track. Two utilities included in the VMA Utilities Package, vpartial and vring, help you identify these issues.
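The vpartial and vring utilities ship with the VMA package; as a generic stand-in, the same 4KB-alignment check can be expressed as a small shell function over a partition's start sector (the sector values below are illustrative assumptions):

```shell
# Return success if a partition starting at sector $1, with $2-byte
# logical sectors, falls on a 4KB boundary. Generic check, not a VMA tool.
is_4k_aligned() {
    start_sector=$1
    sector_bytes=$2
    [ $(( (start_sector * sector_bytes) % 4096 )) -eq 0 ]
}

# A 1MB-offset partition on 512-byte sectors is aligned...
is_4k_aligned 2048 512 && echo "sector 2048: aligned"
# ...but the legacy 63-sector offset is not.
is_4k_aligned 63 512 || echo "sector 63: misaligned"
```

On a live system the start sector can be read from `/sys/block/<disk>/<partition>/start`.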
For Linux installations, use the Parted software tool to create 4 to 16 partitions across each array.
Be sure to set the offset in Parted to 1MB to ensure a 4KB boundary for I/O:
- With 512-byte sectors, partitions start at sector 2048.
- With 4096-byte sectors, partitions start at sector 256.
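The two start sectors above are simply the same 1MB offset divided by the logical sector size; a short sketch (the device name /dev/vma0 is hypothetical):

```shell
# The 1MiB offset is a multiple of both 512 B and 4096 B, so the first
# partition lands on a 4KB boundary for either sector size.
OFFSET_BYTES=$((1024 * 1024))
echo "512B sectors:  start $((OFFSET_BYTES / 512))s"
echo "4096B sectors: start $((OFFSET_BYTES / 4096))s"

# Illustrative parted invocation (hypothetical device name):
# parted /dev/vma0 mklabel gpt
# parted /dev/vma0 mkpart primary 1MiB 25%
```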
For Windows, install Windows Server 2008 R2 SP1 Datacenter edition, which automatically determines alignment
when you create partitions.
Balance the workload evenly across all VMA Arrays.
Depending on your workload, set the allocation unit (AU) size to 1MB or 4MB. Some database installations offer
software mirroring; we recommend against enabling it, because the arrays are designed to deliver five 9s of high
availability without the I/O overhead of database mirroring or the 50% reduction in usable storage space.
Use dual 208V power cords and power supplies for optimal performance.
For applications that require high availability, using the operating system's active/passive high-availability software
is fully supported, though not required; it is a good match for the simplicity of the HP VMA configuration. Other
replication products are also fully supported with VMA arrays.
If installing Linux, HP recommends using the Linux kernel’s Huge Pages.
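A minimal sketch of sizing and reserving a huge-page pool follows; the 2 GiB pool size is an assumption, and the reservation itself must be applied as root:

```shell
# Size a pool of 2 MiB huge pages to cover a hypothetical 2 GiB target.
POOL_BYTES=$((2 * 1024 * 1024 * 1024))
PAGE_BYTES=$((2 * 1024 * 1024))
PAGES=$((POOL_BYTES / PAGE_BYTES))
echo "vm.nr_hugepages = $PAGES"

# Apply immediately (as root):  sysctl -w vm.nr_hugepages=$PAGES
# Verify the reservation:       grep HugePages_Total /proc/meminfo
```

Persisting the setting in /etc/sysctl.conf (or a sysctl.d fragment) keeps it across reboots.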
Test assumptions
These configurations are examples; there are many additional server/storage options which could also meet the
workload requirements.
Testing has not been performed to date on each of these exact configurations. The configurations are based on best
practices, benchmarks, extrapolation based on current test knowledge, and performance assumptions through
discussions with Oracle and HP experts.
No consideration at this point has been given for spares, recovery storage groups, clusters, etc.; nevertheless, all of
these options can be added to the base configurations.