Designing a High Performance Network File Server
Executive Summary
During a recent customer sales engagement, HP was asked to deliver an Integrity Superdome system
capable of serving huge amounts of file data to the customer’s production environment. The
customer’s application runs simultaneously on thousands of compute nodes. The compute nodes use
the Network File System (NFS) protocol running over a TCP/IP network to retrieve application data
from a central file server. These compute nodes require a sustained throughput rate of over 3
Gigabytes of file data per second.
After several rounds of benchmarking and tuning, HP delivered a 32-core Integrity Superdome system
capable of serving application data at a rate of over 3 Gigabytes per second to thousands of
NFS/TCP clients. This whitepaper describes the customer’s application requirements, the areas the
benchmark team focused on to achieve the desired results, and the final configuration of the Integrity
Superdome system used to meet the customer’s throughput requirements.
While this paper discusses a specific customer engagement involving an Integrity Superdome system
used as an NFS file server, most of the improvements made during the course of this effort were not
specific to NFS, nor were they specific to the Integrity Superdome hardware platform. Many of the
performance improvements made during this engagement could benefit almost any
application running on an HP-UX 11i v2 system.
Customer Application Requirements
This customer’s application runs on thousands of compute nodes distributed across eight physical
subnets. These nodes simultaneously access a shared set of very large data files, ranging in size from
60 Gigabytes to over 500 Gigabytes, residing on a central NFS server. In order to take full
advantage of these client systems’ computational capabilities, the proposed NFS server needs to
provide over 3 Gigabytes of file data to these clients every second.
Delivering over 3 Gigabytes of data per second requires the server to drive 32 Gigabit Ethernet
adapters at wire speed simultaneously. Since the client systems are physically distributed across eight
separate subnets, the server needs to use a trunking feature, such as Auto Port Aggregation¹, to allow
multiple Gigabit Ethernet adapters on the server to appear as a single IP instance to each subnet of
clients. The customer also required the data be delivered to these clients using the TCP/IP protocol,
as they do not allow the use of UDP for sensitive file data in their environment.
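The adapter count follows from simple bandwidth arithmetic. The sketch below uses round, assumed numbers for illustration (125 MB/s per Gigabit Ethernet link, before protocol overhead); it is not a measured figure from the engagement.

```python
# Back-of-the-envelope bandwidth check (round numbers, assumed for
# illustration; not measured figures from the engagement).

GIGE_PAYLOAD_MB_S = 125   # 1 Gbit/s ~= 125 Megabytes/s before protocol overhead
TARGET_MB_S = 3 * 1024    # customer requirement: over 3 Gigabytes/s
NUM_ADAPTERS = 32         # Gigabit Ethernet adapters in the server
NUM_SUBNETS = 8           # physical client subnets

aggregate_mb_s = NUM_ADAPTERS * GIGE_PAYLOAD_MB_S  # 4000 MB/s theoretical ceiling
links_per_trunk = NUM_ADAPTERS // NUM_SUBNETS      # 4 links aggregated per subnet
avg_utilization = TARGET_MB_S / aggregate_mb_s     # ~0.77 of the raw ceiling

# Ethernet, IP, TCP, and NFS headers consume part of every frame, so the
# usable payload per link is noticeably below 125 MB/s -- which is why all
# 32 adapters must run essentially at wire speed to sustain 3 GB/s.
print(aggregate_mb_s, links_per_trunk, round(avg_utilization, 2))
```

With four links aggregated per trunk, each of the eight subnets sees a single IP instance backed by roughly 500 MB/s of raw link capacity.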
The customer asked HP to provide an Integrity Superdome server capable of meeting these NFS
throughput requirements.
Note:
There was no emphasis placed on storage during this engagement because
this customer had a preexisting storage infrastructure for the Superdome to
leverage. Also, the customer realized that as the thousands of clients read
the data files, the application data would eventually be completely loaded
into the server’s buffer cache. For this reason, the Superdome was
configured with 512 Gigabytes of memory, with 90% of that memory
dedicated to the buffer cache.
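The sizing in the Note can be checked with simple arithmetic. The numbers below come from the Note itself; the mention of the dbc_max_pct kernel tunable is an assumption about the likely HP-UX mechanism, not something the paper states.

```python
# Buffer cache sizing from the Note: 90% of 512 GB of physical memory.
# (On HP-UX 11i, the dynamic buffer cache ceiling is typically expressed
# as a percentage of memory, e.g. via the dbc_max_pct kernel tunable --
# an assumption here, not stated in the paper.)

TOTAL_MEMORY_GB = 512
BUFFER_CACHE_PCT = 90

cache_gb = TOTAL_MEMORY_GB * BUFFER_CACHE_PCT / 100  # 460.8 GB of file cache

# Large enough that the bulk of the 60-500 GB data files, once read by the
# first clients, can be served to the remaining thousands of clients from
# memory rather than from storage.
print(cache_gb)
```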
¹ HP APA groups multiple physical network links into one logical interface with a single IP address.