Chapter 2 HPSS Planning
HPSS Installation Guide September 2002 141
Release 4.5, Revision 2
2.11.12 Cross Cell
Cross Cell trust should be established with the minimal reasonable set of cooperating partners; because trust is pairwise, the number of relationships grows with the square of the number of cells (the N-squared problem). Excessive numbers of Cross Cell connections may diminish security and may cause performance problems due to Wide Area Network delays. The communication paths between cooperating cells should be reliable.
Cross Cell Trust must exist to take advantage of the HPSS Federated Name Space facilities.
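The quadratic growth behind the N-squared problem can be illustrated with a short calculation (illustrative only; the pairwise nature of trust relationships is the only fact drawn from the text):

```python
# Number of pairwise Cross Cell trust relationships needed for a
# full mesh of N cooperating cells: one trust link per pair of cells.
def trust_links(n_cells: int) -> int:
    return n_cells * (n_cells - 1) // 2

# Growth is quadratic: doubling the number of cells roughly
# quadruples the number of relationships to establish and maintain.
for n in (2, 4, 8, 16):
    print(n, trust_links(n))
```

This is why the guide recommends keeping the set of cooperating partners minimal: each added cell increases the administrative and security burden by a number of new relationships equal to the cells already in the mesh.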
2.11.13 DFS
DFS performance for HPSS is dependent on a number of factors: fileset type (mirrored or archived), CPU performance, memory throughput rates, DFS client caching, etc. Mirrored filesets will perform at HPSS rates for name space changes. Name space accesses will perform at normal DFS rates. Archived filesets will typically perform close to DFS rates for both name space changes and accesses. For both mirrored and archived filesets, accesses and changes will perform at DFS rates when data is resident, but will be delayed if HPSS must stage the data onto the Episode disks.
When setting up an aggregate, it is suggested that the fragment size be set to 1024 and the block size be set to 8192. These are the defaults and have been tested much more thoroughly than other settings. An important factor to consider is that any file smaller than the block size currently cannot be purged from the Episode disk, and setting the block size larger than 8192 may cause space and resource problems on the disk. (This is a limitation of the Episode implementation of XDSM on which the HPSS/DFS code is implemented.)
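The sizing rules above can be captured in a small pre-deployment check. This is a sketch; the function names and warning messages are illustrative, and only the 1024/8192 defaults, the 8192 upper bound, and the purge restriction come from the text:

```python
# Sanity-check planned Episode aggregate parameters against the
# guidance above: fragment size 1024 and block size 8192 are the
# well-tested defaults, and files smaller than the block size
# cannot be purged from the Episode disk.

DEFAULT_FRAGMENT_SIZE = 1024
DEFAULT_BLOCK_SIZE = 8192

def check_aggregate(fragment_size: int, block_size: int) -> list:
    """Return a list of warnings for a planned aggregate configuration."""
    warnings = []
    if fragment_size != DEFAULT_FRAGMENT_SIZE:
        warnings.append("fragment size %d differs from the well-tested "
                        "default %d" % (fragment_size, DEFAULT_FRAGMENT_SIZE))
    if block_size > DEFAULT_BLOCK_SIZE:
        warnings.append("block size %d > %d may cause space and resource "
                        "problems on the disk" % (block_size, DEFAULT_BLOCK_SIZE))
    return warnings

def is_purgeable(file_size: int, block_size: int = DEFAULT_BLOCK_SIZE) -> bool:
    """Files smaller than the block size cannot be purged from Episode disk."""
    return file_size >= block_size

print(check_aggregate(1024, 8192))   # defaults: no warnings
print(is_purgeable(4096))            # small file: cannot be purged
```

A check like this is worth running when planning aggregates, since the purge restriction means a workload dominated by small files may retain far more Episode disk space than expected.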
During testing the biggest gains were realized by altering the DFS client caching. A large memory cache instead of a disk cache may improve performance dramatically if the client machine can spare memory for client caching buffers. For more information on DFS configuration for AIX, please refer to the DCE document “Distributed File Service Administration Guide and Reference”.
Since HPSS must read DFS anodes to determine which files to migrate or purge, it is suggested that aggregates be kept to a maximum size of 250,000 files and directories. This will allow the migration and purge algorithms to determine which files to process in a reasonable amount of time. With current Episode limitations on the amount of space allowed for anodes per aggregate and the design of the HPSS DFS code, the maximum number of anodes per HPSS-managed aggregate is around 1,000,000 files (2 GB / 2 KB per migrated file). When planning the system, assume that it may not be possible to expand an aggregate to accommodate more files when this limit is reached; it may be necessary to add a new aggregate. In fact, since migration and purge are aggregate-based, the system may perform better with data distributed among a larger number of well-balanced aggregates than with all data concentrated in a few large ones.
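The anode limit quoted above follows from simple division. A worked check of the figure (the 2 GB anode space per aggregate and 2 KB of anode data per migrated file are the values from the text):

```python
# Upper bound on files per HPSS-managed aggregate, derived from the
# Episode limit of roughly 2 GB of anode space per aggregate and
# about 2 KB of anode data per migrated file.
ANODE_SPACE_PER_AGGREGATE = 2 * 1024**3   # 2 GB, in bytes
ANODE_BYTES_PER_FILE = 2 * 1024           # 2 KB per migrated file

max_files = ANODE_SPACE_PER_AGGREGATE // ANODE_BYTES_PER_FILE
print(max_files)  # 1048576, i.e. "around 1,000,000 files"
```

Note that this hard ceiling is four times the 250,000-file size recommended for reasonable migration and purge scan times, so the scan-time recommendation, not the anode limit, should normally drive aggregate sizing.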
2.11.14 XFS
XFS performance for HPSS is dependent on a number of factors: CPU performance, available memory, disk speeds, etc. XFS archived filesets will typically perform close to native XFS rates for both name space and data activity; however, accessing or modifying data for files which have been migrated to HPSS and purged from XFS will be delayed while the file’s data is staged.
The XFS HDM keeps an internal record of migration and purge candidates and will therefore be
capable of quickly completing migration and purge runs which would otherwise take a good deal