3.5.1 Matrix Server Administration Guide

Chapter 1: Introduction
Copyright © 1999-2007 PolyServe, Inc. All rights reserved.
on a SAN. After a PSFS filesystem has been created on a SAN disk, all
servers in the matrix can mount the filesystem and subsequently
perform concurrent read and write operations on that filesystem.
PSFS is a journaling filesystem and provides online crash recovery.
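As a concrete illustration of shared mounting, every server in the matrix mounts the same SAN device at the same mount point. The entry below is hypothetical: the device path, mount point, and mount options are assumptions, and the exact filesystem-type name should be checked against the mount documentation for your Matrix Server release.

```
# Hypothetical /etc/fstab entry, identical on every server in the
# matrix; device name and options are illustrative only.
/dev/psd/psd1  /mnt/shared  psfs  defaults  0 0
```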
Availability and reliability. Servers and SAN components
(FibreChannel switches and RAID subsystems) can be added to a
matrix with minimal impact, as long as the operation is supported by
the underlying operating system. Matrix Server includes failover
mechanisms that enable the matrix to continue operations without
interruption when various types of failures occur. If network
communications fail between any or all servers in the matrix, Matrix
Server maintains the coherency and integrity of all shared data in the
matrix.
Matrix-wide administration. The Management Console (a Java-based
graphical user interface) and the corresponding command-line
interface enable you to configure and manage the entire matrix either
remotely or from any server in the matrix.
Failover support for network applications. Matrix Server uses virtual
hosts to provide highly available client access to mission-critical data
for Web, e-mail, file transfer, and other TCP/IP-based applications. If a
problem occurs with a network application, with the network
interface used by the virtual host, or with the underlying server,
Matrix Server automatically switches network traffic to another server
to provide continued service.
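The failover behavior described above can be modeled conceptually: a virtual host is bound to one healthy server at a time, and traffic moves to the next healthy server when the active one fails a health check. This is a sketch of the concept only, not PolyServe's actual implementation; the server names and the priority-order policy are assumptions for illustration.

```python
# Conceptual model of virtual-host failover: the virtual host is
# served by the first healthy server in priority order.

def select_active(servers, healthy):
    """Return the first healthy server in priority order, or None."""
    for server in servers:
        if healthy.get(server, False):
            return server
    return None

# Priority-ordered backing servers for the virtual host (names are
# illustrative).
servers = ["server1", "server2", "server3"]

healthy = {s: True for s in servers}
print(select_active(servers, healthy))   # server1 is active

# server1's network application fails its health check; the virtual
# host fails over to the next healthy server in the list.
healthy["server1"] = False
print(select_active(servers, healthy))   # traffic moves to server2
```

In the real product the health check covers the network application, the network interface used by the virtual host, and the server itself, as described above.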
Administrative event notification. When certain events occur in the
matrix, Matrix Server can send information about the events to the
system administrator via e-mail, a pager, the Management Console, or
another user-defined process.
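The notification mechanism amounts to fanning an event out to every channel configured for it. The sketch below illustrates that pattern under stated assumptions: the channel names, severity levels, and dispatch function are hypothetical, not Matrix Server's API.

```python
# Conceptual sketch of administrative event notification: an event is
# delivered to every notifier registered for its severity level.

def notify_email(event):
    # Stand-in for sending e-mail to the administrator.
    return f"email: {event}"

def notify_console(event):
    # Stand-in for posting the event to the Management Console.
    return f"console: {event}"

# Hypothetical mapping of severity level to notification channels.
NOTIFIERS = {
    "warning": [notify_console],
    "error": [notify_console, notify_email],
}

def dispatch(severity, event):
    """Send the event to every channel configured for this severity."""
    return [send(event) for send in NOTIFIERS.get(severity, [])]

print(dispatch("error", "server2: filesystem unmounted"))
```

A user-defined process, as mentioned above, would simply be another entry in the notifier list.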
MxFS-Linux provides the following features:
Scalable NFS client connectivity. Across multiple NFS servers sharing
the same filesystems, MxFS-Linux supports a client connection load
that increases linearly as similarly configured servers are added to the
cluster. A 16-node cluster serving the same filesystems via NFS can
support 16 times more NFS clients (with similar workloads and the