3.4.0 MxFS for CIFS Administration Guide
Chapter 1: Introduction
Copyright © 1999-2006 PolyServe, Inc. All rights reserved.
where it will continue to provide access to the same PSFS filesystem data
under the same name/IP-address pair.
This deployment method works well with clients running modern
Windows operating systems such as Windows XP and Windows 2000. To
take advantage of the transparent failover feature, clients must connect to
the Virtual CIFS Server using either the Fully Qualified Domain Name
(FQDN) or the IP address. This helps avoid conflicts with legacy
(NetBIOS) network-name resolution methods. If NetBIOS name
resolution is a requirement, then the Matrix File Share deployment
method should be used instead.
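As a rough illustration of the naming rule above, the following sketch classifies a client-supplied server name as failover-safe (an IP address or FQDN) or not (a bare NetBIOS-style short name). The function and its name are hypothetical helpers for illustration only, not part of Matrix Server or any Windows API.

```python
import ipaddress

def failover_safe_name(server_name: str) -> bool:
    """Return True if server_name is an IP address or an FQDN, i.e. a
    form that bypasses legacy NetBIOS name resolution and so supports
    transparent failover. Illustrative helper, not a Matrix Server API.
    """
    try:
        # A literal IP address always qualifies.
        ipaddress.ip_address(server_name)
        return True
    except ValueError:
        pass
    # A dotted name is treated as an FQDN; a bare short name would fall
    # back to NetBIOS-style resolution and is therefore not supported
    # for transparent failover.
    return "." in server_name
```

For example, `\\vserver1.example.com\data` and `\\10.1.2.3\data` would qualify, while `\\VSERVER1\data` would not.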
Matrix File Shares
Matrix File Shares are Windows CIFS shares associated with a Matrix
Server filesystem health monitor. Clients connect to Matrix File Shares
using the network name or IP address of any physical (rather than
virtual) server in the cluster. Each node in the cluster provides access to
the same PSFS filesystems through its Matrix File Shares.
For high availability, Matrix File Shares are designed to be deployed with
a connection-oriented load balancer such as the Microsoft Distributed File
System (DFS). When deployed with a DFS front end, client connection
requests to a single network name (provided by DFS) will be evenly
distributed among the nodes in the cluster. On failure of a node, DFS will
detect the loss of network connectivity and route new connection and
re-connection requests to the remaining nodes in the cluster.
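The distribution and failover behavior described above can be sketched as a toy round-robin front end: connection requests to a single network name are spread evenly across the cluster nodes, and a failed node is dropped from the rotation so new connections and re-connections go only to the remaining nodes. Class and method names here are illustrative assumptions, not DFS or Matrix Server interfaces.

```python
from itertools import cycle

class RoundRobinFrontEnd:
    """Toy model of a DFS-style front end for a cluster of nodes."""

    def __init__(self, nodes):
        self.nodes = list(nodes)
        self._rotation = cycle(self.nodes)

    def route(self):
        # Hand the next connection request to the next node in rotation.
        return next(self._rotation)

    def node_failed(self, node):
        # On node failure, route new connection and re-connection
        # requests to the remaining nodes only.
        self.nodes.remove(node)
        self._rotation = cycle(self.nodes)
```

With nodes `["node1", "node2", "node3"]`, successive requests cycle through all three; after `node_failed("node2")`, only `node1` and `node3` receive connections.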
If a node in the cluster loses access to the PSFS filesystem (for example,
because of a SAN problem) but it is otherwise healthy, the Matrix File
Share monitor will tear down the associated CIFS share to prevent future
connection and re-connection requests from being directed to a node that
has lost access to the underlying shared filesystem.
When the node regains access to the shared filesystem, the Matrix File
Share monitor automatically recreates the CIFS share and the node
resumes handling client requests.
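The monitor behavior described in the two paragraphs above amounts to a simple state rule, sketched below under the assumption that the monitor periodically checks filesystem access. The function and parameter names are hypothetical, not the actual Matrix Server monitor interface.

```python
def monitor_step(filesystem_ok: bool, share_active: bool) -> bool:
    """One pass of a simplified Matrix File Share monitor.

    Returns the new share state: the CIFS share is torn down while the
    node cannot reach the shared filesystem, and recreated once access
    returns. Illustrative sketch only.
    """
    if not filesystem_ok and share_active:
        # SAN problem etc.: tear down the share so no new connection
        # or re-connection requests reach this node.
        return False
    if filesystem_ok and not share_active:
        # Access restored: recreate the share and resume service.
        return True
    # No change in filesystem access; keep the current share state.
    return share_active
```

Called repeatedly, this yields the cycle the text describes: share torn down on loss of filesystem access, recreated when access returns.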