Global File System
Red Hat Global File System
Global File System: Red Hat Global File System
Copyright © 2007 Red Hat, Inc.

This book provides information about installing, configuring, and maintaining Red Hat GFS (Red Hat Global File System).

Red Hat, Inc.
1801 Varsity Drive
Raleigh, NC 27606-2072 USA
Phone: +1 919 754 3700
Phone: 888 733 4281
Fax: +1 919 754 3701
PO Box 13588
Research Triangle Park, NC 27709 USA

Documentation-Deployment
Table of Contents

Introduction, vi
  1. Audience, vi
  2. Related Documentation, vi
  3. Document Conventions, vii
  4. Send in Your Feedback

  9.1. Mount with noatime, 31
  9.2. Tune GFS atime Quantum, 31
  10. Suspending Activity on a File System, 32
  11. Displaying Extended GFS Information and Statistics, 33
  12. Repairing a File System, 34
Introduction

Welcome to the Global File System Configuration and Administration document. This book provides information about installing, configuring, and maintaining Red Hat GFS (Red Hat Global File System). Red Hat GFS depends on the cluster infrastructure of Red Hat Cluster Suite. For information about Red Hat Cluster Suite, refer to Red Hat Cluster Suite Overview and Configuring and Managing a Red Hat Cluster.
3. Document Conventions

Certain words in this manual are represented in different fonts, styles, and weights. This highlighting indicates that the word is part of a specific category. The categories include the following:

Courier font
Courier font represents commands, file names and paths, and prompts. When shown as below, it indicates computer output:

Desktop Mail about.html backupfiles logs mail paulwesterberg.
Important
Important information is necessary, but possibly unexpected, such as a configuration change that will not persist after a reboot.

Caution
A caution indicates an act that would violate your support agreement, such as recompiling the kernel.

Warning
A warning indicates potential data loss, as may happen when tuning hardware for maximum performance.

4. Send in Your Feedback
5. Recommended References

Topic: Storage Area Networks (SANs)
• Designing Storage Area Networks: A Practical Reference for Implementing Fibre Channel and IP SANs, Second Edition. Tom Clark. Addison-Wesley, 2003. Provides a concise summary of Fibre Channel and IP SAN technology.
• Building SANs with Brocade Fabric Switches. C. Beauchamp, J. Judd, and B. Keo. Syngress, 2001.

Topic: Applications and High Availability
Chapter 1. GFS Overview

Red Hat GFS is a cluster file system that is available with Red Hat Cluster Suite. Red Hat GFS nodes are configured and managed with Red Hat Cluster Suite configuration and management tools. Red Hat GFS provides data sharing among GFS nodes in a Red Hat cluster, presenting a single, consistent view of the file-system name space across all GFS nodes in the cluster. GFS allows applications to install and run without detailed knowledge of the underlying storage infrastructure.
2. Performance, Scalability, and Economy

You can deploy GFS in a variety of configurations to suit your needs for performance, scalability, and economy. For superior performance and scalability, you can deploy GFS in a cluster that is connected directly to a SAN. For more economical needs, you can deploy GFS in a cluster that is connected to a LAN with servers that use GNBD (Global Network Block Device).

2.1. Superior Performance and Scalability
Figure 1.1. GFS with a SAN

2.2. Performance, Scalability, Moderate Price

Multiple Linux client applications on a LAN can share the same SAN-based data as shown in Figure 1.2, "GFS and GNBD with a SAN". SAN block storage is presented to network clients as block storage devices by GNBD servers. From the perspective of a client application, storage is accessed as if it were directly attached to the server on which the application is running.
Figure 1.3, "GFS and GNBD with Directly Connected Storage" shows how Linux client applications can take advantage of an existing Ethernet topology to gain shared access to all block storage devices. Client data files and file systems can be shared with GFS on each client. Application failover can be fully automated with Red Hat Cluster Suite.

Figure 1.3. GFS and GNBD with Directly Connected Storage

3. GFS Functions
GFS provides the following main functions:

• Making a File System
• Mounting a File System
• Unmounting a File System
• GFS Quota Management
• Growing a File System
• Adding Journals to a File System
• Direct I/O
• Data Journaling
• Configuring atime Updates
• Suspending Activity on a File System
• Displaying Extended GFS Information and Statistics
• Repairing a File System
• Context-Dependent Path Names (CDPN)

4. GFS Software Subsystems

Table 1.1, "GFS Software Subsystems" summarizes the GFS software subsystems and their components.
Software Subsystem  Components  Description

gfs_tool  Command that configures or tunes a GFS file system. This command can also gather a variety of information about the file system.

lock_harness.ko  Implements a pluggable lock module interface for GFS that allows for a variety of locking mechanisms to be used (for example, the DLM lock module, lock_dlm.ko).

lock_dlm.ko  A lock module that implements DLM locking for GFS. It plugs into the lock harness, lock_harness.ko.
GNBD server nodes
Determine the hostname and IP address of each GNBD server node for setting up GNBD clients later. For information on using GNBD with GFS, see the Using GNBD with Global File System document.

Storage devices and partitions
Determine the storage devices and partitions to be used for creating logical volumes (via CLVM) in the file systems.
Chapter 2. System Requirements

This chapter describes the system requirements for Red Hat GFS with Red Hat Enterprise Linux 5 and consists of the following sections:

• Section 1, "Platform Requirements"
• Section 2, "Red Hat Cluster Suite"
• Section 3, "Fencing"
• Section 4, "Fibre Channel Storage Network"
• Section 5, "Fibre Channel Storage Devices"
• Section 6, "Network Power Switches"
• Section 7, "Console Access"

1. Platform Requirements

Table 2.1, "Platform Requirements" shows the platform requirements for GFS.
Fencing is configured and managed in Red Hat Cluster Suite. For more information about fencing options, refer to Configuring and Managing a Red Hat Cluster.

4. Fibre Channel Storage Network

Table 2.2, "Fibre Channel Network Requirements" shows requirements for GFS nodes that are to be connected to a Fibre Channel SAN.
You can fence GFS nodes with power switches and fencing agents available with Red Hat Cluster Suite. For more information about fencing with network power switches, refer to Configuring and Managing a Red Hat Cluster.

7. Console Access

Make sure that you have console access to each GFS node. Console access to each node ensures that you can monitor nodes and troubleshoot problems.

8. Installing GFS

Installing GFS consists of installing Red Hat GFS RPMs on nodes in a Red Hat cluster.
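As a minimal sketch, the GFS packages can typically be installed with yum on a system subscribed to the appropriate channels; the package names gfs-utils and kmod-gfs are assumptions to verify against your Red Hat Enterprise Linux 5 release:

yum install gfs-utils kmod-gfs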
Chapter 3. Getting Started

This chapter describes procedures for initial setup of GFS and contains the following sections:

• Section 1, "Prerequisite Tasks"
• Section 2, "Initial Setup Tasks"

1. Prerequisite Tasks

Before setting up Red Hat GFS, make sure that you have noted the key characteristics of the GFS nodes (refer to Section 5, "Before Setting Up GFS") and have loaded the GFS modules into each GFS node. Also, make sure that the clocks on the GFS nodes are synchronized.
scripts, refer to Configuring and Managing a Red Hat Cluster.

2. Create GFS file systems on logical volumes created in Step 1. Choose a unique name for each file system. For more information about creating a GFS file system, refer to Section 1, "Making a File System".

   Command usage:

   gfs_mkfs -p lock_dlm -t ClusterName:FSName -j NumberJournals BlockDevice

3. At each node, mount the GFS file systems (a usage sketch follows this list).
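Command usage for mounting, as a minimal sketch; gfs is the file-system type and MountPoint is an existing directory on the node:

mount -t gfs BlockDevice MountPoint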
Chapter 4. Managing GFS
Warning

Make sure that you are very familiar with using the LockProtoName and LockTableName parameters. Improper use of the LockProtoName and LockTableName parameters may cause file system or lock space corruption.

LockProtoName
Specifies the name of the locking protocol (for example, lock_dlm) to use.
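For illustration, a minimal sketch of a gfs_mkfs invocation; the cluster name alpha, file system name mydata1, journal count, and device path are hypothetical:

gfs_mkfs -p lock_dlm -t alpha:mydata1 -j 8 /dev/vg01/lvol0

Here lock_dlm is the LockProtoName and alpha:mydata1 is the LockTableName, in the ClusterName:FSName form shown earlier.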
Complete Options

Flag  Parameter  Description

-b  BlockSize  Sets the file-system block size to BlockSize. Default block size is 4096 bytes.

-D  Enables debugging output.

-h  Help. Displays available options.

-J  MegaBytes  Specifies the size of the journal in megabytes. Default journal size is 128 megabytes. The minimum size is 32 megabytes.

-j  Number  Specifies the number of journals to be created by the gfs_mkfs command. One journal is required for each node that mounts the file system.
Table 4.1. Command Options: gfs_mkfs

2. Mounting a File System

Before you can mount a GFS file system, the file system must exist (refer to Section 1, "Making a File System"), the volume where the file system exists must be activated, and the supporting clustering and locking systems must be started (refer to Chapter 3, Getting Started and Configuring and Managing a Red Hat Cluster).
Complete Usage

Note

The mount command is a Linux system command. In addition to using the GFS-specific options described in this section, you can use other, standard, mount command options (for example, -r). For information about other Linux mount command options, see the Linux mount man page.

Table 4.2, "GFS-Specific Mount Options" describes the available GFS-specific -o option values that can be passed to GFS at mount time.

Option  Description

acl  Allows manipulating file ACLs.
...detailed troubleshooting information. Use this option with care. Note: This option is turned on automatically if lock_nolock locking is specified; however, you can override it by using the ignore_local_fs option.

upgrade  Upgrade the on-disk format of the file system so that it can be used by newer versions of GFS.

Table 4.2. GFS-Specific Mount Options

3. Unmounting a File System
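GFS file systems are unmounted with the standard Linux umount command. As a minimal sketch, with /gfs as a hypothetical mount point:

umount MountPoint

For example, to unmount the file system mounted at /gfs:

umount /gfs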
4.1. Setting Quotas

Two quota settings are available for each user ID (UID) or group ID (GID): a hard limit and a warn limit. A hard limit is the amount of space that can be used. The file system will not let the user or group use more than that amount of disk space. A hard limit value of zero means that no limit is enforced. A warn limit is usually a value less than the hard limit.
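Quota limits are set with the gfs_quota command. The usage lines below are a sketch inferred from the examples that follow; Size defaults to megabytes unless a units flag such as -k (kilobytes) is given:

Setting a hard limit for a user or group:

gfs_quota limit -u User -l Size -f MountPoint
gfs_quota limit -g Group -l Size -f MountPoint

Setting a warn limit for a user or group:

gfs_quota warn -u User -l Size -f MountPoint
gfs_quota warn -g Group -l Size -f MountPoint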
This example sets the hard limit for user Bert to 1024 megabytes on file system /gfs.

gfs_quota limit -u Bert -l 1024 -f /gfs

This example sets the warn limit for group ID 21 to 50 kilobytes on file system /gfs.

gfs_quota warn -g 21 -l 50 -k -f /gfs

4.2. Displaying Quota Limits and Usage

Quota limits and current usage can be displayed for a specific user or group using the gfs_quota get command.
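A usage sketch, paralleling the -u and -g forms of the other gfs_quota subcommands shown above:

gfs_quota get -u User -f MountPoint
gfs_quota get -g Group -f MountPoint

To display limits and usage for the entire quota file at once, the gfs_quota list form (assumed here) can be used:

gfs_quota list -f MountPoint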
...system blocks, respectively.

User
A user name or ID to which the data is associated.

Group
A group name or ID to which the data is associated.

LimitSize
The hard limit set for the user or group. This value is zero if no limit has been set.

Value
The actual amount of disk space used by the user or group.

Comments

When displaying quota information, the gfs_quota command does not resolve UIDs and GIDs into names if the -n option is added to the command line.

4.3. Synchronizing Quotas
Usage

Synchronizing Quota Information

gfs_quota sync -f MountPoint

MountPoint
Specifies the GFS file system to which the actions apply.

Tuning the Time Between Synchronizations

gfs_tool settune MountPoint quota_quantum Seconds

MountPoint
Specifies the GFS file system to which the actions apply.

Seconds
Specifies the new time period between regular quota-file synchronizations by GFS. Smaller values may increase contention and slow down performance.
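For example, as a sketch with a hypothetical mount point and interval:

This example synchronizes quota information for file system /gfs immediately.

gfs_quota sync -f /gfs

This example changes the quota-file synchronization period on /gfs to one hour (3600 seconds).

gfs_tool settune /gfs quota_quantum 3600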
gfs_tool settune MountPoint quota_enforce {0|1}

0 = disabled
1 = enabled

Comments

A value of 0 disables enforcement. Enforcement can be enabled by running the command with a value of 1 (instead of 0) as the final command line parameter. Even when GFS is not enforcing quotas, it still keeps track of the file-system usage for all users and groups so that quota-usage information does not require rebuilding after re-enabling quotas.
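A sketch of each setting; /gfs is a hypothetical mount point:

This example disables quota enforcement on file system /gfs.

gfs_tool settune /gfs quota_enforce 0

This example re-enables quota enforcement on file system /gfs.

gfs_tool settune /gfs quota_enforce 1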
4.5. Disabling/Enabling Quota Accounting

Note

Initializing the quota file requires scanning the entire file system and may take a long time.

Examples

This example disables quota accounting on file system /gfs on a single node.

gfs_tool settune /gfs quota_account 0

This example enables quota accounting on file system /gfs on a single node and initializes the quota file.

# gfs_tool settune /gfs quota_account 1
# gfs_quota init -f /gfs

5. Growing a File System
• Back up important data on the file system.

• Display the volume that is used by the file system to be expanded by running a gfs_tool df MountPoint command.

• Expand the underlying cluster volume with LVM. For information on administering LVM volumes, see the LVM Administrator's Guide.

After running the gfs_grow command, run a df command to check that the new space is now available in the file system.

Examples

In this example, the file system on the /gfs1 directory is expanded.
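The corresponding command, as a minimal sketch; the mount point /gfs1 matches the example above:

gfs_grow /gfs1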
6. Adding Journals to a File System

The gfs_jadd command is used to add journals to a GFS file system after the device where the file system resides has been expanded. Running a gfs_jadd command on a GFS file system uses space between the current end of the file system and the end of the device where the file system resides. When the fill operation is completed, the journal index is updated.
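Usage, as a sketch; -j specifies how many journals to add (one per additional node that will mount the file system), following the flag convention of gfs_mkfs shown earlier:

gfs_jadd -j Number MountPoint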
In this example, the current state of the file system on the /gfs1 directory is checked for the new journals.

gfs_jadd -Tv /gfs1

Complete Usage

gfs_jadd [Options] {MountPoint | Device} [MountPoint | Device]

MountPoint
Specifies the directory where the GFS file system is mounted.

Device
Specifies the device node of the file system.

Table 4.4, "GFS-specific Options Available When Adding Journals" describes the GFS-specific options that can be used when adding journals to a GFS file system.
7. Direct I/O

Direct I/O is a feature of the file system whereby file reads and writes go directly from the applications to the storage device, bypassing the operating system read and write caches. Direct I/O is used only by applications (such as databases) that manage their own caches. An application invokes direct I/O by opening a file with the O_DIRECT flag.

7.1. O_DIRECT
In this example, the command sets the directio flag on the file named datafile in directory /gfs1.

gfs_tool setflag directio /gfs1/datafile

7.3. GFS Directory Attribute

The gfs_tool command can be used to assign (set) a direct I/O attribute flag, inherit_directio, to a GFS directory. Enabling the inherit_directio flag on a directory causes all newly created regular files in that directory to automatically inherit the directio flag.
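A usage sketch paralleling the file-attribute example above; the directory /gfs1/data is hypothetical:

This example causes new files created under /gfs1/data to inherit the directio flag.

gfs_tool setflag inherit_directio /gfs1/data

This example clears the flag again.

gfs_tool clearflag inherit_directio /gfs1/data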
...directory (and all its subdirectories). Existing files with zero length can also have data journaling turned on or off.

Using the gfs_tool command, data journaling is enabled on a directory (and all its subdirectories) or on a zero-length file by setting the inherit_jdata or jdata attribute flags to the directory or file, respectively. The directory and file attribute flags can also be cleared.
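A sketch of the corresponding commands, following the setflag/clearflag pattern used for direct I/O; the paths are hypothetical:

This example enables data journaling for new files created under /gfs1/data.

gfs_tool setflag inherit_jdata /gfs1/data

This example enables data journaling on the zero-length file /gfs1/datafile, and the second command clears it again.

gfs_tool setflag jdata /gfs1/datafile
gfs_tool clearflag jdata /gfs1/datafile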
...every time a file is read, its inode needs to be updated. Because few applications use the information provided by atime, those updates can require a significant amount of unnecessary write traffic and file-locking traffic. That traffic can degrade performance; therefore, it may be preferable to turn off atime updates.

Two methods of reducing the effects of atime updating are available:

• Mount with noatime
• Tune GFS atime quantum

9.1. Mount with noatime
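A usage sketch; noatime is the standard Linux mount option, and the device and mount point are hypothetical:

mount -t gfs /dev/vg01/lvol0 /gfs -o noatime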
...be set on each node and each time the file system is mounted. (The setting is not persistent across unmounts.)

Usage

Displaying Tunable Parameters

gfs_tool gettune MountPoint

MountPoint
Specifies the directory where the GFS file system is mounted.

Changing the atime_quantum Parameter Value

gfs_tool settune MountPoint atime_quantum Seconds

MountPoint
Specifies the directory where the GFS file system is mounted.

Seconds
Specifies the update period in seconds.
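For example, as a sketch with hypothetical values, this sets the atime update period on /gfs to once per day (86,400 seconds):

gfs_tool settune /gfs atime_quantum 86400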
10. Suspending Activity on a File System

gfs_tool unfreeze MountPoint

MountPoint
Specifies the file system.

Examples

This example suspends writes to file system /gfs.

gfs_tool freeze /gfs

This example ends suspension of writes to file system /gfs.

gfs_tool unfreeze /gfs

11. Displaying Extended GFS Information and Statistics

You can use the gfs_tool command to gather a variety of details about GFS. This section describes typical use of the gfs_tool command for displaying statistics, space usage, and extended status.
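Usage, as a sketch of the gfs_tool subcommands the examples below rely on; counters, df, and stat are assumed forms to verify against the gfs_tool man page:

gfs_tool counters MountPoint

gfs_tool df MountPoint

gfs_tool stat File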
File
Specifies the file from which to get information.

The gfs_tool command provides additional action flags (options) not listed in this section. For more information about other gfs_tool flags, refer to the gfs_tool man page.

Examples

This example reports extended file system usage about file system /gfs.

gfs_tool df /gfs

This example reports extended file status about file /gfs/datafile.

gfs_tool stat /gfs/datafile

12. Repairing a File System
Usage

gfs_fsck -y BlockDevice

-y
The -y flag causes all questions to be answered with yes. With the -y flag specified, the gfs_fsck command does not prompt you for an answer before making changes.

BlockDevice
Specifies the block device where the GFS file system resides.

Example

In this example, the GFS file system residing on block device /dev/vg01/lvol0 is repaired. All queries to repair are automatically answered with yes.

gfs_fsck -y /dev/vg01/lvol0

13. Context-Dependent Path Names (CDPN)
..."CDPN Variable Values") to represent one of multiple existing files or directories. This string is not the name of an actual file or directory itself. (The real files or directories must be created in a separate step using names that correlate with the type of variable used.)

LinkName
Specifies a name that will be seen and used by applications and will be followed to get to one of the multiple real files or directories.
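As a sketch of how such a link is created, assuming the @hostname variable and the /gfs file system and node names used in the example below:

n01# cd /gfs
n01# mkdir n01 n02 n03
n01# ln -s @hostname log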
Example drwxr-xr-x 2 root root 3864 Apr 25 14:06 n03/ n01# touch /gfs/log/fileA n02# touch /gfs/log/fileB n03# touch /gfs/log/fileC n01# ls /gfs/log/ fileA n02# ls /gfs/log/ fileB n03# ls /gfs/log/ fileC 37
Index

A
adding journals to a file system, 26
atime, configuring updates, 30
  mounting with noatime, 31
  tuning atime quantum, 31
audience, vi

C
CDPN variable values table, 36
console access
  system requirements, 10

F
file system
  direct I/O, 28
    directory attribute, 29
    file attribute, 28
    O_DIRECT, 28
  growing, 24
  making, 13
  mounting, 16
  quota management, 18
    disabling/enabling quota accounting, 23
    disabling/enabling quota enforcement, 22
    displaying quota limits, 20
    setting quotas, 19
    synchronizing quotas, 21
  repairing, 34
  suspending activity, 32
  unmounting, 18

I
initial tasks
  setup, initial, 11
introduction, vi
  audience, vi
  references, viii

M
making a file system, 13
managing GFS, 13
mount table, 17
mounting a file system, 16

N
network power switches
  system requirements, 9

S
setup, initial
  initial tasks, 11
suspending activity on a file system, 32
system requirements, 8
  console access, 10
  fencing, 8
  fibre channel storage devices, 9
  fibre channel storage network, 9
  network power switches, 9
  platform, 8
  Red Hat Cluster Suite, 8

T
tables
  CDPN variable values, 36