Red Hat Enterprise Linux 6 Storage Administration Guide
Deploying and configuring single-node storage in Red Hat Enterprise Linux 6
Edition 2

Red Hat Subject Matter Experts: Josef Bacik, Kamil Dudka, Hans de Goede, Daniel Novotny, Nathan Straz
Contributors: Michael Christie, Rob Evers, David Howells, Jeff Moyer, Eric Sandeen, Doug Ledford, David Wysochanski, Sachin Prabhu, David Lehman, Mike Snitzer

Josef Bacik, Server Development Kernel File System (jwhiter@redhat.com): Disk Quotas
Kamil Dudka, Base Operating System Core Services - BRNO (kdudka@redhat.com): Access Control Lists
Hans de Goede, Base Operating System Installer (hdegoede@redhat.com): Partitions
Doug Ledford, Server Development Hardware Enablement (dledford@redhat.com)
Rob Evers (revers@redhat.com): Online Storage
David Howells, Server Development Hardware Enablement (dhowells@redhat.com): FS-Cache
David Lehman, Base Operating System Installer (dlehman@redhat.com): Storage configuration during installation
Jeff Moyer, Server Development Kernel File System (jmoyer@redhat.com): Solid-State Disks
Eric Sandeen, Server Development Kernel File System (esandeen@redhat.com): ext3, ext4, XFS, Encrypted File Systems
Mike Snitzer, Server Development Kernel Storage (msnitzer@redhat.com)
Legal Notice
Copyright © 2013 Red Hat Inc. and others. This document is licensed by Red Hat under the Creative Commons Attribution-ShareAlike 3.0 Unported License. If you distribute this document, or a modified version of it, you must provide attribution to Red Hat, Inc. and provide a link to the original. If the document is modified, all Red Hat trademarks must be removed.
Table of Contents

Preface
  1. Document Conventions
    1.1. Typographic Conventions
    1.2. Pull-quote Conventions
    1.3. Notes and Warnings
  2. Getting Help and Giving Feedback
    2.1. Do You Need Help?
    2.2. We Need Feedback
Chapter 1. Overview
Part I. File Systems
  Chapter 2. File System Structure and Maintenance
  Chapter 3. Encrypted File System
  Chapter 4. Btrfs
  Chapter 5. The Ext3 File System
  Chapter 6. The Ext4 File System
  Chapter 7. Global File System 2
  Chapter 8. The XFS File System
  Chapter 9. Network File System (NFS)
  Chapter 10. FS-Cache
Part II. Storage Administration
  Chapter 11. Storage Considerations During Installation
  Chapter 12. File System Check
  Chapter 13. Partitions
  Chapter 14. LVM (Logical Volume Manager)
  Chapter 15. Swap Space
  Chapter 16. Disk Quotas
  Chapter 17. Redundant Array of Independent Disks (RAID)
  Chapter 18. Using the mount Command
  Chapter 22. Write Barriers
  Chapter 25. Online Storage
Red Hat Ent erprise Linux 6 St orage Administ rat ion G uide Preface 1. Document Convent ions This manual uses several conventions to highlight certain words and phrases and draw attention to specific pieces of information. 1.1. T ypographic Convent ions Four typographic conventions are used to call attention to specific words and phrases. These conventions, and the circumstances they apply to, are as follows.
Preface C haracter T abl e. D ouble-click this highlighted character to place it in the T ext to co py field and then click the C o py button. Now switch back to your document and choose Ed it → Past e from the g ed it menu bar. The above text includes application names; system-wide menu names and items; application-specific menu names; and buttons and text found within a GUI interface, all presented in proportional bold and all distinguishable by context.
Red Hat Ent erprise Linux 6 St orage Administ rat ion G uide before, " "so cannot be deassigned\n", __func__); r = -EINVAL; goto out; } kvm_deassign_device(kvm, match); kvm_free_assigned_device(kvm, match); o ut: mutex_unlock(& kvm->lock); return r; } 1.3. Not es and Warnings Finally, we use three visual styles to draw attention to information that might otherwise be overlooked. Note Notes are tips, shortcuts or alternative approaches to the task at hand.
Preface Red Hat also hosts a large number of electronic mailing lists for discussion of Red Hat software and technology. You can find a list of publicly available mailing lists at https://www.redhat.com/mailman/listinfo. Click the name of any mailing list to subscribe to that list or to access the list archives. 2.2. We Need Feedback If you find a typographical error in this manual, or if you have thought of a way to make this manual better, we would love to hear from you.
Chapter 1. Overview
The Storage Administration Guide contains extensive information on supported file systems and data storage features in Red Hat Enterprise Linux 6. This book is intended as a quick reference for administrators managing single-node (that is, non-clustered) storage solutions. The Storage Administration Guide is split into two parts: File Systems, and Storage Administration.
The ext4 file system is fully supported in this release. It is now the default file system of Red Hat Enterprise Linux 6, supporting an unlimited number of subdirectories. It also features more granular timestamping, extended attributes support, and quota journaling. For more information on ext4, refer to Chapter 6, The Ext4 File System.

Network Block Storage
Fibre-channel over Ethernet is now supported.
Part I. File Systems
The File Systems section explains file system structure, followed by two technology previews: eCryptfs and Btrfs. This is followed by the file systems Red Hat fully supports: ext3, ext4, Global File System 2, XFS, NFS, and FS-Cache.
Chapter 2. File System Structure and Maintenance
The file system structure is the most basic level of organization in an operating system. The way an operating system interacts with its users, applications, and security model nearly always depends on how the operating system organizes files on storage devices. Providing a common file system structure ensures users and programs can access and write files.
                        11675568   6272120   4810348  57% /
/dev/sda1                 100691      9281     86211  10% /boot
none                      322856         0    322856   0% /dev/shm

By default, df shows the partition size in 1 kilobyte blocks and the amount of used and available disk space in kilobytes. To view the information in megabytes and gigabytes, use the command df -h. The -h argument stands for "human-readable" format. The output for df -h looks similar to the following:

Example 2.2. Output of the df -h command
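The following is a representative df -h listing for the same file systems shown above; the sizes are the 1 KB block counts converted to human-readable units and rounded, and the LVM device name is assumed:

Filesystem                        Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup00-LogVol00    12G  6.0G  4.6G  57% /
/dev/sda1                          99M  9.1M   85M  10% /boot
none                              316M     0  316M   0% /dev/shm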
Chapt er 2 . File Syst em St ruct ure and Maint enance Fig u re 2.1. G N O ME Syst em Mo n it o r File Syst ems t ab 2 .1 .1 .2 . T he /bo o t/ Dire ct o ry The /bo o t/ directory contains static files required to boot the system, for example, the Linux kernel. These files are essential for the system to boot properly. Warning D o not remove the /bo o t/ directory. D oing so renders the system unbootable. 2 .1 .1 .3.
drive), and a pop-up window displaying the contents appears.

Table 2.1. Examples of common files in the /dev directory

File           Description
/dev/hda       The master device on the primary IDE channel.
/dev/hdb       The slave device on the primary IDE channel.
/dev/tty0      The first virtual console.
/dev/tty1      The second virtual console.
/dev/sda       The first device on the primary SCSI or SATA channel.
/dev/lp0       The first parallel port.
/dev/ttyS0     Serial port.
Chapt er 2 . File Syst em St ruct ure and Maint enance /o pt/packagename/man/. 2 .1 .1 .9 . T he /pro c/ Dire ct o ry The /pro c/ directory contains special files that either extract information from the kernel or send information to it. Examples of such information include system memory, CPU information, and hardware configuration. For more information about /pro c/, refer to Section 2.3, “ The /proc Virtual File System” . 2 .1 .1 .1 0 .
Red Hat Ent erprise Linux 6 St orage Administ rat ion G uide Note The default httpd install uses /var/www/html for served content. 2 .1 .1 .1 2 . T he /sys/ Dire ct o ry The /sys/ directory utilizes the new sysfs virtual file system specific to the 2.6 kernel. With the increased support for hot plug hardware devices in the 2.6 kernel, the /sys/ directory contains information similar to that held by /pro c/, but displays a hierarchical view of device information specific to hot plug devices. 2 .1 .1 .1 3.
Chapt er 2 . File Syst em St ruct ure and Maint enance /usr/src This directory stores source code. /usr/tmp lin ked t o /var/tmp This directory stores temporary files. The /usr/ directory should also contain a /l o cal / subdirectory. As per the FHS, this subdirectory is used by the system administrator when installing software locally, and should be safe from being overwritten during system updates.
/var/ftp/
/var/gdm/
/var/kerberos/
/var/lib/
/var/local/
/var/lock/
/var/log/
/var/mail (linked to /var/spool/mail/)
/var/mailman/
/var/named/
/var/nis/
/var/opt/
/var/preserve/
/var/run/
/var/spool/
/var/tmp/
/var/tux/
/var/www/
/var/yp/

System log files, such as messages and lastlog, go in the /var/log/ directory. The /var/lib/rpm/ directory contains RPM system databases.
/var/spool/postfix/
/var/spool/repackage/
/var/spool/rwho/
/var/spool/samba/
/var/spool/squid/
/var/spool/squirrelmail/
/var/spool/up2date/
/var/spool/uucp/
/var/spool/uucppublic/
/var/spool/vbox/

2.2. Special Red Hat Enterprise Linux File Locations
Red Hat Enterprise Linux extends the FHS structure slightly to accommodate special files.
Contains current information on multiple-disk or RAID configurations on the system, if they exist.

/proc/mounts
Lists all mounts currently used by the system.

/proc/partitions
Contains partition block allocation information.

For more information about the /proc file system, refer to the Red Hat Enterprise Linux 6 Deployment Guide.

2.4.
Chapter 3. Encrypted File System
Red Hat Enterprise Linux 6 provides a technology preview of eCryptfs, a "pseudo-file system" which provides data and filename encryption on a per-file basis. The term "pseudo-file system" refers to the fact that eCryptfs does not have an on-disk format; rather, it is a file system layer that resides on top of an actual file system. The eCryptfs layer provides encryption capabilities.
After the last step of an interactive mount, mount will display all the selections made and perform the mount. This output consists of the command-line option equivalents of each chosen setting.
Chapter 4. Btrfs
Btrfs is a new local file system under active development. It aims to provide better performance and scalability which will in turn benefit users.

Note
Btrfs is not a production quality file system at this point. With Red Hat Enterprise Linux 6 it is at a technology preview stage and as such is only being built for Intel 64 and AMD64.

4.1. Btrfs Features
Several utilities are built in to Btrfs to provide ease of administration for system administrators.
Chapter 5. The Ext3 File System
The ext3 file system is essentially an enhanced version of the ext2 file system. These improvements provide the following advantages:

Availability
After an unexpected power failure or system crash (also called an unclean system shutdown), each mounted ext2 file system on the machine must be checked for consistency by the e2fsck program.
The Red Hat Enterprise Linux 6 version of ext3 features the following updates:

Default Inode Sizes Changed
The default size of the on-disk inode has increased for more efficient storage of extended attributes, for example, ACLs or SELinux attributes. Along with this change, the default number of inodes created on a file system of a given size has been decreased. The inode size may be selected with the mke2fs -I option or specified in /etc/mke2fs.conf.
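For example, to override the default and create an ext3 file system with 256-byte inodes, the -I option can be passed at creation time (a minimal sketch; the device name is illustrative):

# mkfs.ext3 -I 256 /dev/sdb1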
Note
A default installation of Red Hat Enterprise Linux uses ext4 for all file systems. However, to convert ext2 to ext3, always use the e2fsck utility to check your file system before and after using tune2fs. Before trying to convert ext2 to ext3, back up all file systems in case any errors occur.
In addition, Red Hat recommends creating a new ext3 file system and migrating data to it, instead of converting from ext2 to ext3 whenever possible.
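The conversion itself is performed by adding an ext3 journal to the ext2 file system with tune2fs; a minimal sketch, using an illustrative device name:

# tune2fs -j /dev/mapper/VolGroup00-LogVol02

After the journal is added, update the file system type in /etc/fstab from ext2 to ext3 so the partition is mounted as ext3.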
# mount -t ext2 /dev/mapper/VolGroup00-LogVol02 /mount/point

In the above command, replace /mount/point with the mount point of the partition.

Note
If a .journal file exists at the root level of the partition, delete it.

To permanently change the partition to ext2, remember to update the /etc/fstab file, otherwise it will revert back after booting.
Chapter 6. The Ext4 File System
The ext4 file system is a scalable extension of the ext3 file system, which was the default file system of Red Hat Enterprise Linux 5. Ext4 is the default file system of Red Hat Enterprise Linux 6, and can support files and file systems up to 16 terabytes in size.
The ext4 file system also supports the following:
Extended attributes (xattr) — This allows the system to associate several additional name and value pairs per file.
Quota journaling — This avoids the need for lengthy quota consistency checks after a crash.

Note
The only supported journaling mode in ext4 is data=ordered (default).

Subsecond timestamps — This gives timestamps to the subsecond.

6.1. Creating an Ext4 File System
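To create an ext4 file system, use the mkfs.ext4 command; a minimal sketch, replacing /dev/device with the target block device:

# mkfs.ext4 /dev/device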
For striped block devices (for example, RAID 5 arrays), the stripe geometry can be specified at the time of file system creation. Using proper stripe geometry greatly enhances the performance of an ext4 file system.
When creating file systems on LVM or MD volumes, mkfs.ext4 chooses an optimal geometry. This may also be true on some hardware RAIDs which export geometry information to the operating system.
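Stripe geometry can also be passed explicitly with the -E option; a sketch with illustrative values, where stride and stripe-width are expressed in file system blocks:

# mkfs.ext4 -E stride=16,stripe-width=64 /dev/device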
By default, ext4 uses write barriers to ensure file system integrity even when power is lost to a device with write caches enabled. For devices without write caches, or with battery-backed write caches, disable barriers using the nobarrier option, as in:

# mount -o nobarrier /dev/device /mount/point

For more information about write barriers, refer to Chapter 22, Write Barriers.

6.3.
LABEL=/data       /data          ext3    defaults        0 0
tmpfs             /dev/shm       tmpfs   defaults        0 0
devpts            /dev/pts       devpts  gid=5,mode=620  0 0
sysfs             /sys           sysfs   defaults        0 0
proc              /proc          proc    defaults        0 0
LABEL=SWAP-sda5   swap           swap    defaults        0 0
/dev/sda6         /backup-files  ext3    defaults        0 0

# fdisk -l
   Device Boot   Start     End       Blocks
/dev/sda1   *        1      13       104391
/dev/sda2           14    1925     15358140
/dev/sda3         1926    3200    10241437+
/dev/sda4         3201    4864     13366080
/dev/sda5         3201    3391     15341...   (Linux swap / Solaris)
/dev/sda6
Note
If using standard redirection, the '-f' option must be passed separately.

# dump -0u -f - /dev/sda1 | ssh root@remoteserver.example.com dd of=/tmp/sda1.dump

6.5. Restore an ext2/3/4 File System

Procedure 6.2. Restore an ext2/3/4 File System Example
1. If you are restoring an operating system partition, boot your system into Rescue Mode. This step is not required for ordinary data partitions.
2. Rebuild sda1/sda2/sda3/sda4/sda5 by using the fdisk command.
# mkdir /mnt/sda3
# mount -t ext3 /dev/sda3 /mnt/sda3
# mkdir /backup-files
# mount -t ext3 /dev/sda6 /backup-files

6. Restore the data.

# cd /mnt/sda1
# restore -rf /backup-files/sda1.dump
# cd /mnt/sda2
# restore -rf /backup-files/sda2.dump
# cd /mnt/sda3
# restore -rf /backup-files/sda3.dump

If you want to restore from a remote host or restore from a backup file on a remote host, you can use either ssh or rsh.
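A sketch of a remote restore over ssh, assuming the dump was saved on the remote host as shown earlier; restore reads the dump from standard input when given -f -:

# cd /mnt/sda1
# ssh root@remoteserver.example.com "cat /tmp/sda1.dump" | restore -rf -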
e2image
Saves critical ext2, ext3, or ext4 file system metadata to a file.

For more information about these utilities, refer to their respective man pages.
Chapter 7. Global File System 2
The Red Hat Global File System 2 (GFS2) is a native file system that interfaces directly with the Linux kernel file system interface (VFS layer). When implemented as a cluster file system, GFS2 employs distributed metadata and multiple journals. GFS2 is based on 64-bit architecture, which can theoretically accommodate an 8 exabyte file system.
Chapter 8. The XFS File System
XFS is a highly scalable, high-performance file system which was originally designed at Silicon Graphics, Inc. It was created to support extremely large filesystems (up to 16 exabytes), files (8 exabytes) and directory structures (tens of millions of entries).

Main Features
XFS supports metadata journaling, which facilitates quicker crash recovery. The XFS file system can also be defragmented and enlarged while mounted and active.
Example 8.1. mkfs.xfs command output
Below is a sample output of the mkfs.xfs command:
# mount /dev/device /mount/point

XFS also supports several mount options to influence behavior.
XFS allocates inodes to reflect their on-disk location by default. However, because some 32-bit userspace applications are not compatible with inode numbers greater than 2^32, XFS will allocate all inodes in disk locations which result in 32-bit inode numbers.
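On very large file systems this default constrains where inodes can be placed; the inode64 mount option lifts the restriction where 64-bit inode numbers are acceptable. A minimal sketch, to be used only if all applications accessing the file system handle 64-bit inode numbers:

# mount -o inode64 /dev/device /mount/point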
df
Shows free and used counts for blocks and inodes.

In contrast, xfs_quota also has an expert mode. The sub-commands of this mode allow actual configuration of limits, and are available only to users with elevated privileges. To use expert mode sub-commands interactively, run xfs_quota -x. Expert mode sub-commands include:

report /path
Reports quota information for a specific file system.

limit
Modify quota limits.
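Expert sub-commands can also be passed non-interactively with the -c option; a sketch with an illustrative user name, limits, and mount point:

# xfs_quota -x -c 'report' /home
# xfs_quota -x -c 'limit isoft=500 ihard=700 john' /home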
Important
While real-time blocks (rtbhard/rtbsoft) are described in man xfs_quota as valid units when setting quotas, the real-time sub-volume is not enabled in this release. As such, the rtbhard and rtbsoft options are not applicable.

Setting Project Limits
Before configuring limits for project-controlled directories, add them first to /etc/projects. Project names can be added to /etc/projid to map project IDs to project names.
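A sketch of the full sequence for a directory-tree (project) quota, assuming the file system was mounted with project quotas enabled; the project ID, name, directory, and limit are illustrative:

# echo 11:/var/log >> /etc/projects
# echo logfiles:11 >> /etc/projid
# xfs_quota -x -c 'project -s logfiles' /var
# xfs_quota -x -c 'limit -p bhard=1g logfiles' /var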
# xfs_repair /dev/device

The xfs_repair utility is highly scalable and is designed to repair even very large file systems with many inodes efficiently. Unlike other Linux file systems, xfs_repair does not run at boot time, even when an XFS file system was not cleanly unmounted. In the event of an unclean unmount, the log is simply replayed at mount time, ensuring a consistent file system.
XFS file system backup and restoration involves two utilities: xfsdump and xfsrestore.
To back up or dump an XFS file system, use the xfsdump utility. Red Hat Enterprise Linux 6 supports backups to tape drives or regular file images, and also allows multiple dumps to be written to the same tape. The xfsdump utility also allows a dump to span multiple tapes, although only one dump can be written to a regular file.
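For example, a full (level 0) dump of a mounted XFS file system may be written to the first tape drive or to a regular file; the paths below are illustrative:

# xfsdump -l 0 -f /dev/st0 /mnt/xfs
# xfsdump -l 0 -f /backup/xfs.dump /mnt/xfs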
     start: ino 0 offset 0
     end: ino 1 offset 0
     interrupted: NO
     media files: 1
     media file 0:
       mfile index: 0
       mfile type: data
       mfile size: 21016
       mfile start: ino 0 offset 0
       mfile end: ino 1 offset 0
       media label: "my_dump_media_label"
       media id: 4a518062-2a8f-4f17-81fd-bb1eb2e3cb4f
xfsrestore: Restore Status: SUCCESS

Simple Mode for xfsrestore
The simple mode allows users to restore an entire file system from a level 0 dump.
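A sketch of a simple-mode restore of a level 0 dump from tape into a destination directory; the paths are illustrative:

# xfsrestore -f /dev/st0 /mnt/destination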
For more information about dumping and restoring XFS file systems, refer to man xfsdump and man xfsrestore.

8.8. Other XFS File System Utilities
Red Hat Enterprise Linux 6 also features other utilities for managing XFS file systems:

xfs_fsr
Used to defragment mounted XFS file systems. When invoked with no arguments, xfs_fsr defragments all regular files in all mounted XFS file systems.
Chapter 9. Network File System (NFS)
A Network File System (NFS) allows remote hosts to mount file systems over a network and interact with those file systems as though they are mounted locally. This enables system administrators to consolidate resources onto centralized servers on the network.
This chapter focuses on fundamental NFS concepts and supplemental information.

9.1. How NFS Works
Currently, there are three versions of NFS.
Important
In order for NFS to work with a default installation of Red Hat Enterprise Linux with a firewall enabled, configure IPTables with the default TCP port 2049. Without proper IPTables configuration, NFS will not function properly.
The NFS initialization script and rpc.nfsd process now allow binding to any specified port during system start up. However, this can be error-prone if the port is unavailable, or if it conflicts with another daemon.

9.1.1.
The following RPC processes facilitate NFS services:

rpc.mountd
This process is used by an NFS server to process MOUNT requests from NFSv2 and NFSv3 clients. It checks that the requested NFS share is currently exported by the NFS server, and that the client is allowed to access it. If the mount request is allowed, the rpc.mountd server replies with a Success status and provides the File-Handle for this NFS share back to the NFS client.
$ lsmod | grep nfs_layout_nfsv41_files

Another way to verify a successful NFSv4.1 mount is with the mount command. The mount entry in the output should contain minorversion=1.

Important
The protocol allows for three possible pNFS layout types: files, objects, and blocks. However, the Red Hat Enterprise Linux 6.4 client only supports the files layout type, so it will use pNFS only when the server also supports the files layout type.
9.3.1. Mounting NFS File Systems using /etc/fstab
An alternate way to mount an NFS share from another machine is to add a line to the /etc/fstab file. The line must state the hostname of the NFS server, the directory on the server being exported, and the directory on the local machine where the NFS share is to be mounted. You must be root to modify the /etc/fstab file.

Example 9.1.
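A sketch of such an /etc/fstab entry; the server name, export path, mount point, and options are illustrative:

server:/usr/local/pub    /pub    nfs    defaults    0 0

The mount point (/pub in this sketch) must already exist on the client machine.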
Chapt er 9 . Net work File Syst em (NFS) Important The nfs-utils package is now a part of both the 'NFS file server' and the 'Network File System Client' groups. As such, it is no longer installed by default with the Base group. Ensure that nfs-utils is installed on the system first before attempting to automount an NFS share. autofs is also part of the 'Network File System Client' group. auto fs uses /etc/auto . master (master map) as its default primary configuration file.
increasingly better at handling the NSS configuration, it is still not complete. Autofs version 5, on the other hand, is a complete implementation.
Refer to man nsswitch.conf for more information on the supported syntax of this file. Not all NSS databases are valid map sources and the parser will reject ones that are invalid. Valid sources are files, yp, nis, nisplus, ldap, and hesiod.
The following is a sample line from the /etc/auto.master file (displayed with cat /etc/auto.master):

/home /etc/auto.misc

The general format of maps is similar to the master map, however the "options" appear between the mount point and the location instead of at the end of the entry as in the master map:

mount-point [options] location

The variables used in this format are:

mount-point
This refers to the autofs mount point.
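A sketch of a map entry in this format; the key, options, and server path are illustrative:

payroll  -fstype=nfs  personnel:/exports/payroll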
Red Hat Ent erprise Linux 6 St orage Administ rat ion G uide # servi ce auto fs status 9.4 .3. Overriding or Augment ing Sit e Configurat ion Files It can be useful to override site defaults for a specific mount point on a client system. For example, consider the following conditions: Automounter maps are stored in NIS and the /etc/nsswi tch. co nf file has the following directive: automount: files nis The auto . master file contains the following +auto.master The NIS auto .
Chapt er 9 . Net work File Syst em (NFS) This last example works as expected because auto fs does not include the contents of a file map of the same name as the one it is reading. As such, auto fs moves on to the next map source in the nsswi tch configuration. 9.4 .4 . Using LDAP t o St ore Aut omount er Maps LD AP client libraries must be installed on all systems configured to retrieve automounter maps from LD AP.
# /home, auto.master, example.com
dn: automountMapName=auto.master,dc=example,dc=com
objectClass: automount
cn: /home
automountKey: /home
automountInformation: auto.home

# extended LDIF
#
# LDAPv3
# base <> with scope subtree
# filter: (&(objectclass=automountMap)(automountMapName=auto.home))
# requesting: ALL
#

# auto.home, example.com
dn: automountMapName=auto.home,dc=example,dc=com
objectClass: automountMap
automountMapName: auto.home
lookupcache=mode
Specifies how the kernel should manage its cache of directory entries for a given mount point. Valid arguments for mode are all, none, or pos/positive.

nfsvers=version
Specifies which version of the NFS protocol to use, where version is 2, 3, or 4. This is useful for hosts that run multiple NFS servers. If no version is specified, NFS uses the highest version supported by the kernel and mount command.
setting is sec=sys, which uses local UNIX UIDs and GIDs by using AUTH_SYS to authenticate NFS operations.
sec=krb5 uses Kerberos V5 instead of local UNIX UIDs and GIDs to authenticate users.
sec=krb5i uses Kerberos V5 for user authentication and performs integrity checking of NFS operations using secure checksums to prevent data tampering.
# service nfs restart

The condrestart (conditional restart) option only starts nfs if it is currently running. This option is useful for scripts, because it does not start the daemon if it is not running.
To conditionally restart the server type:

# service nfs condrestart

To reload the NFS server configuration file without restarting the service type:

# service nfs reload

9.7.
It is possible to specify multiple hosts, along with specific options for each host. To do so, list them on the same line as a space-delimited list, with each hostname followed by its respective options (in parentheses), as in:

export host1(options1) host2(options2) host3(options3)

For information on different methods for specifying hostnames, refer to Section 9.7.4, "Hostname Formats".
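A concrete sketch of such a line, with illustrative hostnames and export options:

/exported/directory  server1.example.com(rw,sync)  server2.example.com(ro)  *.example.com(ro,all_squash)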
By default, access control lists (ACLs) are supported by NFS under Red Hat Enterprise Linux. To disable this feature, specify the no_acl option when exporting the file system.
Each default for every exported file system must be explicitly overridden. For example, if the rw option is not specified, then the exported file system is shared as read-only.
Causes all directories listed in /etc/exports to be exported by constructing a new export list in /var/lib/nfs/etab. This option effectively refreshes the export list with any changes made to /etc/exports.

-a
Causes all directories to be exported or unexported, depending on what other options are passed to /usr/sbin/exportfs. If no other options are specified, /usr/sbin/exportfs exports all file systems specified in /etc/exports.
Controls which TCP and UDP port mountd (rpc.mountd) uses.

STATD_PORT=port
Controls which TCP and UDP port status (rpc.statd) uses.

LOCKD_TCPPORT=port
Controls which TCP port nlockmgr (lockd) uses.

LOCKD_UDPPORT=port
Controls which UDP port nlockmgr (lockd) uses.

If NFS fails to start, check /var/log/messages. Normally, NFS will fail to start if you specify a port number that is already in use.
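A sketch of pinning these ports in /etc/sysconfig/nfs so they can be opened in a firewall; the port numbers are illustrative, and any unused ports will work:

MOUNTD_PORT=892
STATD_PORT=662
LOCKD_TCPPORT=32803
LOCKD_UDPPORT=32769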
# mount myserver:/ /mnt/
# cd /mnt/exports
# ls exports
foo
bar

On servers that support both NFSv4 and either NFSv2 or NFSv3, both methods will work and give the same results.

Note
Before Red Hat Enterprise Linux 6, on older NFS servers, depending on how they are configured, it is possible to export filesystems to NFSv4 clients at different paths. Because these servers do not enable NFSv4 by default this should not normally be a problem.

9.7.4. Hostname Formats
2. Ensure the package that provides the nfs-rdma service is installed and the service is enabled with the following command:

# yum install rdma; chkconfig --level 345 nfs-rdma on

3. Ensure that the RDMA port is set to the preferred port (default for Red Hat Enterprise Linux 6 is 2050). To do so, edit the /etc/rdma/rdma.conf file to set NFSoRDMA_LOAD=yes and NFSoRDMA_PORT to the desired port.
4.
Red Hat Ent erprise Linux 6 St orage Administ rat ion G uide Wildcards should be used sparingly when exporting directories through NFS, as it is possible for the scope of the wildcard to encompass more systems than intended. It is also possible to restrict access to the rpcbi nd [3] service with TCP wrappers. Creating rules with i ptabl es can also limit access to ports used by rpcbi nd , rpc. mo untd , and rpc. nfsd . For more information on securing NFS and rpcbi nd , refer to man i ptabl es. 9.8.2.
Chapt er 9 . Net work File Syst em (NFS) Another important security feature of NFSv4 is the removal of the use of the MO UNT protocol for mounting file systems. This protocol presented possible security holes because of the way that it processed file handles. 9.8.3. File Permissions Once the NFS file system is mounted read/write by a remote host, the only protection each shared file has is its permissions.
# rpcinfo -p

Example 9.7.
Useful Websit es Useful Websites http://linux-nfs.org — The current site for developers where project status updates can be viewed. http://nfs.sourceforge.net/ — The old home for developers which still contains a lot of useful information. http://www.citi.umich.edu/projects/nfsv4/linux/ — An NFSv4 for Linux 2.6 kernel resource. http://www.vanemery.com/Linux/NFSv4/NFSv4-no-rpcsec.html — D escribes the details of NFSv4 with Fedora Core 2, which includes the 2.6 kernel. http://citeseer.ist.psu.
Chapter 10. FS-Cache
FS-Cache is a persistent local cache that can be used by file systems to take data retrieved from over the network and cache it on local disk. This helps minimize network traffic for users accessing data from a file system mounted over the network (for example, NFS). The following diagram is a high-level illustration of how FS-Cache works:

Figure 10.1.
Chapt er 1 0 . FS- Cache To provide caching services, FS-Cache needs a cache back-end. A cache back-end is a storage driver configured to provide caching services (i.e. cachefi l es). In this case, FS-Cache requires a mounted block-based file system that supports bmap and extended attributes (e.g. ext3) as its cache back-end.
File systems that support functionalities required by FS-Cache cache back-end include the Red Hat Enterprise Linux 6 implementations of the following file systems:
ext3 (with extended attributes enabled)
ext4
BTRFS
XFS

The host file system must support user-defined extended attributes; FS-Cache uses these attributes to store coherency maintenance information. To enable user-defined extended attributes for ext3 file systems (that is, on the block device used as the cache back-end), mount the file system with the user_xattr option or set user_xattr as a default mount option with tune2fs.
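A minimal sketch of setting user_xattr as a persistent default mount option on the cache device; the device name is illustrative:

# tune2fs -o user_xattr /dev/sdb1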
Chapt er 1 0 . FS- Cache Level 1: Server details Level 2: Some mount options; security type; FSID ; uniquifier Level 3: File Handle Level 4: Page number in file To avoid coherency management problems between superblocks, all NFS superblocks that wish to cache data have unique Level 2 keys. Normally, two NFS mounts with same source volume and options will share a superblock, and thus share the caching, even if they mount different directories within that volume. Examp le 10.1.
Red Hat Ent erprise Linux 6 St orage Administ rat ion G uide Opening a file from a shared file system for direct I/O will automatically bypass the cache. This is because this type of access must be direct to the server. Opening a file from a shared file system for writing will not work on NFS version 2 and 3. The protocols of these versions do not provide sufficient coherency management information for the client to detect a concurrent write to the same file from another client.
Important
Culling depends on both bxxx and fxxx pairs simultaneously; they cannot be treated separately.

10.5. Statistical Information
FS-Cache also keeps track of general statistical information. To view this information, use:

cat /proc/fs/fscache/stats

FS-Cache statistics include information on decision points and object counters.
Part II. Storage Administration
The Storage Administration section starts with storage considerations for Red Hat Enterprise Linux 6. Instructions regarding partitions, logical volume management, and swap partitions follow this. Disk quotas and RAID systems are next, followed by the functions of the mount command, volume_key, and ACLs. SSD tuning, write barriers, I/O limits, and diskless systems follow this.
Chapter 11. Storage Considerations During Installation
Many storage device and file system settings can only be configured at install time. Other settings, such as file system type, can only be modified up to a certain point without requiring a reformat. As such, it is prudent that you plan your storage configuration accordingly before installing Red Hat Enterprise Linux 6.
File System   Max Supported Size   Max File Offset   Max Subdirectories (per directory)   Max Depth of Symbolic Links   ACL Support   Details
Ext3          16TB                 2TB               32,000                               8                             Yes           Chapter 5, The Ext3 File System
Ext4          16TB                 16TB [a]          Unlimited                            8                             Yes           Chapter 6, The Ext4 File System
XFS           100TB [b]            100TB [c]         Unlimited                            8                             Yes           Chapter 8, The XFS File System

[a] This maximum file size is based on a 64-bit
Encrypting Block Devices Using LUKS
Formatting a block device for encryption using LUKS/dm-crypt will destroy any existing formatting on that device. As such, you should decide which devices to encrypt (if any) before the new system's storage configuration is activated as part of the installation process.
This will cause the I/O to later fail with a checksum error. This problem is common to all block device (or file system-based) buffered I/O or mmap(2) I/O, so it is not possible to work around these errors caused by overwrites.
As such, block devices with DIF/DIX enabled should only be used with applications that use O_DIRECT. Such applications should use the raw block device.
Chapter 12. File System Check
Filesystems may be checked for consistency, and optionally repaired, with filesystem-specific userspace tools. These tools are often referred to as fsck tools, where fsck is a shortened version of file system check.

Note
These filesystem checkers only guarantee metadata consistency across the filesystem; they have no awareness of the actual data contained within the filesystem and are not data recovery tools.
Red Hat Ent erprise Linux 6 St orage Administ rat ion G uide Note Later phases of consistency checking may print extra errors as it discovers inconsistencies which would have been fixed in early phases if it were running in repair mode. O p erat e f irst o n a f ilesyst em imag e Most filesystems support the creation of a metadata image, a sparse copy of the filesystem which contains only metadata.
crash. If these filesystems encounter metadata inconsistencies while mounted, they will record this fact in the filesystem superblock. If e2fsck finds that a filesystem is marked with such an error, e2fsck will perform a full check after replaying the journal (if present).
e2fsck may ask for user input during the run if the -p option is not specified. The -p option tells e2fsck to automatically do all repairs that may be done safely.
Note
Although an fsck.xfs binary is present in the xfsprogs package, this is present only to satisfy initscripts that look for an fsck.filesystem binary at boot time. fsck.xfs immediately exits with an exit code of 0.
Another thing to be aware of is that older xfsprogs packages contain an xfs_check tool. This tool is very slow and does not scale well for large filesystems. As such, it has been deprecated in favor of xfs_repair -n.
6. Link count checks.
7. Freemap checks.
8. Superblock checks.

These phases, as well as messages printed during operation, are documented in depth in the xfs_repair(8) manual page.
xfs_repair is not interactive. All operations are performed automatically with no input from the user. If it is desired to create a metadata image prior to repair for diagnostic or testing purposes, the xfs_metadump(8) and xfs_mdrestore(8) utilities may be used.

12.2.3.
Chapter 13. Partitions
The utility parted allows users to:
View the existing partition table
Change the size of existing partitions
Add partitions from free space or additional hard drives

By default, the parted package is included when installing Red Hat Enterprise Linux. To start parted, log in as root and type the command parted /dev/sda at a shell prompt (where /dev/sda is the device name for the drive you want to configure).
Command                       Description
rm minor-num                  Remove the partition
select device                 Select a different device to configure
set minor-num flag state      Set the flag on a partition; state is either on or off
toggle [NUMBER [FLAG]]        Toggle the state of FLAG on partition NUMBER
unit UNIT                     Set the default unit to UNIT

13.1. Viewing the Partition Table
After starting parted, use the command print to view the partition table. A table similar to the following appears:

Example 13.1.
Red Hat Ent erprise Linux 6 St orage Administ rat ion G uide hp-ufs sun-ufs xfs If a Fi l esystem of a device shows no value, this means that its file system type is unknown. The Fl ag s column lists the flags set for the partition. Available flags are boot, root, swap, hidden, raid, lvm, or lba. Note To select a different device without having to restart parted , use the sel ect command followed by the device name (for example, /d ev/sd a).
# mkpart primary ext3 1024 2048

Note
If you use the mkpartfs command instead, the file system is created after the partition is created. However, parted does not support creating an ext3 file system. Thus, if you wish to create an ext3 file system, use mkpart and create the file system with the mkfs command as described later.

The changes start taking place as soon as you press Enter, so review the command before executing it.
The first column should contain UUID= followed by the file system's UUID. The second column should contain the mount point for the new partition, and the next column should be the file system type (for example, ext3 or swap). If you need more information about the format, read the man page with the command man fstab.
If the fourth column is the word defaults, the partition is mounted at boot time.
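A sketch of such an entry; the UUID shown is a placeholder, so substitute the value reported by blkid for your partition, and the mount point is illustrative:

UUID=4a1e8ac0-1234-4bd7-9d8d-1a2b3c4d5e6f   /work   ext3   defaults   1 2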
Chapt er 1 3. Part it ions Warning D o not attempt to resize a partition on a device that is in use. Pro ced u re 13.4 . R esiz e a p art it io n 1. Before resizing a partition, boot into rescue mode (or unmount any partitions on the device and turn off any swap space on the device). 2. Start parted , where /d ev/sda is the device on which to resize the partition: # parted /d ev/sda 3.
Chapter 14. LVM (Logical Volume Manager)
LVM is a tool for logical volume management which includes allocating disks, striping, mirroring and resizing logical volumes.
With LVM, a hard drive or set of hard drives is allocated to one or more physical volumes. LVM physical volumes can be placed on other block devices which might span two or more disks.
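Although this chapter demonstrates the graphical LVM utility, the same objects can be created with the command-line LVM tools. The following is a minimal sketch with illustrative device and volume names: pvcreate initializes the physical volumes, vgcreate groups them into a volume group, and lvcreate carves out a logical volume that can then be formatted.

# pvcreate /dev/sdb1 /dev/sdc1
# vgcreate VolGroup01 /dev/sdb1 /dev/sdc1
# lvcreate -L 20G -n Backups VolGroup01
# mkfs.ext4 /dev/VolGroup01/Backups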
Chapt er 1 4 . LVM (Logical Volume Manager) Fig u re 14 .2. Lo g ical Vo lu mes On the other hand, if a system is partitioned with the ext3 file system, the hard drive is divided into partitions of defined sizes. If a partition becomes full, it is not easy to expand the size of the partition. Even if the partition is moved to another hard drive, the original hard drive space has to be reallocated as a different partition or not used.
Red Hat Ent erprise Linux 6 St orage Administ rat ion G uide In the example used in this section, the following are the details for the volume group that was created during the installation: Examp le 14 .1. C reat in g a vo lu me g ro u p at in st allat io n /boot - (Ext3) file system. Displayed under 'Uninitialized Entities'. (DO NOT initialize this partition). LogVol00 - (LVM) contains the (/) directory (312 extents). LogVol02 - (LVM) contains the (/home) directory (128 extents).
Chapt er 1 4 . LVM (Logical Volume Manager) Fig u re 14 .4 . Ph ysical View Win d o w The figure below illustrates the logical view for the selected volume group. The individual logical volume sizes are also illustrated. Fig u re 14 .5. Lo g ical View Win d o w On the left side column, you can select the individual logical volumes in the volume group to view more details about each. In this example the objective is to rename the logical volume name for 'LogVol03' to 'Swap'.
Red Hat Ent erprise Linux 6 St orage Administ rat ion G uide Volume window from which you can modify the Logical volume name, size (in extents, gigabytes, megabytes, or kilobytes) and also use the remaining space available in a logical volume group. The figure below illustrates this. This logical volume cannot be changed in size as there is currently no free space in the volume group. If there was remaining space, this option would be enabled (see Figure 14.17, “ Edit logical volume” ).
Chapt er 1 4 . LVM (Logical Volume Manager) In this example, partition 3 will be initialized and added to an existing volume group. To initialize a partition or unpartioned space, select the partition and click on the Ini ti al i ze Enti ty button. Once initialized, a volume will be listed in the 'Unallocated Volumes' list. 14 .2.2. Adding Unallocat ed Volumes t o a Volume Group Once initialized, a volume will be listed in the 'Unallocated Volumes' list.
Red Hat Ent erprise Linux 6 St orage Administ rat ion G uide Fig u re 14 .7. U n allo cat ed Vo lu mes Clicking on the Ad d to Exi sti ng Vo l ume G ro up button will display a pop-up window listing the existing volume groups to which you can add the physical volume you are about to initialize. A volume group may span across one or more hard disks. Examp le 14 .3. Ad d a p h ysical vo lu me t o vo lu me g ro u p In this example only one volume group exists as illustrated below.
Chapt er 1 4 . LVM (Logical Volume Manager) select one of the existing logical volumes and increase the extents (see Section 14.2.6, “ Extending a Volume Group” ), select an existing logical volume and remove it from the volume group by clicking on the R emo ve Sel ected Lo g i cal Vo l ume(s) button. You cannot select unused space to perform this operation. The figure below illustrates the logical view of 'VolGroup00' after adding the new volume group. Fig u re 14 .8.
Red Hat Ent erprise Linux 6 St orage Administ rat ion G uide Fig u re 14 .9 . Lo g ical view o f vo lu me g ro u p 14 .2.3. Migrat ing Ext ent s To migrate extents from a physical volume, select the volume from the list in the left pane, highlight the desired extents in the central window, and click on the Mi g rate Sel ected Extent(s) Fro m Vo l ume button. You need to have a sufficient number of free extents to migrate extents within a volume group.
Chapt er 1 4 . LVM (Logical Volume Manager) Fig u re 14 .10. Mig rat e Ext en t s The figure below illustrates a migration of extents in progress. In this example, the extents were migrated to 'Partition 3'. Fig u re 14 .11. Mig rat in g ext en t s in p ro g ress Once the extents have been migrated, unused space is left on the physical volume. The figure below illustrates the physical and logical view for the volume group. The extents of LogVol00 which were initially in hda2 are now in hda3.
Red Hat Ent erprise Linux 6 St orage Administ rat ion G uide Fig u re 14 .12. Lo g ical an d p h ysical view o f vo lu me g ro u p 14 .2.4 . Adding a New Hard Disk Using LVM In this example, a new ID E hard disk was added. The figure below illustrates the details for the new hard disk. From the figure below, the disk is uninitialized and not mounted. To initialize a partition, click on the Ini ti al i ze Enti ty button. For more details, see Section 14.2.1, “ Utilizing Uninitialized Entities” .
Chapt er 1 4 . LVM (Logical Volume Manager) Fig u re 14 .13. U n in it ializ ed h ard d isk 14 .2.5. Adding a New Volume Group Once initialized, LVM will add the new volume to the list of unallocated volumes where you can add it to an existing volume group or create a new volume group. You can also remove the volume from LVM. If the volume is removed from LVM, it will be added to the 'Uninitialized Entities' list, as illustrated in Figure 14.13, “ Uninitialized hard disk” . Examp le 14 .4 .
Red Hat Ent erprise Linux 6 St orage Administ rat ion G uide The figure below illustrates the physical view of the new volume group. The new logical volume named 'Backups' in this volume group is also listed. Fig u re 14 .14 . Ph ysical view o f n ew vo lu me g ro u p 14 .2.6.
Chapt er 1 4 . LVM (Logical Volume Manager) In this example, the objective was to extend the new volume group to include an uninitialized entity (partition). D oing so increases the size or number of extents for the volume group. To extend the volume group, ensure that on the left pane the Physical View option is selected within the desired Volume Group. Then click on the Extend Vo l ume G ro up button. This will display the 'Extend Volume Group' window as illustrated below.
Red Hat Ent erprise Linux 6 St orage Administ rat ion G uide Fig u re 14 .16 . Lo g ical an d p h ysical view o f an ext en d ed vo lu me g ro u p 14 .2.7. Edit ing a Logical Volume The LVM utility allows you to select a logical volume in the volume group and modify its name, size and specify file system options. In this example, the logical volume named 'Backups" was extended onto the remaining space for the volume group.
Chapt er 1 4 . LVM (Logical Volume Manager) Fig u re 14 .17. Ed it lo g ical vo lu me If you wish to mount the volume, select the 'Mount' checkbox indicating the preferred mount point. To mount the volume when the system is rebooted, select the 'Mount when rebooted' checkbox. In this example, the new volume will be mounted in /mnt/backups. This is illustrated in the figure below.
Figure 14.18. Edit logical volume - specifying mount options

The figure below illustrates the logical and physical view of the volume group after the logical volume was extended to the unused space. In this example, the logical volume named 'Backups' spans across two hard disks. A volume can be striped across two or more physical devices using LVM.
Figure 14.19. Edit logical volume

14.3. LVM References
Use these sources to learn more about LVM.

Installed Documentation
rpm -qd lvm2 — This command shows all the documentation available from the lvm package, including man pages.
lvm help — This command shows all LVM commands available.

Useful Websites
http://sources.redhat.com/lvm2 — LVM2 webpage, which contains an overview, link to the mailing lists, and more.
http://tldp.
Chapter 15. Swap Space
Swap space in Linux is used when the amount of physical memory (RAM) is full. If the system needs more memory resources and the RAM is full, inactive pages in memory are moved to the swap space. While swap space can help machines with a small amount of RAM, it should not be considered a replacement for more RAM. Swap space is located on hard drives, which have a slower access time than physical memory.
existing LVM2 logical volume. It is recommended that you extend an existing logical volume.

15.1.1. Extending Swap on an LVM2 Logical Volume
By default, Red Hat Enterprise Linux 6 uses all available space during installation. If this is the case with your system, then you must first add a new physical volume to the volume group used by the swap space.
For instructions on how to do so, refer to Section 14.2.2, "Adding Unallocated Volumes to a Volume Group".
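Once the volume group has free space, the swap volume itself can be extended. A sketch of the usual sequence, assuming /dev/VolGroup00/LogVol01 is the swap logical volume and 256 MB is being added:

# swapoff -v /dev/VolGroup00/LogVol01
# lvresize /dev/VolGroup00/LogVol01 -L +256M
# mkswap /dev/VolGroup00/LogVol01
# swapon -v /dev/VolGroup00/LogVol01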
# swapon -v /dev/VolGroup00/LogVol02

To test if the logical volume was successfully created, use cat /proc/swaps or free to inspect the swap space.

15.1.3. Creating a Swap File
To add a swap file:

Procedure 15.2. Add a swap file
1. Determine the size of the new swap file in megabytes and multiply by 1024 to determine the number of blocks. For example, the block size of a 64 MB swap file is 65536.
2.
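A sketch of creating and enabling such a swap file, using the 64 MB example above; the file name and location are illustrative:

# dd if=/dev/zero of=/swapfile bs=1024 count=65536
# mkswap /swapfile
# chmod 0600 /swapfile
# swapon /swapfile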
To reduce an LVM2 swap logical volume (assuming /dev/VolGroup00/LogVol01 is the volume you want to reduce):

Procedure 15.3. Reducing an LVM2 swap logical volume
1. Disable swapping for the associated logical volume:
# swapoff -v /dev/VolGroup00/LogVol01
2. Reduce the LVM2 logical volume by 512 MB:
# lvreduce /dev/VolGroup00/LogVol01 -L -512M
3. Format the new swap space:
# mkswap /dev/VolGroup00/LogVol01
4.
# swapoff -v /swapfile

2. Remove its entry from the /etc/fstab file.
3. Remove the actual file:
# rm /swapfile

15.3. Moving Swap Space
To move swap space from one location to another, follow the steps for removing swap space, and then follow the steps for adding swap space.
Chapter 16. Disk Quotas
Disk space can be restricted by implementing disk quotas which alert a system administrator before a user consumes too much disk space or a partition becomes full.
Disk quotas can be configured for individual users as well as user groups. This makes it possible to manage the space allocated for user-specific files (such as email) separately from the space allocated to the projects a user works on (assuming the projects are given their own groups).
none                       /sys    sysfs   defaults                     0 0
/dev/VolGroup00/LogVol02   /home   ext3    defaults,usrquota,grpquota   1 2
/dev/VolGroup00/LogVol01   swap    swap    defaults                     0 0
. . .

In this example, the /home file system has both user and group quotas enabled.

Note
The following examples assume that a separate /home partition was created during the installation of Red Hat Enterprise Linux.
If neither the -u nor -g options are specified, only the user quota file is created. If only -g is specified, only the group quota file is created.
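A sketch of creating the quota files and then checking current usage, assuming quotas were enabled on /home as in the fstab example above and the file system is mounted:

# quotacheck -cug /home
# quotacheck -avug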
The first column is the name of the file system that has a quota enabled for it. The second column shows how many blocks the user is currently using. The next two columns are used to set soft and hard block limits for the user on the file system. The inodes column shows how many inodes the user is currently using. The last two columns are used to set the soft and hard inode limits for the user on the file system.
16.1.6. Setting the Grace Period for Soft Limits
If a given quota has soft limits, you can edit the grace period (i.e. the amount of time a soft limit can be exceeded) with the following command:

# edquota -t

This command works on quotas for inodes or blocks, for either users or groups.

Important
While other edquota commands operate on quotas for a particular user or group, the -t option operates on every file system with quotas enabled.

16.2.
Creating a disk usage report entails running the repquota utility.

Example 16.5.
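A sketch of running the report for a single quota-enabled file system and for all of them; the mount point is illustrative:

# repquota /home
# repquota -a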
Chapter 17. Redundant Array of Independent Disks (RAID)
The basic idea behind RAID is to combine multiple small, inexpensive disk drives into an array to accomplish performance or redundancy goals not attainable with one large and expensive drive. This array of drives appears to the computer as a single logical storage unit or drive.
RAID allows information to be spread across several disks.
Soft ware RAID Software RAID Software RAID implements the various RAID levels in the kernel disk (block device) code. It offers the cheapest possible solution, as expensive disk controller cards or hot-swap chassis [4] are not required. Software RAID also works with cheaper ID E disks as well as SCSI disks. With today's faster CPUs, Software RAID also generally outperforms Hardware RAID .
Red Hat Ent erprise Linux 6 St orage Administ rat ion G uide capacity of the smallest member disk in a Hardware RAID or the capacity of smallest member partition in a Software RAID multiplied by the number of disks or partitions in the array. Level 1 RAID level 1, or " mirroring," has been used longer than any other form of RAID . Level 1 provides redundancy by writing identical data to each member disk of the array, leaving a " mirrored" copy on each disk.
Linux Hardware RAID cont roller drivers two drives in the array. This complex parity scheme creates a significantly higher CPU burden on software RAID devices and also imposes an increased burden during write transactions. As such, level 6 is considerably more asymmetrical in performance than levels 4 and 5.
Red Hat Ent erprise Linux 6 St orage Administ rat ion G uide dmraid Device-mapper RAID or d mrai d refers to device-mapper kernel code that offers the mechanism to piece disks together into a RAID set. This same kernel code does not provide any RAID configuration mechanism. d mrai d is configured entirely in user-space, making it easy to support various on-disk metadata formats. As such, d mrai d is used on a wide variety of firmware RAID implementations.
dmraid As the name suggests, d mrai d is used to manage device-mapper RAID sets. The d mrai d tool finds ATARAID devices using multiple metadata format handlers, each supporting various formats. For a complete list of supported formats, run d mrai d -l . As mentioned earlier in Section 17.3, “ Linux RAID Subsystems” , the d mrai d tool cannot configure RAID sets after creation. For more information about using d mrai d , refer to man d mrai d . 17.6.
[5] RAID level 1 comes at a high cost because you write the same information to all of the disks in the array; this provides data reliability, but in a much less space-efficient manner than parity-based RAID levels such as level 5.
Chapter 18. Using the mount Command

On Linux, UNIX, and similar operating systems, file systems on different partitions and removable devices (CDs, DVDs, or USB flash drives, for example) can be attached to a certain point (the mount point) in the directory tree, and then detached again. To attach or detach a file system, use the mount or umount command respectively.
To list such mount points using the findmnt command, type:

~]$ findmnt -t ext4
TARGET SOURCE    FSTYPE OPTIONS
/      /dev/sda2 ext4   rw,relatime,seclabel,barrier=1,data=ordered
/boot  /dev/sda1 ext4   rw,relatime,seclabel,barrier=1,data=ordered

18.2.
Note: Determining the UUID and Label of a Particular Device

To determine the UUID of a particular device, and the label if the device uses one, use the blkid command in the following form:

blkid device

For example, to display information about /dev/sda3, type:

~]# blkid /dev/sda3
/dev/sda3: LABEL="home" UUID="34795a28-ca6d-4fd8-a347-73671d0c19cb" TYPE="ext3"

18.2.1.
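Building on the blkid output above, a device can also be mounted by its UUID instead of its path; a minimal hedged sketch, reusing the UUID from the example and assuming /home as the mount point:

mount UUID="34795a28-ca6d-4fd8-a347-73671d0c19cb" /home   # mount by UUID rather than /dev/sda3
findmnt /home                                             # confirm the mount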
Older USB flash drives often use the FAT file system. Assuming that such a drive uses the /dev/sdc1 device and that the /media/flashdisk/ directory exists, mount it to this directory by typing the following at a shell prompt as root:

~]# mount -t vfat /dev/sdc1 /media/flashdisk

18.2.2.
18.2.3. Sharing Mounts

Occasionally, certain system administration tasks require access to the same file system from more than one place in the directory tree (for example, when preparing a chroot environment). This is possible, and Linux allows you to mount the same file system to as many directories as necessary. Additionally, the mount command implements the --bind option, which provides a means for duplicating certain mounts; a brief sketch follows.
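A minimal hedged illustration of --bind; the directory names are assumptions for the example:

mkdir -p /srv/chroot/data              # assumed second location
mount --bind /data /srv/chroot/data    # make the file system mounted at /data visible there as well
findmnt /srv/chroot/data               # verify the duplicated mount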
~]# mount /dev/cdrom /media/cdrom
~]# ls /media/cdrom
EFI  GPL  isolinux  LiveOS
~]# ls /mnt/cdrom
EFI  GPL  isolinux  LiveOS

Similarly, it is possible to verify that any file system mounted in the /mnt directory is reflected in /media.
~]# ls /mnt/cdrom
EFI  GPL  isolinux  LiveOS

Also verify that file systems mounted in the /mnt directory are not reflected in /media. For instance, if a non-empty USB flash drive that uses the /dev/sdc1 device is plugged in and the /mnt/flashdisk/ directory is present, type:

~]# mount /dev/sdc1 /mnt/flashdisk
~]# ls /media/flashdisk
~]# ls /mnt/flashdisk
en-US  publican.

It is also possible to verify that file systems mounted in the /mnt directory are not reflected in /media. For instance, if a non-empty USB flash drive that uses the /dev/sdc1 device is plugged in and the /mnt/flashdisk/ directory is present, type:

~]# mount /dev/sdc1 /mnt/flashdisk
~]# ls /media/flashdisk
~]# ls /mnt/flashdisk
en-US  publican.
An NFS storage contains user directories and is already mounted in /mnt/userdirs/. As root, move this mount point to /home by using the following command:

~]# mount --move /mnt/userdirs /home

To verify the mount point has been moved, list the content of both directories:

~]# ls /mnt/userdirs
~]# ls /home
jill  joe

18.3.
The following resources provide in-depth documentation on the subject.

18.4.1. Manual Page Documentation

man 8 mount - the manual page for the mount command, with full documentation on its usage.
man 8 umount - the manual page for the umount command, with full documentation on its usage.
man 8 findmnt - the manual page for the findmnt command, with full documentation on its usage.
Chapter 19. The volume_key function

The volume_key function provides two tools, libvolume_key and volume_key. libvolume_key is a library for manipulating storage volume encryption keys and storing them separately from volumes. volume_key is an associated command line tool used to extract keys and passphrases in order to restore access to an encrypted hard drive.
This operation does not permanently alter the volume (by adding a new passphrase, for example). The user can access and modify the decrypted volume, modifying the volume in the process.

--reencrypt, --secrets, and --dump
These three commands perform similar functions with varying output methods. They each require the operand packet, and each opens the packet, decrypting it where necessary.
volume_key --save /path/to/volume -o escrow-packet

A prompt will then appear requiring an escrow packet passphrase to protect the key.

2. Save the generated escrow-packet file, ensuring that the passphrase is not forgotten. If the volume passphrase is forgotten, use the saved escrow packet to restore access to the data.

Procedure 19.2. Restore access to data with an escrow packet
1.
At this point it is possible to choose an NSS database password. Each NSS database can have a different password, so the designated users do not need to share a single password if a separate NSS database is used by each user.

C. Run:
pk12util -d /the/nss/directory -i the-pkcs12-file

4. Distribute the certificate to anyone installing systems or saving keys on existing systems.
5.
After providing the NSS database password, the designated user chooses a passphrase for encrypting escrow-packet-out. This passphrase can be different every time and only protects the encryption keys while they are moved from the designated user to the target system.
3. Obtain the escrow-packet-out file and the passphrase from the designated user.
4.
Red Hat Ent erprise Linux 6 St orage Administ rat ion G uide Chapter 20. Access Control Lists Files and directories have permission sets for the owner of the file, the group associated with the file, and all other users for the system. However, these permission sets have limitations. For example, different permissions cannot be configured for different users. Thus, Access Control Lists (ACLs) were implemented.
4. For users not in the user group for the file

The setfacl utility sets ACLs for files and directories. Use the -m option to add or modify the ACL of a file or directory:

# setfacl -m rules files

Rules (rules) must be specified in the following formats. Multiple rules can be specified in the same command if they are separated by commas.

u:uid:perms
Sets the access ACL for a user. The user name or UID may be specified. The user may be any valid user on the system.
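For instance, a hedged example of the u:uid:perms form; the user name andrius and the file path are assumptions:

setfacl -m u:andrius:rw /project/somefile    # grant read and write access to the assumed user
getfacl /project/somefile                    # confirm the new ACL entry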
To set a default ACL, add d: before the rule and specify a directory instead of a file name.

Example 20.3. Setting default ACLs
For example, to set the default ACL for the /share/ directory to read and execute for users not in the user group (an access ACL for an individual file can override it):

# setfacl -m d:o:rx /share

20.4. Retrieving ACLs

To determine the existing ACLs for a file or directory, use the getfacl command.
addition, the -a option (equivalent to -dR --preserve=all) of cp also preserves ACLs during a backup along with other information such as timestamps, SELinux contexts, and the like. For more information about dump, tar, or cp, refer to their respective man pages.

The star utility is similar to the tar utility in that it can be used to generate archives of files; however, some of its options are different. Refer to Table 20.
man acl - description of ACLs
man getfacl - discusses how to get file access control lists
man setfacl - explains how to set file access control lists
man star - explains more about the star utility and its many options
Chapt er 2 1 . Solid- St at e Disk Deployment G uidelines Chapter 21. Solid-State Disk Deployment Guidelines Solid-state disks (SSD ) are storage devices that use NAND flash chips to persistently store data. This sets them apart from previous generations of disks, which store data in rotating, magnetic platters.
As of Red Hat Enterprise Linux 6.4, ext4 and XFS are the only fully supported file systems that support discard. In previous versions of Red Hat Enterprise Linux 6, only ext4 fully supported discard. To enable discard commands on a device, use the mount option discard. For example, to mount /dev/sda2 to /mnt with discard enabled, run:

# mount -t ext4 -o discard /dev/sda2 /mnt

By default, ext4 does not issue the discard command.
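Where online (mount-time) discard is not wanted, unused blocks can instead be discarded in a batch; a hedged sketch using fstrim on the mount point from the example above:

fstrim -v /mnt    # discard all unused blocks on the file system mounted at /mnt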
Chapt er 2 2 . Writ e Barriers Chapter 22. Write Barriers A write barrier is a kernel mechanism used to ensure that file system metadata is correctly written and ordered on persistent storage, even when storage devices with volatile write caches lose power. File systems with write barriers enabled also ensure that data transmitted via fsync() is persistent throughout a power loss. Enabling write barriers incurs a substantial performance penalty for some applications.
Red Hat Ent erprise Linux 6 St orage Administ rat ion G uide To mitigate the risk of data corruption during power loss, some storage devices use battery-backed write caches. Generally, high-end arrays and some hardware controllers use battery-backed write caches. However, because the cache's volatility is not visible to the kernel, Red Hat Enterprise Linux 6 enables write barriers by default on all supported journaling file systems.
Most controllers use vendor-specific tools to query and manipulate target drives. For example, the LSI MegaRAID SAS controller uses a battery-backed write cache; this type of controller requires the MegaCli64 tool to manage target drives.
Red Hat Ent erprise Linux 6 St orage Administ rat ion G uide Chapter 23. Storage I/O Alignment and Size Recent enhancements to the SCSI and ATA standards allow storage devices to indicate their preferred (and in some cases, required) I/O alignment and I/O size. This information is particularly useful with newer disk drives that increase the physical sector size from 512 bytes to 4k bytes. This information may also be beneficial for RAID devices, where the chunk size and stripe size may impact performance.
Storage vendors can also supply I/O hints about the preferred minimum unit for random I/O (minimum_io_size) and streaming I/O (optimal_io_size) of a device. For example, minimum_io_size and optimal_io_size may correspond to a RAID device's chunk size and stripe size respectively.

23.2. Userspace Access

Always take care to use properly aligned and sized I/O. This is especially important for Direct I/O access.
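The logical_block_size, minimum_io_size, and related parameters described above are exported through sysfs, which is one straightforward way for userspace to check them; a hedged sketch for an assumed device sda:

cat /sys/block/sda/queue/logical_block_size    # logical sector size in bytes
cat /sys/block/sda/queue/physical_block_size   # physical sector size in bytes
cat /sys/block/sda/queue/minimum_io_size       # preferred minimum unit for random I/O
cat /sys/block/sda/queue/optimal_io_size       # preferred unit for streaming I/O
cat /sys/block/sda/alignment_offset            # offset of the first naturally aligned logical block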
BLKSSZGET: logical_block_size
BLKIOMIN: minimum_io_size
BLKIOOPT: optimal_io_size

23.3. Standards

This section describes I/O standards used by ATA and SCSI devices.

ATA
ATA devices must report appropriate information via the IDENTIFY DEVICE command. ATA devices only report I/O parameters for physical_block_size, logical_block_size, and alignment_offset.
ut il- linux- ng's libblkid and fdisk All layers of the Linux I/O stack have been engineered to propagate the various I/O parameters up the stack. When a layer consumes an attribute or aggregates many devices, the layer must expose appropriate I/O parameters so that upper-layer devices or tools will have an accurate view of the storage as it transformed.
determine the I/O parameters of a device for optimal placement of all partitions. The fdisk utility will align all partitions on a 1 MB boundary.

parted and libparted
The libparted library from parted also uses the I/O parameters API of libblkid. The Red Hat Enterprise Linux 6 installer (Anaconda) uses libparted, which means that all partitions created by either the installer or parted will be properly aligned.
Chapter 24. Setting Up A Remote Diskless System

The Network Booting Service (provided by system-config-netboot) is no longer available in Red Hat Enterprise Linux 6. Deploying diskless systems is now possible in this release without the use of system-config-netboot.
After configuring a tftp server, you need to set up a DHCP service on the same host machine. Refer to the Red Hat Enterprise Linux 6 Deployment Guide for instructions on how to set up a DHCP server. In addition, you should enable PXE booting on the DHCP server; to do this, add the following configuration to /etc/dhcp/dhcp.
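As a hedged illustration of a PXE-enabling DHCP configuration fragment (the subnet, address range, and server address are assumptions for the example, not recommended values):

allow booting;
allow bootp;
subnet 192.168.0.0 netmask 255.255.255.0 {
    range 192.168.0.20 192.168.0.100;
    next-server 192.168.0.1;       # assumed address of the tftp server
    filename "pxelinux.0";         # boot loader served from the tftp root
}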
# cp /boot/vmlinuz-kernel-version /var/lib/tftpboot/
3. Create the initrd (i.e. initramfs-kernel-version.img) with network support:
# dracut initramfs-kernel-version.img kernel-version
Copy the resulting initramfs-kernel-version.img into the tftp boot directory as well.
4. Edit the default boot configuration to use the initrd and kernel inside /var/lib/tftpboot.
Chapter 25. Online Storage Management

It is often desirable to add, remove, or re-size storage devices while the operating system is running, and without rebooting. This chapter outlines the procedures that may be used to reconfigure storage devices on Red Hat Enterprise Linux 6 host systems while the system is running. It covers iSCSI and Fibre Channel storage interconnects; other interconnect types may be added in the future.
port_name - 64-bit port name

Remote Port: /sys/class/fc_remote_ports/rport-H:B-R/
port_id
node_name
port_name
dev_loss_tmo - number of seconds to wait before marking a link as "bad". Once a link is marked bad, I/O running on its corresponding path (along with any new I/O on that path) will be failed. The default dev_loss_tmo value varies, depending on which driver/device is used.
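These attributes can be read and, where writable, adjusted directly through sysfs; a hedged sketch, with the rport instance name assumed for the example:

cat /sys/class/fc_remote_ports/rport-2:0-0/dev_loss_tmo        # current timeout in seconds
echo 30 > /sys/class/fc_remote_ports/rport-2:0-0/dev_loss_tmo  # wait 30 seconds before marking the link bad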
                               lpfc   qla2xxx   zfcp    mptfc   bfa
Remote Port dev_loss_tmo        X       X        X       X       X
Remote Port fast_io_fail_tmo    X       X [a]    X [b]           X
Host port_id                    X       X        X       X       X
Host issue_lip                  X       X                        X

[a] Supported as of Red Hat Enterprise Linux 5.4
[b] Supported as of Red Hat Enterprise Linux 6.0

25.2. iSCSI

This section describes the iSCSI API and the iscsiadm utility.
Chapt er 2 5. O nline St orage Management This command displays the session/device state, session ID (sid), some negotiated parameters, and the SCSI devices accessible through the session. For shorter output (for example, to display only the sid-to-node mapping), run: # iscsiadm -m session -P 0 or # iscsiadm -m session These commands print the list of running sessions with the format: driver [sid] target_ip:port,target_portal_group_tag proper_target_name Examp le 25.1.
service tgtd start

Stopping the tgtd service
To stop the tgtd service, run:
service tgtd stop
If there are open connections, use:
service tgtd force-stop

Warning
Using this command will terminate all target arrays.

25.3. Persistent Naming

The operating system issues I/O to a storage device by referencing the path that is used to reach it.
In addition, path-based names are system-specific. This can cause unintended data changes when the device is accessed by multiple systems, such as in a cluster. For these reasons, several persistent, system-independent methods for identifying devices have been developed. The following sections discuss these in detail.

25.3.1. WWID

The World Wide Identifier (WWID) can be used to reliably identify devices.
Red Hat Ent erprise Linux 6 St orage Administ rat ion G uide When the user_fri end l y_names feature (of d evice- map p er- mu lt ip at h ) is used, the WWID is mapped to a name of the form /d ev/mapper/mpathn. By default, this mapping is maintained in the file /etc/mul ti path/bi nd i ng s. These mpathn names are persistent as long as that file is maintained. Important If you use user_fri end l y_names, then additional steps are required to obtain consistent names in a cluster.
Procedure 25.1. Ensuring a Clean Device Removal
1. Close all users of the device and back up device data as needed.
2. Use umount to unmount any file systems that mounted the device.
3. Remove the device from any md and LVM volume using it.
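The remaining steps of a clean removal flush outstanding I/O and delete the SCSI device node; as a hedged sketch of those final operations for an assumed single-path device sdb:

blockdev --flushbufs /dev/sdb            # flush any outstanding I/O to the device
echo 1 > /sys/block/sdb/device/delete    # remove the device from the kernel's view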
Procedure 25.2. Removing a Path to a Storage Device
1. Remove any reference to the device's path-based name, like /dev/sd or /dev/disk/by-path or the major:minor number, in applications, scripts, or utilities on the system. This is important in ensuring that different devices added in the future will not be mistaken for the current device.
2. Take the path offline using echo offline > /sys/block/sda/device/state.
Note
The older form of this command, echo "scsi add-single-device 0 0 0 0" > /proc/scsi/scsi, is deprecated.

a. In some Fibre Channel hardware, a newly created LUN on the RAID array may not be visible to the operating system until a Loop Initialization Protocol (LIP) operation is performed. Refer to Section 25.9, "Scanning Storage Interconnects" for instructions on how to do this.
another device that is already configured on the same path as the new device. This can be done with various commands, such as lsscsi, scsi_id, multipath -l, and ls -l /dev/disk/by-*. This information, plus the LUN number of the new device, can be used as shown above to probe and configure that path to the new device.
3.
Chapt er 2 5. O nline St orage Management Note These commands will only work if the d cbd settings for the Ethernet interface were not changed. 5. Load the FCoE device now using: # ifconfig ethX up 6. Start FCoE using: # service fcoe start The FCoE device should appear shortly, assuming all other settings on the fabric are correct.
Procedure 25.5. Configure FCoE target
1. Setting up an FCoE target requires the installation of the fcoe-target-utils package, along with its dependencies.
# yum install fcoe-target-utils
2. FCoE target support is based on the LIO kernel target and does not require a userspace daemon.
/> tcm_fc/ create 00:11:22:33:44:55:66:77
If FCoE interfaces are present on the system, tab-completing after create will list available interfaces. If not, ensure fcoeadm -i shows active interfaces.
6. Map a backstore to the target instance.

Example 25.7. Example of mapping a backstore to the target instance
/> cd tcm_fc/00:11:22:33:44:55:66:77
/> luns/ create /backstores/fileio/example2

7. Allow access to the LUN from an FCoE initiator.
mount_fcoe_disks_from_fstab()
{
    local timeout=20
    local done=1
    local fcoe_disks=($(egrep 'by-path\/fc-.*_netdev' /etc/fstab | cut -d ' ' -f1))

    test -z $fcoe_disks && return 0

    echo -n "Waiting for fcoe disks . "
    while [ $timeout -gt 0 ]; do
        for disk in ${fcoe_disks[*]}; do
            if ! test -b $disk; then
                done=0
                break
            fi
        done

        test $done -eq 1 && break;
        sleep 1
        echo -n ".
Chapt er 2 5. O nline St orage Management 25.9. Scanning St orage Int erconnect s There are several commands available that allow you to reset and/or scan one or more interconnects, potentially adding and removing multiple devices in one operation. This type of scan can be disruptive, as it can cause delays while I/O operations timeout, and remove devices unexpectedly. As such, Red Hat recommends that this type of scan be used only when necessary.
The default iSCSI configuration file is /etc/iscsi/iscsid.conf. This file contains iSCSI settings used by iscsid and iscsiadm.

During target discovery, the iscsiadm tool uses the settings in /etc/iscsi/iscsid.conf to create two types of records:

Node records in /var/lib/iscsi/nodes
When logging into a target, iscsiadm uses the settings in this file.
Chapt er 2 5. O nline St orage Management $ ping -I ethX target_IP If pi ng fails, then you will not be able to bind a session to a NIC. If this is the case, check the network settings first. 25.11.1. Viewing Available iface Configurat ions From Red Hat Enterprise Linux 5.5 iSCSI offload and interface binding is supported for the following iSCSI initiator implementations: Software iSCSI — like the scsi _tcp and i b_i ser modules, this stack allocates an iSCSI host instance (i.e.
Red Hat Ent erprise Linux 6 St orage Administ rat ion G uide For software iSCSI, each i face configuration must have a unique name (with less than 65 characters). The i face_name for network devices that support offloading appears in the format transport_name. hardware_name. Examp le 25.10.
# iscsiadm -m iface -I iface_name --op=update -n iface.setting -v hw_address

Example 25.12. Set MAC address of iface0
For example, to set the MAC address (hardware_address) of iface0 to 00:0F:1F:92:6B:BF, run:

# iscsiadm -m iface -I iface0 --op=update -n iface.hwaddress -v 00:0F:1F:92:6B:BF

Warning
Do not use default or iser as iface names. Both strings are special values used by iscsiadm for backward compatibility.
Red Hat Ent erprise Linux 6 St orage Administ rat ion G uide This behavior was implemented for compatibility reasons. To override this, use the -I iface_name to specify which portal to bind to an i face, as in: # iscsiadm -m discovery -t st -p target_IP:port -I iface_name -P 1 [7] By default, the i scsi ad m utility will not automatically bind any portals to i face configurations that use offloading. This is because such i face configurations will not have i face. transpo rt set to tcp.
The output will appear in the following format:

target_IP:port,target_portal_group_tag proper_target_name

Example 25.14. Using iscsiadm to issue a sendtargets command
For example, on a target with a proper_target_name of iqn.1992-08.com.netapp:sn.33615311 and a target_IP:port of 10.15.85.19:3260, the output may appear as:

10.15.84.19:3260,2 iqn.1992-08.com.netapp:sn.33615311
10.15.85.19:3260,3 iqn.1992-08.com.netapp:sn.
If your device supports multiple targets, you will need to issue a sendtargets command to the hosts to find new portals for each target. Then, rescan existing sessions to discover new logical units on existing sessions (i.e. using the --rescan option).

Important
The sendtargets command used to retrieve --targetname and --portal values overwrites the contents of the /var/lib/iscsi/nodes database.
Example 25.17. Full iscsiadm command
Using our previous example (where proper_target_name is equallogic-iscsi1), the full command would be:

# iscsiadm --mode node --targetname \
iqn.2001-05.com.equallogic:68a0900-ac3fe0101-63aff113e344a4a2-dl585-03-1 \
--portal 10.16.41.155:3260,0 --login [10]

25.13. Logging in to an iSCSI Target

As mentioned in Section 25.2, "iSCSI", the iSCSI service must be running in order to discover or log into targets.
Red Hat Ent erprise Linux 6 St orage Administ rat ion G uide In most cases, fully resizing an online logical unit involves two things: resizing the logical unit itself and reflecting the size change in the corresponding multipath device (if multipathing is enabled on the system). To resize the online logical unit, start by modifying the logical unit size through the array management interface of your storage device.
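Once the array reports the new size, the operating system must re-read the capacity on each path to the logical unit; a hedged sketch for a Fibre Channel path, with the device name sdb assumed:

echo 1 > /sys/block/sdb/device/rescan    # ask the SCSI layer to re-read this path's capacity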
Note
You can also re-scan iSCSI logical units using the following command:

# iscsiadm -m node -R -I interface

Replace interface with the corresponding interface name of the resized logical unit (for example, iface0). This command performs two operations:
It scans for new devices in the same way that the command echo "- - -" > /sys/class/scsi_host/host/scan does (refer to Section 25.12, "Scanning iSCSI Interconnects").
It
1. Dump the device mapper table for the multipathed device using:
dmsetup table multipath_device
2. Save the dumped device mapper table as table_name. This table will be re-loaded and edited later.
3. Examine the device mapper table. Note that the first two numbers in each line correspond to the start sector and the length (in sectors) of the segment, respectively.
4. Suspend the device mapper target:
dmsetup suspend multipath_device
5.
Chapt er 2 5. O nline St orage Management To change the R/W state, use the following procedure: Pro ced u re 25.7. C h an g e t h e R /W st at e 1. To move the device from RO to R/W, see step 2. To move the device from R/W to RO, ensure no further writes will be issued. D o this by stopping the application, or through the use of an appropriate, application-specific action.
# multipath -r

The multipath -ll command can then be used to confirm the change.

25.14.4.3. Documentation
Further information can be found in the Red Hat Knowledgebase. To access this, navigate to https://www.redhat.com/wapps/sso/login.html?redirect=https://access.redhat.com/knowledge/ and log in. Then access the article at https://access.redhat.com/kb/docs/DOC-32850.

25.15.
Procedure 25.8. Determining The State of a Remote Port
1. To determine the state of a remote port, run the following command:
$ cat /sys/class/fc_remote_ports/rport-H:B-R/port_state
2. This command will return Blocked when the remote port (along with devices accessed through it) is blocked. If the remote port is operating normally, the command will return Online.
3.
When dm-multipath is being used, the SCSI layer will fail those running commands and defer them to the multipath layer. The multipath layer then retries those commands on another path. If dm-multipath is not being used, those commands are retried five times before failing altogether.

Intervals between NOP-Out requests are 10 seconds by default. To adjust this, open /etc/iscsi/iscsid.conf and edit the following line:

node.conn[0].
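As a hedged illustration of the NOP-Out tunables in /etc/iscsi/iscsid.conf (the values shown are examples, not recommendations):

node.conn[0].timeo.noop_out_interval = 10    # send a NOP-Out request to the target every 10 seconds
node.conn[0].timeo.noop_out_timeout = 10     # treat the request as failed after 10 seconds with no response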
Important
Whether your considerations are failover speed or security, the recommended value for replacement_timeout will depend on other factors. These factors include the network, target, and system workload. As such, it is recommended that you thoroughly test any new replacement_timeout configuration before applying it to a mission-critical system.

25.16.3.
Red Hat Ent erprise Linux 6 St orage Administ rat ion G uide The Linux SCSI layer sets a timer on each command. When this timer expires, the SCSI layer will quiesce the host bus adapter (HBA) and wait for all outstanding commands to either time out or complete. Afterwards, the SCSI layer will activate the driver's error handler. When the error handler is triggered, it attempts the following operations in order (until one successfully executes): 1. Abort the command. 2. Reset the device. 3. Reset the bus.
Procedure 25.10. Working Around Stale Logical Units
1. Determine which mpath link entries in /etc/lvm/cache/.cache are specific to the stale logical unit. To do this, run the following command:

$ ls -l /dev/mpath | grep stale-logical-unit

Example 25.19.
Red Hat Ent erprise Linux 6 St orage Administ rat ion G uide Chapter 26. Device Mapper Multipathing and Virtual Storage Red Hat Enterprise Linux 6 also supports DM-Multipath and virtual storage. Both features are documented in detail in the Red Hat books DM Multipath and Virtualization Administration Guide. 26.1.
(the cable, switch, or controller) fails, DM-Multipath switches to an alternate path.

Improved Performance
DM-Multipath can be configured in active/active mode, where I/O is spread over the paths in a round-robin fashion. In some configurations, DM-Multipath can detect loading on the I/O paths and dynamically re-balance the load.
Revision History

Revision 2-52    Wed Mar 25 2015    Jacquelynn East
Added ext backup and restore chapters
Revision 2-51    Version for 6.

Added fsck section BZ#904902.
Revision 2-27    Thu Sep 05 2013    Jacquelynn East
Edited Chapter 9: Network File System (NFS).
Revision 2-26    Mon Sep 02 2013    Jacquelynn East
Edited Chapter 5: The Ext3 File System; Chapter 6: The Ext4 File System; Chapter 7: Global File System 2; Chapter 8: The XFS File System.

BZ#894697 Updated sections regarding FCoE.
Revision 2-4    Mon Jan 14 2013    Jacquelynn East
BZ#894891 As pNFS is coming out of tech preview status, all references to this were removed.
Revision 2-3    Fri Oct 19 2012    Jacquelynn East
BZ#846498 Copy section from Performance Tuning Guide to File System Structure.
Revision 2-1    Fri Oct 19 2012
Branched for 6.4 Beta. Created new edition based on significant structural reordering.