HP D2D Backup Systems
Best practices for VTL, NAS and Replication implementations

Table of contents
Abstract
Related products
Validity
Fibre Channel configuration
  Fibre Channel topologies
  Switched fabric
  Direct Attach (Private Loop)
Seeding and why it is required
Seeding methods in more detail
Replication and other D2D operations
Replication Monitoring
Abstract
The HP StorageWorks D2D Backup System products with Dynamic Data Deduplication are Virtual Tape Library and NAS appliances designed to provide a cost-effective, consolidated backup solution for business data and fast restore of data in the event of loss. This document describes the configuration best practices that can be applied to get the best performance from a D2D Backup System.
Executive summary
This document contains detailed information on best practices to get good performance from an HP D2D Backup System with HP StoreOnce Deduplication Technology. HP StoreOnce Technology is designed to increase the amount of historical backup data that can be stored without increasing the disk space needed. A backup product using deduplication combines efficient disk usage with the fast single file recovery of random access disk.
NAS best practices at a glance
- Configure multiple shares and separate data types into their own shares.
- Adhere to the suggested maximum number of concurrent operations per share/appliance.
- Choose disk backup file sizes in the backup software to meet the maximum backup size.
- Disable software compression, deduplication and synthetic full backups.
- Do not pre-allocate disk space for backup files.
HP StoreOnce Technology
A basic understanding of the way that HP StoreOnce Technology works is necessary in order to understand factors that may impact performance of the overall system and to ensure optimal performance of your backup solution. HP StoreOnce Technology is an “inline” data deduplication process. It uses hash-based chunking technology, which analyzes incoming backup data in “chunks” averaging approximately 4 KB in size.
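To make the chunking and matching idea concrete, below is a minimal, illustrative Python sketch of hash-based chunking and deduplication. It is not the StoreOnce implementation: it uses fixed-size 4 KB chunks and SHA-256 fingerprints for simplicity, whereas StoreOnce uses variable-size chunks averaging around 4 KB.

```python
# Illustrative sketch of hash-based chunking and deduplication (not HP code).
import hashlib

CHUNK_SIZE = 4 * 1024  # fixed 4 KB chunks for simplicity


def deduplicate(data: bytes, store: dict) -> list:
    """Split data into chunks and store only chunks not already seen.

    Returns the list of chunk hashes (the 'recipe' needed to rebuild the data).
    """
    recipe = []
    for offset in range(0, len(data), CHUNK_SIZE):
        chunk = data[offset:offset + CHUNK_SIZE]
        digest = hashlib.sha256(chunk).hexdigest()
        if digest not in store:      # new unique chunk: store the data
            store[digest] = chunk
        recipe.append(digest)        # duplicate chunk: store a reference only
    return recipe


def rehydrate(recipe: list, store: dict) -> bytes:
    """Rebuild the original byte stream from the chunk store."""
    return b"".join(store[digest] for digest in recipe)


store = {}
backup1 = b"A" * 8192 + b"B" * 8192   # first backup
backup2 = b"A" * 8192 + b"C" * 8192   # second backup, first half unchanged
recipe1 = deduplicate(backup1, store)
recipe2 = deduplicate(backup2, store)
print(len(store))                     # 3 unique chunks stored, not 8
assert rehydrate(recipe2, store) == backup2
```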
Appended backups need to “clone” the cartridge on the target side, so performance of appended tape replication will not be significantly faster than replicating the whole cartridge. If a lot of similar data exists on remote office D2D libraries, replicating these into a single target library will give a better deduplication ratio on the target D2D Backup System. Replication starts when the cartridge is unloaded or a NAS share file is closed after writing is complete, or when a replication window is enabled.
While backup or restore jobs are running, housekeeping applies a hold-off to prevent impacting the performance of other operations. It is, however, important to note that the hold-off is not binary (i.e. on or off); even if backup jobs are in process, some low level of housekeeping will still take place, which may have a slight impact on backup performance. Housekeeping is an important process in order to maximize the deduplication efficiency of the appliance and, as such, it is important to ensure that it has enough time to complete.
The following graph illustrates only the relationship between the number of active data streams and performance. It is not based on real data.

Data compression and encryption backup application features
Both software compression and encryption will randomize the source data and will, therefore, not result in a high deduplication ratio for these data sources. Consequently, performance will also suffer. The D2D appliance will compress the data at the end of deduplication processing anyway.
Network configuration
All D2D appliances have two 1Gbit Ethernet ports; the D2D4312 and D2D4324 appliances also have two 10Gbit Ethernet ports. The Ethernet ports are used for data transfer to iSCSI VTL devices and CIFS/NFS shares, and also for management access to the Web Management Interface.
Single Port mode
1Gbit ports: use this mode only if no other ports are available on the switch network, or if the appliance is being used to transfer data over Fibre Channel ports only. On an HP D2D4312 or D2D4324 with 10Gbit ports, a single 10Gbit port is likely to deliver good performance in most environments.
Dual Port mode
Use this mode if:
- Servers to be backed up are split across two physical networks which need independent access to the D2D appliance. In this case virtual libraries and shares will be available on both network ports; the host configuration defines which port is used.
- Separate data (“Network SAN”) and management LANs are being used, i.e. each server has a port for business network traffic and another for data backup.
High availability port mode (Port failover)
This mode sets up “bonded” network ports, where both network ports are connected to the same physical switch and behave as one network port. No special switch configuration is required other than to ensure that both Ethernet ports in the pair from the D2D appliance are connected to the same switch. This mode provides some level of load balancing across the ports but generally only provides port failover.
10Gbit Ethernet ports on the 4312/4324 appliances
10Gbit Ethernet is provided as a viable alternative to the Fibre Channel interface for providing maximum VTL performance and also comparable NAS performance. When using 10Gbit Ethernet it is common to configure a “Network SAN”, which is a dedicated network for backup that is separate from the normal business data network; only backup data is transmitted over this network.
Broadly there are two possible configurations which allow both:
- Access to the Active Directory server for AD authentication
- Separation of Corporate LAN and Network SAN traffic

Option 1: HP D2D Backup System on Corporate LAN and Network SAN
In this option, the D2D device has a port on the Corporate LAN which has access to the Active Directory Domain Controller. This link is then used to authenticate CIFS share access. The port(s) on the Network SAN are used to transfer the actual data.
Option 2: HP D2D Backup System on Network SAN only with Gateway
In this option the D2D has connections only to the Network SAN, but a network router or gateway server provides access to the Active Directory domain controller on the Corporate LAN. To ensure two-way communication between the Network SAN and the Corporate LAN, the Network SAN should be configured as a subnet of the Corporate LAN.
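Where Option 2 is used, it can be worth sanity-checking the addressing plan before deployment. The short sketch below is not an HP tool, and all addresses in it are hypothetical; it simply verifies that a planned Network SAN range sits inside the Corporate LAN range, as this option requires.

```python
# Check that a planned Network SAN is a subnet of the Corporate LAN.
# All addresses are hypothetical examples.
import ipaddress

corporate_lan = ipaddress.ip_network("10.10.0.0/16")
network_san = ipaddress.ip_network("10.10.20.0/24")

if network_san.subnet_of(corporate_lan):
    print(f"{network_san} is part of {corporate_lan}: two-way routing possible")
else:
    print("Network SAN is NOT a subnet of the Corporate LAN; "
          "access to the AD domain controller may fail")
```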
Backup server networking
It is important to consider the whole network when assessing backup performance. Any server acting as a backup server should, where possible, be configured with multiple network ports that are teamed/bonded in order to provide a fast connection to the LAN. Client servers (those that back up via a backup server) may be connected with only a single port, if backups are to be aggregated through the backup server.
Fibre Channel configuration
Fibre Channel topologies
The D2D appliances support both switched fabric and direct attach (private loop) topologies. Direct Attach (point to point) topology is not supported. Switched fabric using NPIV (N_Port ID Virtualization) offers a number of advantages and is the preferred topology for D2D appliances.
Direct Attach (Private Loop)
A direct attach (private loop) topology is implemented by connecting the D2D appliance ports directly to a Host Bus Adapter (HBA). In this configuration the Fibre Channel private loop protocol must be used.

Figure: Fibre Channel, direct attach (private loop) topology, showing an HP StorageWorks D2D Backup System presenting virtual libraries (e.g. D2D Generic), each with a medium changer and tape drives.
Zoning may not always be required for configurations that are already small or simple. Typically the larger the SAN, the more zoning is needed. Use the following guidelines to determine how and when to use zoning.
- Small fabric (16 ports or less): may not need zoning.
- Small to medium fabric (16-128 ports): use host-centric zoning. Host-centric zoning is implemented by creating a specific zone for each server or host, and adding only those storage elements to be utilized by that host.
The Configuration – Fibre Channel page of the Web Management Interface also shows the status of all the FC devices configured on the D2D appliance. It lists the connection state, port ID, port type and number of logins for each virtual library and drive connection. This page is mainly informational and is useful in troubleshooting.
A mixture of iSCSI and FC port virtual libraries and NAS shares can be configured on the same D2D appliance to balance performance needs.

Sizing solutions
The following diagram provides a simple sizing guide for the HP D2D Generation 2 product family, for backups and for backups plus replication.

Figure: HP D2D Backup System Gen 2 sizing guide. Daily backup: typical amount of data that can be protected by HP StoreOnce Backup systems.
The use of this tool enables accurate capacity sizing, retention period decisions and replication link sizing and performance for the most complex D2D environments. A fully worked example using the Sizing Tool and best practices is contained later in the document, see Appendix B.
VTL best practices
Summary of best practices
- Tape drive emulation types have no effect on performance or functionality.
- Configuring multiple tape drives per library enables multi-streaming operation per library for good aggregate performance.
- Do not exceed the recommended maximum concurrent backup streams per library and appliance if maximum performance is required. See Appendix A.
- Target the backup jobs to run simultaneously across multiple drives within the library and across multiple libraries.
The number of drives and cartridges that can be configured per library has also increased compared to G1 products. The table below lists the key parameters for both G1 and G2 products. To achieve best performance, the recommended maximum concurrent backup streams per library and appliance in the table should be followed. As an example, while it is possible to configure 40 drives per library on a 4312 appliance, for best performance no more than 12 of these drives should be actively writing or reading at any one time.
A similar limitation exists for Fibre Channel. Although there is a theoretical limit of 255 devices per FC port on a host or switch, the actual limit appears to be 128 for many switches and HBAs. You should either balance drives across FC ports or configure fewer than 128 drives per library. Some backup applications will deliver less than optimum performance if managing many concurrent backup tape drives/streams. Balancing the load across multiple backup application media servers can help here.
Some minor setting changes to upstream infrastructure might be required to allow backups with greater than 256 KB block size to be performed. For example, Microsoft’s iSCSI initiator implementation, by default, does not allow block sizes that are greater than 256 KB.
Overwrite versus append of media
Overwriting and appending to cartridges is another area where virtual tape has a benefit. With physical media it is often sensible to append multiple backup jobs to a single cartridge in order to reduce media costs; the downside of this is that cartridges cannot be overwritten until the retention policy for the last backup on that cartridge has expired.
D2D NAS best practices
Introduction to D2D NAS backup targets
The HP StorageWorks D2D Backup System now supports the ability to create a NAS (CIFS or NFS) share to be used as a target for backup applications. The NAS shares provide data deduplication in order to make efficient use of the physical disk capacity when performing backup workloads. The D2D device is designed to be used for backup, not for primary storage or general purpose NAS (drag and drop storage).
Shares and deduplication stores
Each NAS share created on the D2D system has its own deduplication “store”; any data backed up to a share will be deduplicated against all of the other data in that store. There is no option to create non-deduplicating NAS shares, and there is no deduplication between different shares on the same D2D. Once a D2D CIFS share is created, subdirectories can be created via Explorer.
The open file limits in the table above do not guarantee that the D2D will perform optimally with this number of concurrent backups. Nor do they take into account the fact that host systems may report a file as closed before the actual close takes place, which means that the limits in the table could be exceeded without the user realizing it. Should the open file limit be exceeded, an entry is made in the D2D Event Log so that the user knows this has happened.
If a write-in-place operation does occur, the D2D will create a new backup item that is not deduplicated; a pointer to this new item is then created so that, when the file is read, the new write-in-place item will be accessed instead of the original data within the backup file.

Figure: backup size of data file with write-in-place item.

If a backup application were to perform a large number of write-in-place operations, there would be an impact on backup performance.
Figure: backup job time, assuming no housekeeping or replication windows are set.

Disk space pre-allocation
Some backup applications allow the user to choose whether to “pre-allocate” the disk space for each file at creation time, i.e. as soon as a backup file is created, an empty file is created of the maximum size that the backup file can reach. This is done to ensure that there is enough disk space available to write the entire backup file.
Configuring backups as shown below ensures that multiple backups or streams can run concurrently whilst remaining within the concurrent file limits for each D2D share.

Figures: multiple servers, single stream backup; multiple servers, multi-stream backup; multiple servers, multiple single-stream backups.

The table below shows the recommended maximum number of backup streams or jobs per share to ensure that backups will not fail due to exceeding the maximum number of concurrently open files.
If backing up using application agents (e.g. Exchange, SQL, Oracle) it is recommended that only one backup per share is run concurrently because these application agents frequently open more concurrent files than standard file type backups.
Verify
By default most backup applications will perform a verify pass on each backup job, in which they read the backup data from the D2D and check it against the original data. Due to the nature of deduplication, the process of reading data is slower than writing because data needs to be rehydrated. Thus running a verify will more than double the overall backup time. If possible, verify should be disabled for all backup jobs to the D2D.
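A rough back-of-envelope calculation illustrates the cost of verify. The throughput figures below are assumptions for illustration only, not measured D2D numbers; the point is simply that the rehydrating read pass takes longer than the write pass it follows.

```python
# Illustration of verify cost with assumed (not measured) throughput figures.
backup_size_gb = 400
write_speed_gb_per_hr = 100   # assumed deduplicating write throughput
read_speed_gb_per_hr = 70     # assumed rehydrating read throughput (slower)

write_time = backup_size_gb / write_speed_gb_per_hr
verify_time = backup_size_gb / read_speed_gb_per_hr
total = write_time + verify_time
print(f"backup only: {write_time:.1f} h")
print(f"with verify: {total:.1f} h ({total / write_time:.1f}x the backup time)")
```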
Housekeeping remains an important part of the data deduplication solution, and enough time must be allowed for it to complete in order to make best use of available storage capacity. See Housekeeping monitoring and control on page 71 for more information on best practices for applying blackout windows and how to monitor the housekeeping load.
A local user with the same username and password must be created on the media server that will be using the D2D CIFS share. The backup application services must be configured to run as the local user (how this is configured varies by backup application). The best practice when using User authentication is to create a “backup” user account on both the D2D and all application media servers. This user should then be used to log in to the media server computer and to administer the backup application.
1. Create a new Host(A) record in the forward lookup zone for the domain to which the D2D belongs with the hostname and IP address of the D2D. Click Add Host. 2. Also create a Pointer(PTR) in the reverse lookup zone for the domain for the D2D appliance by providing hostname and IP address. Click OK.
Now that the D2D is a member of the domain its shares can be managed from any computer on the domain by configuring a customized Microsoft Management Console (MMC) with the Shared Folders snap-in. Once you have created shares you can manage them as follows. 1. Open a new MMC window by typing mmc at the command prompt or from the start search box. This will launch a new empty MMC window. 2. To this empty MMC window add the Shared Folders snap-in. Select File – Add/Remove Snap-in ...
3. Click Add. In the dialog box choose the computer to be managed and select Shares from the View options.
4. Finally select Finish and OK to complete the snap-in set-up. Note that the Folder Path field contains an internal path on the D2D Backup System.
5. Save this customized snap-in for future use.
6. Double click a share name in the right-hand pane and select the Permissions tab. Add a user or group of users from the domain. Specify the level of permission that the users will receive and click Apply.

Leaving an AD domain
The user may wish to leave an AD domain in order to:
- Temporarily leave and then rejoin the same domain
- Join a different AD Domain
- Put the D2D into either No Authentication or Local User Authentication modes.
VTL and NAS – Data source performance bottleneck identification
In many cases, backup and restore performance using the HP D2D Backup System is limited by factors outside the appliance itself: for example, the speed at which data can be transferred to and from the source disk system (the system being backed up), or the performance of the Ethernet or Fibre Channel SAN link from source to D2D.
The activity graph below shows the start of a Virtual Tape Write and the current throughput being achieved. The activity graph below shows the end of a Virtual Tape Write and the start of a Virtual Tape Read and the throughput achieved.
The activity graph below shows the end of a Virtual Tape Read.

How to use the D2D storage and deduplication ratio reporting metrics
D2D appliances with software at version 1.0.0 and 2.0.0 and later provide more detailed storage reporting and deduplication ratio metrics on the Web Management Interface. These indicate the storage and deduplication ratio for the overall appliance and on a per library and NAS share basis.
This example is from the Storage Reporting GUI page and shows the Disk Storage Capacity Growth for both User Data and Physical Data for the current week for the whole appliance as more backups have been sent to it during the week. This chart can also display this information for a month period. Deduplication Ratio and Daily and Weekly Change rate can also be selected as Data options. This example is also from the Storage Reporting GUI and looks at an individual virtual library.
D2D Replication
The HP StorageWorks D2D products provide deduplication-enabled, low bandwidth replication for both VTL and NAS devices. Replication enables data on a “replication source” D2D to be replicated to a “replication target” D2D system. Replication provides a point-in-time “mirror” of the data on the source D2D at a target D2D system on another site; this enables quick recovery from a disaster that has resulted in the loss of both the original and backup versions of the data on the source site.
Replication usage models
There are four main usage models for replication using D2D devices.
- Active/Passive – A D2D system at an alternate site is dedicated solely as a target for replication from a D2D at a primary location.
- Active/Active – Both D2D systems are backing up local data as well as receiving replicated data from each other.
- Many-to-One – A target D2D system at a data center is receiving replicated data from many other D2D systems at other locations.
Many to One configuration
N-way configuration
In most cases D2D VTL and D2D NAS replication behave the same. The only significant configuration difference is that VTL replication allows multiple source libraries to replicate into a single target library, whereas NAS mappings are 1:1; one replication target share may only receive data from a single replication source share. In both cases, source libraries or shares may only replicate into a single target.
Replication overview
What to replicate
D2D VTL replication allows for a subset of the cartridges within a library to be mapped for replication rather than the entire library (NAS replication does not allow this).
Appliance, library and share replication fan in/out
Each D2D model has a different level of support for the number of other D2D appliances that can be involved in replication mappings with it, and also for the number of libraries that may replicate into a single library on the device, as follows:
Max Appliance Fan out: the maximum number of target appliances that a source appliance can be paired with.
Max Appliance Fan in: the maximum number of source appliances that a target appliance can be paired with.
Max Library Fan in: the maximum number of source libraries that may replicate into a single target library.
Concurrent replication jobs
Each D2D model has a different maximum number of concurrently running replication jobs when it is acting as a source or target for replication. The table below shows these values. When many items are available for replication, this is the number of jobs that will be running at any one time; as soon as one item has finished replicating, another will start.
The bandwidth required for replication depends on several factors:
- Amount of data in each backup
- Data change per backup (deduplication ratio)
- Number of D2D systems replicating
- Number of concurrent replication jobs from each source
- Number of concurrent replication jobs to each target

As a general rule of thumb, however, a minimum bandwidth of 2 Mb/s per replication job should be allowed.
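As a simple, hedged illustration of this rule of thumb (this sketch is not the HP bandwidth limit calculator referred to later in this document), the following code estimates a minimum link size and the time needed to move a given amount of post-deduplication change data:

```python
# Rule-of-thumb link sizing sketch; all inputs are illustrative.
def min_link_mbps(concurrent_jobs: int) -> float:
    """Minimum WAN bandwidth at 2 Mb/s per concurrent replication job."""
    return 2.0 * concurrent_jobs


def replication_hours(changed_data_gb: float, link_mbps: float) -> float:
    """Hours to move the post-deduplication changed data over the link."""
    bits_to_send = changed_data_gb * 8e9          # 1 GB = 8e9 bits (decimal)
    return bits_to_send / (link_mbps * 1e6) / 3600


print(min_link_mbps(4))                       # 4 concurrent jobs -> 8.0 Mb/s
print(round(replication_hours(10, 8), 1))     # 10 GB of change at 8 Mb/s -> 2.8 h
```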
Summary of possible seeding methods and likely usage models

Technique: Seed over the WAN link
Best for: Active/Passive and Many-to-1 replication models with initial small volumes of backup data, or with gradual migration of larger backup volumes/jobs to D2D over time.

Technique: Co-location (seed over LAN)
Best for: Active/Passive, Active/Active and Many-to-1 replication models with significant volumes of data (> 1 TB) to seed quickly, where it would simply take too long to seed using a WAN link (> 5 days).
Seeding methods in more detail
Seeding over a WAN link
With this seeding method the final replication set-up (mappings) can be established immediately.
Active/Passive: with WAN seeding, the first backup is, in fact, the first wholesale replication.
Active/Active: with WAN seeding, the first backup at each location is, in fact, the first wholesale replication in each direction.
Many-to-One: with WAN seeding, the first backup is, in fact, the first wholesale replication from the many remote sites to the Target site. Care must be taken not to run too many replications simultaneously or the Target site may become overloaded. Stagger the seeding process from each remote site.
Co-location (seed over LAN)
With this seeding method it is important to define the replication set-up (mappings) in advance so that, in the Many-to-One example for instance, the correct mapping is established at each site the target D2D visits before the target D2D is finally shipped to the Data Center site and the replication is “re-established” for the final time.

Active/Passive
Co-location seeding at Source (remote) site:
1. Initial backup
2. Replication over GbE link
3. Ship appliance to Data Center site
4. Re-establish replication
Many to One
Co-location seeding at Source (remote) sites; transport the target D2D between remote sites.
1. Initial backup at each remote site
2. Replication to Target D2D over GbE at each remote site
3. Move Target D2D between remote sites and repeat replication
4. Finally take Target D2D to Data Center site
5. Re-establish replication
Floating D2D method of seeding
Many-to-One seeding with a floating D2D target, for large fan-in scenarios. Co-location seeding at Source (remote) sites: transport the floating target D2D between remote sites, then perform replication at the Data Center site. Repeat as necessary.
1. Initial backup at each remote site.
2. Replication to floating Target D2D over GbE at each remote site.
3. Move floating Target D2D between remote sites and repeat replication.
4. Take floating Target D2D to Data Center site.
2. At each remote site perform a full system backup to the source D2D and then configure a 1:1 mapping relationship with the floating D2D device, e.g. SVTL1 on Remote Site A to FTVTL1 on the floating D2D (FTVTL1 = floating target VTL1).
3. Seeding remote site A to the floating D2D will take place over the GbE link and may take several hours.
4. On the Source D2D at the remote site DELETE the replication mappings – this effectively isolates the data that is now on the floating D2D.
5. Move the floating D2D to the next remote site and repeat the process.
Seeding using physical tape or portable disk drive and ISV copy utilities
Many-to-one seeding using physical tape or portable disk drives:
1. Initial backup to D2D.
2. Copy to tape(s) or a disk using backup application software on the Media Server; for NAS devices, only use simple drag and drop to portable disk.
3. Ship tapes/disks to Data Center site.
4. Copy the data from the tapes/disks into the target D2D.
2. Use the backup application software to perform a full media copy of the contents of the D2D to a physical tape or removable disk backup target that is also attached to the media server. In the case of removable USB disk drives the capacity is probably limited to 2 TB; in the case of physical LTO5 media it is limited to about 3 TB per tape, but of course multiple tapes are supported if a tape library is available.
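Planning the number of pieces of offload media is simple arithmetic. The sketch below applies the capacity limits quoted above (about 2 TB per removable USB disk, about 3 TB per LTO5 tape) to a hypothetical 9 TB of data to be offloaded:

```python
# Offload media planning sketch; the 9 TB figure is hypothetical.
import math


def media_needed(data_tb: float, media_capacity_tb: float) -> int:
    """Whole pieces of media needed to hold the offloaded data."""
    return math.ceil(data_tb / media_capacity_tb)


data_to_offload_tb = 9
print(media_needed(data_to_offload_tb, 3))   # LTO5 tapes needed -> 3
print(media_needed(data_to_offload_tb, 2))   # 2 TB USB disks needed -> 5
```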
Replication and other D2D operations
In order to either optimize the performance of replication or minimize the impact of replication on other D2D operations, it is important to consider the complete workload being placed on the D2D. By default replication will start quickly after a backup completes; this window of time immediately after a backup may become very crowded if nothing is done to separate tasks.
A bandwidth limit calculator is supplied to assist with defining suitable limits.

Source Appliance Permissions
It is a good practice to use the Source Appliance Permissions functionality provided on the Replication Partner Appliances tab to prevent malicious or accidental configuration of replication mappings from unknown or unauthorized source appliances. See the D2D Backup System user guide for information on how to configure Source Appliance Permissions.
Replication Monitoring
The aim of replication is to ensure that data is “moved” offsite as quickly as possible after a backup job completes. The “maximum time to offsite” varies depending on business requirements. The D2D Backup System provides tools to help monitor replication performance and alert system administrators if requirements are not being met.
Replication Throughput totals
Whilst replication jobs are running, the Status – Source/Target Active Jobs pages show some detailed performance information averaged over several minutes. The following information is provided:
Source/Target jobs running: the number of replication jobs that this appliance is running concurrently.
Replication share/library details
Replication share/library details show the synchronization status, throughput and disk usage for each replicated device. This allows the system administrator to see the performance, bandwidth utilization and sync status of each share individually.

Replication File/Cartridge details
Replication File/Cartridge details show information about the last replication job to run on a specific cartridge or NAS file.
This is very useful to identify:
- Differences in bandwidth saving, and therefore deduplication ratio, for an individual cartridge or file. These can be directly correlated to backup jobs and allow the backup administrator to see the deduplication efficiency of specific data backups.
- Individual files or cartridges that are not being replicated. This might be because the backup application is leaving a cartridge loaded or a file open, which prevents replication from starting.
Housekeeping monitoring and control
Terminology
Housekeeping: if data is deleted from the D2D system (e.g. a virtual cartridge is overwritten or erased), any unused chunks will be marked for removal so that space can be freed up (space reclamation). The process of removing chunks of data is not an inline operation, because this would significantly impact performance. This process, termed “housekeeping”, runs on the appliance as a background operation.
By setting a housekeeping blackout window appropriately from 12:00 to 00:00 we can ensure the backups and replication run at maximum speed as can be seen below. The housekeeping is scheduled to run when the device is idle. However some tuning is required to determine how long to set the housekeeping windows and to do this we must use the D2D Web Management Interface and the reporting capabilities which we will now explain.
Overall section
This section shows the combined information from both the Libraries and Shares sections. The key features within this section are:
Housekeeping Statistics: Status has three options:
- OK if housekeeping has been idle within the last 24 hours
- Warning if housekeeping has been processing nonstop for the last 24 hours
- Caution if housekeeping has been processing nonstop for the last 7 days
Last Idle is the date and time when the housekeeping processing was last idle.
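The status logic described above can be expressed in a few lines. The sketch below is illustrative only, not D2D source code; it simply encodes the OK/Warning/Caution thresholds listed above.

```python
# Illustrative encoding of the housekeeping Status thresholds (not D2D code).
from datetime import datetime, timedelta


def housekeeping_status(last_idle: datetime, now: datetime) -> str:
    if now - last_idle < timedelta(hours=24):
        return "OK"        # idle within the last 24 hours
    if now - last_idle < timedelta(days=7):
        return "Warning"   # processing nonstop for the last 24 hours
    return "Caution"       # processing nonstop for the last 7 days


now = datetime(2011, 6, 1, 12, 0)
print(housekeeping_status(datetime(2011, 5, 31, 20, 0), now))  # OK
print(housekeeping_status(datetime(2011, 5, 29, 12, 0), now))  # Warning
print(housekeeping_status(datetime(2011, 5, 20, 12, 0), now))  # Caution
```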
c) Restructure the backup regime to remove appends: the bigger the tapes or files are allowed to grow (through appends), the more housekeeping they generate.
d) Increase the time allowed for housekeeping to run, if housekeeping blackout windows are set.
e) Re-schedule backup jobs to try to ensure all backup jobs complete at the same time, so that housekeeping starts at roughly the same time (if no housekeeping window is set).
Tape Offload
Terminology
Direct Tape Offload: this is when a physical tape library device is connected directly to the rear of the D2D Generation 1 products (D2D2503, 4004, 4009, 2502A, 2504A, 4112A, which are now obsolete) using a SAS host bus adapter. The D2D device itself manages the transfer of data from the D2D to physical tape, and the transfer is not visible to the main backup software. Only transfer of data on VTL devices in the D2D is possible using this method.
Tape Offload/Copy from D2D versus Mirrored Backup from Data Source
A summary of the supported methods is shown below.
Note: Target Offload can vary from one backup application to another in terms of import functionality. Please check with your vendor.

Backup application tape offload at D2D source site
1. Copy D2D to physical tape: this uses a backup application Copy job to copy data from D2D to physical tape. It is easy to automate and schedule, but has slower copy performance.
2. Mirrored backup: a specific backup policy is used to back up to D2D and physical tape simultaneously (mirrored write) at certain times (monthly).
Key performance factors in Tape Offload performance
Note in the diagram below how the read performance from a D2D4312 (red line) increases with the number of read streams, just as with backup. If the D2D4312 reads with a single stream (to physical tape) the copy rate is about 370 GB/hour. However, if the copy jobs are configured to use multiple readers and multiple writers then, for example, with four streams being read it is possible to achieve 1.3 TB/hour copy performance.
2. For “Media Copies” it is always best to try to match the D2D VTL cartridge size with the physical media cartridge size to avoid wastage. For example, if using physical LTO4 drives (800 GB tapes), then when configuring D2D Virtual Tape Libraries the D2D cartridge size should also be configured to 800 GB.
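For planning purposes, the throughput figures quoted above can be turned into a simple copy-time estimate. In the sketch below, the single-stream and four-stream rates come from the text; the two- and three-stream rates are interpolated assumptions, and the 2.6 TB copy size is hypothetical.

```python
# Copy-time estimate; 1- and 4-stream rates from the text, others assumed.
throughput_gb_per_hr = {1: 370, 2: 700, 3: 1000, 4: 1300}


def offload_hours(data_gb: float, streams: int) -> float:
    """Estimated hours to copy data_gb using the given number of read streams."""
    return data_gb / throughput_gb_per_hr[streams]


print(f"{offload_hours(2600, 1):.1f} h with 1 read stream")   # ~7.0 h
print(f"{offload_hours(2600, 4):.1f} h with 4 read streams")  # 2.0 h
```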
Appendix A
Key reference information
D2D Generation 2 products, software 2.1.00
Models: D2D2502i, D2D2504i, D2D4106i/fc, D2D4112fc, D2D4312fc, D2D4324fc
Usable Disk Capacity (TB): 1.
D2D Generation 1 products, software 1.1.00
Models: D2D2502i, D2D2503i, D2D2504i, D2D4004i/fc, D2D4009i/fc, D2D4112fc
Usable Disk Capacity (TB): 1.5, 2.25, 3, 7.5, 7.
Appendix B – Fully Worked Example
In this section we will work through a complete multi-site, multi-region D2D design, configuration and deployment tuning.
Backup requirements specification
Remote Sites A/D – NAS emulations required:
- Server 1 – Filesystem 1, 100 GB, spread across 3 mount points
- Server 2 – SQL data, 100 GB
- Server 3 – Filesystem 2, 100 GB, spread across 2 mount points
- Server 4 – Special App Data, 100 GB
Rotation scheme: weekly fulls, 10% incrementals during the week; keep 4 weeks of fulls and 1 monthly backup. 12-hour backup window.

Remote Sites B/C – iSCSI VTL emulations required:
- Server 1 – Filesystem, 200 GB, spread across 2 mount points (C, D)
- Server 2 – SQL data
Using the HP StorageWorks Backup sizing tool
Configure replication environment
Click on Backup calculators and then Design D2D/VLS replication over WAN to get started.
1. Configure the replication environment for 4 source appliances to 1 target appliance, commonly known as Many-to-One replication. The replication window allowed is 12 hours; the size of the target device is initially based on capacity.
The number of parallel backup streams will determine overall throughput. The backup specification says Filesystem 1 has 3 mount points, which allows us to run 3 parallel backup streams. Sites A and D are identical, so we can specify 2 identical sites. It is very important that, when you are creating the backup specifications in the Sizer tool, you pay particular attention to the field “Number of parallel Backup Streams”.
In the case of sites A and D, when we enter all the backup jobs, we will have seven backup jobs running in parallel, which will give us the best throughput and backup performance. Site A Filesystem 1 uses an Incrementals & Fulls backup scheme. The daily change rate parameter is the block change rate of the data per day and, along with the retention period, it determines the dedupe ratio achieved and the amount of data to be replicated. The default is 2%; for dynamic change environments, increase this number.
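A deliberately simplified capacity model shows how the change rate and retention period interact. This is not the Sizer's algorithm, and every number in it is illustrative; it only demonstrates why the change rate drives both the deduplicated store growth and the daily replication volume.

```python
# Simplified capacity model (NOT the HP Sizer's algorithm); numbers illustrative.
def stored_and_replicated(full_gb: float, daily_change_rate: float,
                          weeks_retained: int,
                          first_full_dedupe: float = 2.0):
    """Return (approx GB stored over retention, approx GB replicated per day)."""
    first_full = full_gb / first_full_dedupe   # dedupe within the first full
    daily_delta = full_gb * daily_change_rate  # new unique data per day
    stored = first_full + daily_delta * 7 * weeks_retained
    return stored, daily_delta


stored, delta = stored_and_replicated(full_gb=400, daily_change_rate=0.02,
                                      weeks_retained=4)
print(f"~{stored:.0f} GB stored, ~{delta:.0f} GB replicated per day")
# -> ~424 GB stored, ~8 GB replicated per day
```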
4. As you specify each job in turn click Add job and the job will be loaded to the summary table (see below).
5. Add all backup jobs for Sites A and D. Please note that, in line with the customer request, at sites A and D the D2D emulation has been selected as NAS Emulation with CIFS shares.
6. Repeat for Sites B and C.
7. Input backup job entries for Site E, which requires full backups every day for 29 days and is also required to have FC attach, so click FC in the System interface area. The rotation scheme for Site E is Fulls & Fulls.
We will retain 29 days of Fulls.
8. Press the Solve/Submit button and the Sizer will do the rest.
Sizer output
The Sizer creates two outputs. It creates an Excel spreadsheet with all the parts required for the solution, including Service and Support and any licenses required, together with the list pricing. It also creates a solution overview (see below) which indicates the types of devices to be used at source and target, the amount of data to be replicated to and stored on the target, and the link speeds at source and target for the specified replication window.
Figure: Source and target link sizes required; amount of data in GB transmitted source to target, worst case (fulls).

The Sizer has also established that each source needs a 4.6 or 4.47 Mbit/sec link, whilst the target needs a link size of just over 9 Mbit/sec.
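The link-speed outputs can be sanity-checked with simple arithmetic. The formula below is general; the 25 GB input is a hypothetical worst-case (full backup) amount of post-deduplication data transmitted per source within the 12-hour window, chosen only to show how a figure like 4.6 Mbit/sec arises.

```python
# Link-speed sanity check; the 25 GB input is hypothetical.
def link_mbit_per_sec(data_gb: float, window_hours: float) -> float:
    """Mbit/s needed to move data_gb within window_hours (1 GB = 8000 Mbit)."""
    return data_gb * 8000 / (window_hours * 3600)


print(f"{link_mbit_per_sec(25, 12):.2f} Mbit/s")  # ~4.63, close to the 4.6 above
```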
These are the jobs that were input previously.

Refining the configuration
In this worked example it is crucial that we have as many jobs replicating to the target simultaneously as possible.
1. Use a feature in the Sizer to force the target device to be the next model upwards: an HP D2D4106, which has an increased replication concurrency of 24 when used as a target. It should also be noted that the HP D2D2502 units on sites A, B, C and D have a maximum source replication concurrency of four.
2. Click Solve/Submit again. 3. A new parts list is generated with HP D2D4106 as the target, along with an HP D2D4106 replication license for the target.
Better replication efficiency at the target means lower link speeds can be used; the target can now handle the maximum replication concurrency of the sources with room to spare. Note how, because the replication is now more efficient, we only need just over 2 Mbit/sec WAN links on each of the sources.

Configure D2D source devices and replication target configuration
Sites A and D
The customer has already told us he wants NAS emulation at sites A and D.
On sites A and D the D2D units would be configured with four NAS shares (one for each server), the filesystem servers would be configured with subdirectories for each of the mount points. These subdirectories can be created on the D2D NAS CIFS share by using Windows Explorer (e.g. Dir1, Dir2, Dir3) and the backup jobs can be configured separately as shown below, but all run in parallel.
The final total source and target configuration is shown below.

Example NAS and VTL configurations

Map out the interaction of backup, housekeeping and replication for sources and target
With HP D2D Backup Systems it is important to understand that the device cannot do everything at once; it is best to think of “windows” of activity. Ideally, at any one time, the device should be either receiving backups, replicating or housekeeping. However this is only possible with some careful tuning and analysis.
Overlapping backups to minimize housekeeping interference
Source 1 – Bad scheduling: as backup DIR 1 finishes it triggers housekeeping, which then impacts the performance of the backup on DIR 2.
Source 2 – Good scheduling: if backup jobs can be scheduled to complete at the same time, the impact of housekeeping on backup performance will be greatly reduced.
The HP D2D Backup Systems have the ability to set blackout windows for replication, when no replication will take place; this is deliberate, in order to ensure that replication does not interfere with backups or housekeeping.
Tune the solution using replication windows and housekeeping windows
The objective of this section is to allow the solution architect to design for predictable behavior and performance. Predictable configurations may not always give the fastest time to complete, but in the long run they will prevent unexpected performance degradation due to unexpected overlaps of activities.
Worked example – backup, replication and housekeeping overlaps
Because we have sized the sources as D2D2502 units, there is a limit of 4 concurrent source replication jobs at any one time. This simulation is valid for code versions 2.1 and above, which use container matching technology and improved housekeeping performance.
Initial configuration with replication blackout window set
There is improvement in some backup job performance, e.g. Share 1 DIR2 and Share 2 SQL data, but replication jobs can only run 4 at a time (the 2502 concurrent source replication limit). The addition of a replication window allows us to force replication activities to happen only outside of the backup window. Housekeeping still happens when backup is complete and cannot be stopped. Using V2.
Target initial configuration
Some effort is required to map all the activity at the target, but it is clear that, between 20:00 and 02:00, the target has a very heavy load because local backups, replication jobs from sites A and D, and housekeeping associated with replication jobs from sites B and C are all running at the same time.

Target improved configuration
Consider improving the situation by imposing two housekeeping windows on the target device as shown below.
Offload to Tape requirement
In this example the customer wants to know: “What is the best practice to make monthly copies to physical tape from Site E?” One fundamental issue associated with the deduplication process used on D2D is that the data is “chunked” into nominal 4K pieces before it is stored on disk.
Avoiding bad practices
The worked example describes the best practices. Typical bad practices are:

Bad practice: Not using the Sizing tool.
Results: Incorrect models chosen because of wrong throughput calculations; replication link sizing incorrect.

Bad practice: Insufficient backup streams configured to run in parallel.
Results: Poor backup performance, poor replication performance.

Bad practice: Using a single dedupe store (device) instead of separate stores (devices) for different data types.
Appendix C
HP Data Protector Tape Offload – Worked Examples
HP Data Protector has an extensive range of Copy processes. Here we will look at how to offload both D2D Virtual Tape Libraries and D2D NAS shares to physical tape. Similar processes exist for all the major backup applications.

A note on terminology
Media Copy: this is a byte-by-byte copy, but it can be wasteful of physical tape media as appending and consolidation are not possible.
In this example the following storage devices are configured on HP Data Protector Cell Manager “zen”:
- HP D2DMSL: a Virtual Tape emulation on the D2D Backup System with 24 virtual slots (with virtual barcodes) and 1 virtual LTO5 drive.
- A physical MSL Tape Library configured with 2 x LTO5 drives and 24 slots, with only two pieces of LTO5 media with physical barcodes loaded.
HP Data Protector has a context window for controlling Object operations, as can be seen below.

Full media copy (e.g. 50 GB on D2D virtual media copied to 800 GB of LTO5 physical media)
The copy process can happen immediately after backup or can be scheduled; for D2D, scheduling copies is the preferred option.

To perform a simple media copy
1. Right click on the media in the D2D Library in slot 1 and, in the right-hand navigation pane, select the target for the copy to be the physical library slot 1.
2. Select the default parameters for the copy. It is important for base media copies that both the primary copy and the secondary copy media are of the same format in terms of block size, etc., as many backup applications cannot reformat “on the fly”.
3. The media copy is shown as successful.

To perform an interactive object copy, VTL
1. Select Objects in the left-hand navigation pane. We have chosen to copy the last backups of the server Zen.
2. Click Next and, depending on what backup objects have been selected, HP Data Protector will check that all the necessary source devices (that wrote the backups) are available. Click Next. 3. Select the target (copy to) device and the drive to be used and click Next. Here we have chosen LTO5 drive 2 on the physical MSL Tape Library. 4. You now have the option to change the protection limits on the copy and eject physical tape copy to a mailslot (if the copy is to be stored offsite).
5. Select one or more media depending on the objects that are to be copied. Select Next to display the Summary screen and click Finish to start the object copy.
6. These screens show the Object copy in progress (read device and write device) from the D2D Backup System to the physical LTO-5 media.
To perform an interactive object copy, D2D NAS share
1. Select Objects in the left-hand navigation pane and locate the D2D NAS share (this object was backed up to a D2D NAS share).
2. Click Next. Note below that the Source device is now a D2D NAS share, or in Data Protector terminology a File Library.
3. Select an LTO5 drive in the HP MSL G3 Library to create the copy.
4. This shows the full path of the HP Data Protector File library and the file that represents the backup. 5. In this case the File Library was in 64K block format and needed to be re-packaged because the LTO5 block size was set to 256K as can be seen in the section underlined in red below. The copy was successful.
Appendix D
Making use of improved D2D performance in 2.1.01 and 1.1.01 software
Overview
HP StoreOnce D2D software released in February 2011 includes significant performance stabilization updates that reduce the disk access overhead of the deduplication process and therefore improve overall system performance. However, this performance improvement only applies to D2D virtual devices (NAS Shares and VTLs) created after updating to the new software.
Replication for Virtual Device Migration
This method involves using two D2D Backup Systems and has the benefit that it does not require additional disk space to be available on the existing D2D Backup System to work.
Step 3 – Recover Data to new VTL/Share
1. Run the replication recovery wizard on the original D2D appliance; this will reverse-replicate the data from the replication target device back to the new source device.
2. Wait for replication to synchronize the devices.
3. Remove the replication mapping from either the source or target D2D Web Management Interface.
Self Replication for Virtual Device Migration
Self replication is the process of replicating data between two devices on the same D2D Backup System. This model requires that there is sufficient disk space on the D2D Backup System to hold two copies of the data being migrated but, with 2.1.00 and 1.1.00 software, a replication license is not required for self replication. If migrating several devices, it may be necessary to do them serially in order to preserve disk space.
Step 1 – Self replicate data for migration
1. Create a new VTL or Share on the D2D Backup System; this will be the new location for the migrated data. It is not possible to use the same Share or Library names as the original, or to use the same WWN/serial numbers for VTL devices.
2. Add a new replication target device by providing the D2D Backup System's own IP address or FQDN (Fully Qualified Domain Name).
Figure: Replication device self replication migration. Step 1 – Break existing replication mapping; Step 2 – Replicate to new VTL/Share on the same D2D (self replication); Step 3 – Create new replication mapping.
Use this model if migrating devices that are already part of an existing replication mapping.
5. Remove the replication mappings on both D2D Backup Systems.
6. Remove the appliance addresses from the list of replication target appliances on both D2D Backup Systems.
7. “Connect” the backup media server to the new source device. For example:
- Mount the NFS share
- Discover the iSCSI VTL device and connect
- Zone the FC SAN so that the host can access the new VTL
Configuring Self Migration from the D2D Web Management Interface
The HP StoreOnce Backup System user guide provides step by step instructions on how to configure replication mappings on the Web Management Interface. However, there are some differences when configuring Self Replication. This chapter provides a simple step by step guide to migrating a NAS share using self replication.
2. The new share has now been created and after a few seconds is online. At this point there is no user data stored in that share. 3. The next step is to begin configuring replication to migrate the data. Select Add Target Appliance from the Replication–Partner Appliances–Target Appliances page on the Web Management Interface. 4. Enter the IP address or FQDN (Fully Qualified Domain Name) of the D2D in Target Appliance Address. Note that this is the address of the local system.
5. Upon successful completion the local appliance will be added to the Target Appliances list. 6. Go to the Replication – NAS Mappings page, select the share to be replicated (i.e. the original share with backup data in it) and click Start Replication Wizard.
7. There are two main steps in the Wizard, the first is to select the target appliance from a list. This list will only contain the information about the local D2D appliance and will be highlighted already. Click Next. 8. Select the Target Share (this is the new share that was created earlier in the process and is the target for replication) and click Next.
9. After completing the wizard, replication will begin synchronizing the data between the two shares. Synchronization will take some time to complete because all data must be replicated to the new device. Once complete, the status will change to Synchronized, which means that the same data is present in both shares. (Note that the size on disk may be slightly different due to reporting inaccuracy and a slight difference in deduplication ratio achieved.)
11. Now reconfigure the backup application to use the new D2D share as a backup target device, i.e. retarget backups to the new share. This should be done prior to deleting the original share, to ensure the migration has been successful and that the backup application can access the new share.
For more information
To read more about the HP D2D Backup System, go to www.hp.com/go/D2D

© Copyright 2011-2012 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice. The only warranties for HP products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty.