
10 Mixed-speed environments: Integrating 1GbE and 10GbE SANs
With the introduction of 10GbE, some situations require 1Gb arrays and 10Gb arrays to coexist in the same SAN infrastructure. PS Series arrays support operation of 1Gb and 10Gb arrays within the same group.
This section summarizes mixed-speed SAN design guidelines that are presented in the following publications:
- PS Series Architecture MPIO with Devices Having Unequal Link Speeds
- Deploying a Mixed 1Gb/10Gb Ethernet SAN using Dell Storage PS Series Arrays
The potential advantages of running a mixed-speed (1GbE and 10GbE) PS Series SAN include:
- Not all application workloads on a SAN require the storage I/O performance that 10Gb arrays provide. SAN administrators therefore gain additional storage tiering flexibility based on array I/O performance.
- The PS Series Group Manager allows the SAN administrator to manage both types of arrays within the same SAN group.
- The ability to mix 1Gb and 10Gb arrays supports seamless operational coexistence during migration to a 10Gb SAN.
10.1 Mixed-speed SAN best practices
The following list summarizes the important SAN design considerations for integrating 10Gb PS Series arrays
into existing 1Gb PS Series SANs.
- When integrating 10Gb switches into an existing 1Gb switching environment, how the mixed-speed switches are interconnected (split or straight uplink) does not have a significant impact on performance, as long as the uplinks are sized appropriately for your workloads.
  - If the 1Gb switches are configured as a stack, use the split interconnect pattern described in Deploying a Mixed 1Gb/10Gb Ethernet SAN using Dell Storage PS Series Arrays.
  - If the 1Gb switches are not stacked, use the straight interconnect pattern.
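Regardless of the interconnect pattern, the uplinks must carry the aggregate traffic crossing between the 1Gb and 10Gb tiers. The sketch below is a rough, worst-case sizing illustration (not a Dell sizing tool): it assumes every active 1Gb port can stream at line rate simultaneously, and the `headroom` factor and example port counts are assumptions for illustration only; real sizing should be based on measured workloads.

```python
# Rough uplink sizing sketch (illustrative only; size real uplinks
# from measured workload data). Worst-case assumption: every active
# 1Gb port streams at line rate across the 1Gb/10Gb uplink at once.

def uplinks_needed(active_1gb_ports, uplink_speed_gbps=10, headroom=0.8):
    """Number of 10Gb uplinks needed to carry the aggregate 1Gb-side
    traffic while keeping utilization at or below `headroom`."""
    aggregate_gbps = active_1gb_ports * 1.0           # 1Gb per active port
    usable_per_uplink = uplink_speed_gbps * headroom  # leave some slack
    return int(-(-aggregate_gbps // usable_per_uplink))  # ceiling division

# Example: 12 active 1Gb array ports -> 12Gb aggregate demand,
# 8Gb usable per 10Gb uplink -> 2 uplinks.
print(uplinks_needed(12))  # 2
```

The same arithmetic applies to either interconnect pattern; only the physical placement of the uplinks differs.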
- When connecting 1Gb switches and 10Gb switches together, always be aware of where Rapid Spanning Tree will block links, to make sure that 10Gb traffic (such as PS Series inter-array data flow) never crosses a 1Gb switch.
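Rapid Spanning Tree forwards on the port with the lowest cumulative path cost to the root bridge and blocks the alternatives, which is why link placement determines where traffic flows. The sketch below uses the IEEE 802.1D-2004 recommended port path costs (20,000 for 1Gb, 2,000 for 10Gb) against a hypothetical two-path topology to show why a direct 10Gb inter-switch link is preferred over a detour through 1Gb switches:

```python
# RSTP forwards on the lowest cumulative path cost to the root bridge.
# IEEE 802.1D-2004 recommended port path costs, keyed by link speed:
PATH_COST = {1: 20_000, 10: 2_000}   # Gb/s -> path cost

def root_path_cost(link_speeds_gbps):
    """Cumulative RSTP cost of a candidate path, one entry per hop."""
    return sum(PATH_COST[speed] for speed in link_speeds_gbps)

# Hypothetical topology: two paths from a 10Gb switch to a root
# bridge on another 10Gb switch.
direct_10g   = root_path_cost([10])    # one 10Gb inter-switch link
via_1g_hops  = root_path_cost([1, 1])  # detour through 1Gb switches

# The direct 10Gb link costs far less, so RSTP forwards on it and
# blocks the detour -- inter-array traffic stays off the 1Gb tier.
print(direct_10g < via_1g_hops)  # True
```

Verifying root bridge placement and port costs on the switches confirms that the blocked links fall where you expect.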
- Always configure pools and volumes in a way that minimizes impact to I/O performance.
  - Where possible, connect 1Gb hosts only to 1Gb arrays and 10Gb hosts only to 10Gb arrays (except when performing migration tasks). Intermixing speeds may cause oversubscription of ports and lead to high latency or high retransmits.
  - When adding 10Gb arrays, place them in separate pools from 1Gb arrays.
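The oversubscription risk from intermixing speeds can be illustrated with a simple ratio (an illustrative sketch, not a PS Series metric): a 10Gb array port can offer data to a 1Gb-connected host ten times faster than the host link can drain it, forcing the switch to buffer and, once buffers fill, dropping frames that trigger TCP retransmits.

```python
# Illustrative oversubscription ratio between an array port and the
# host port draining it. Ratios well above 1.0 mean the switch must
# buffer the excess; sustained overload leads to drops and retransmits.

def oversubscription(array_port_gbps, host_port_gbps):
    """Ratio of offered array bandwidth to host link capacity."""
    return array_port_gbps / host_port_gbps

print(oversubscription(10, 1))   # 10.0 -> 1Gb host on a 10Gb array
print(oversubscription(10, 10))  # 1.0  -> matched speeds, balanced
```

Keeping hosts and arrays at matched speeds, and 10Gb arrays in their own pools, keeps this ratio near 1.0 for normal operation.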