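* Run definitions: each rd executes its matching file workload definition (fwd4
* through fwd7) for 900 seconds at the maximum attainable rate (fwdrate=max),
* reporting results every 30 seconds, reusing the existing file structure
* (format=no), pausing 30 seconds between runs, and forcing file data to be
* synchronized to disk (openflags=fsync).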
rd=rd4,fwd=fwd4,elapsed=900,interval=30,fwdrate=max,format=no,pause=30,openflags=fsync
rd=rd5,fwd=fwd5,elapsed=900,interval=30,fwdrate=max,format=no,pause=30,openflags=fsync
rd=rd6,fwd=fwd6,elapsed=900,interval=30,fwdrate=max,format=no,pause=30,openflags=fsync
rd=rd7,fwd=fwd7,elapsed=900,interval=30,fwdrate=max,format=no,pause=30,openflags=fsync
**********************************************************************
Appendix B – Topology and configuration detail
For clients, multiple VMware ESXi 5.01 servers were configured. Three of the servers were connected to
the back-end SAN switches (and to SAN storage), while the other three were connected to the front-end
client LAN switches. Linux VMs were created on each ESXi server. For both the 1 Gb and 10 Gb test
scenarios, there were 16 clients for block tests and 16 clients for file tests.
Table 4    VM configuration

Purpose         RAM     CPU   Operating System
Block client    2 GB    1     Red Hat Enterprise Linux (RHEL) 6.3 32-bit
File client     2 GB    1     Red Hat Enterprise Linux (RHEL) 6.3 32-bit
The ESXi servers consisted of three Dell PowerEdge R610 servers for the block I/O clients and three Dell
PowerEdge R710 servers for the file I/O clients. For the 1 Gb (FS7600) tests, we used the onboard 1 GbE
network ports for management connectivity, and each server also had a pair of quad-port 1 GbE network
cards. For the 10 Gb (FS7610) tests, we again used the onboard 1 GbE network ports for management
connectivity, and each server also had a pair of dual-port 10 GbE network cards.
Table 5    ESXi server configuration

Type                   RAM     CPU   # cores per CPU   Operating System
Dell PowerEdge R610    24 GB   2     6                 vSphere ESXi 5.01
Dell PowerEdge R710    48 GB   2     6                 vSphere ESXi 5.01
For the 1 Gb network, our LAN and SAN switches each consisted of a pair of Dell Force10 S60 switches
with stacking modules. For the 10 Gb network, our LAN and SAN switches each consisted of a pair of Dell
Force10 S4810 switches, with two 40 Gb QSFP+ ports on each switch configured as a Link Aggregation
Group (LAG).
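For illustration, a LAG of this kind could be defined on each S4810 in FTOS along the following lines. This
is a minimal sketch, not the configuration used in these tests; the port-channel number and the fortyGigE
port IDs are hypothetical and depend on the actual cabling.

! Hypothetical static LAG between the two S4810 switches (FTOS sketch;
! port-channel number and 40 GbE port IDs are assumptions)
interface fortyGigE 0/48
 no shutdown
interface fortyGigE 0/52
 no shutdown
interface Port-channel 1
 description LAG to peer S4810
 switchport
 channel-member fortyGigE 0/48
 channel-member fortyGigE 0/52
 no shutdown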