Specifications
Network
• Use a 10 Gbps NIC: Flash-based storage devices from Fusion-io (or similar products from OCZ, LSI, etc.) can write data at speeds of 750 MB/sec or more. A 1 Gbps NIC can push a theoretical maximum of only about 125 MB/sec, so anyone taking advantage of an ioDrive's potential can easily write data much faster than a 1 Gbps network connection could replicate it. To ensure that you have sufficient bandwidth between servers for real-time data replication, always use a 10 Gbps NIC to carry replication traffic.
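The 125 MB/sec figure above is simply the raw line rate divided by 8 bits per byte; real throughput is lower once protocol overhead is subtracted. A quick sketch of the arithmetic:

```shell
# Theoretical payload ceiling of a link, ignoring TCP/IP and Ethernet overhead.
# 1 Gbps = 1,000,000,000 bits/sec; divide by 8 for bytes/sec.
link_gbps=1
echo "$(( link_gbps * 1000000000 / 8 / 1000000 )) MB/sec"   # 1 Gbps  -> 125 MB/sec

link_gbps=10
echo "$(( link_gbps * 1000000000 / 8 / 1000000 )) MB/sec"   # 10 Gbps -> 1250 MB/sec
```

Even a 10 Gbps link tops out near 1250 MB/sec, which is why the replication network, not the flash device, is usually the bottleneck on 1 Gbps hardware.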
• Enable Jumbo Frames: Assuming that your network cards and switches support it, enabling jumbo frames can greatly increase your network's throughput while reducing CPU cycles. To enable jumbo frames, perform the following configuration (example on a RedHat/CentOS/OEL Linux distribution):
  • Run the following command:
    ifconfig <interface_name> mtu 9000
  • To ensure the change persists across reboots, add "MTU=9000" to the following file:
    /etc/sysconfig/network-scripts/ifcfg-<interface_name>
  • To verify end-to-end jumbo frame operation, run the following command:
    ping -s 8900 -M do <IP-of-other-server>
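On newer distributions where ifconfig has been deprecated in favor of iproute2, the same change can be made with the ip command. A sketch, assuming eth1 is the interface carrying replication traffic:

```shell
# Equivalent jumbo-frame setup with iproute2 (eth1 is an assumed interface name).
# Takes effect immediately but is not persistent across reboots.
ip link set dev eth1 mtu 9000

# Confirm the interface now reports the larger MTU.
ip link show dev eth1 | grep mtu
```

The ping test above still applies: the -M do flag forbids fragmentation, so a successful 8900-byte ping proves every hop between the servers accepts jumbo frames.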
• Change the NIC's transmit queue length:
  • Run the following command:
    /sbin/ifconfig <interface_name> txqueuelen 10000
  • To preserve the setting across reboots, add the command to /etc/rc.local.
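Because txqueuelen is not stored in the ifcfg files, it must be reapplied at each boot. A minimal /etc/rc.local fragment, again assuming eth1 is the replication interface:

```shell
# /etc/rc.local excerpt: reapply non-persistent NIC settings at boot.
# eth1 is an assumed interface name; substitute your replication NIC.
/sbin/ifconfig eth1 txqueuelen 10000
```

Note that /etc/rc.local must be executable for these commands to run at boot.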
• Change the NIC's netdev_max_backlog:
  • Set the following in /etc/sysctl.conf:
    net.core.netdev_max_backlog = 100000
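Entries in /etc/sysctl.conf are only read at boot; to apply the value to the running kernel without rebooting, a sketch (requires root):

```shell
# Set the backlog on the live kernel; does not touch /etc/sysctl.conf.
sysctl -w net.core.netdev_max_backlog=100000

# Read the value back to confirm it took effect.
sysctl net.core.netdev_max_backlog
```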
TCP/IP Tuning
• TCP/IP tuning that has been shown to increase replication performance:
  • Edit /etc/sysctl.conf and add the following parameters (Note: These are examples and may vary according to your environment):
    net.core.rmem_default = 16777216
    net.core.wmem_default = 16777216
    net.core.rmem_max = 16777216
    net.core.wmem_max = 16777216
    net.ipv4.tcp_rmem = 4096 87380 16777216
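After editing /etc/sysctl.conf, the whole file can be loaded into the running kernel and the buffer sizes read back. A sketch (sysctl -p requires root; the /proc read does not):

```shell
# Load every setting from /etc/sysctl.conf into the live kernel (root required).
sysctl -p

# Read back the TCP receive buffer tuple (min, default, max) to verify.
cat /proc/sys/net/ipv4/tcp_rmem
```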
SteelEye Protection Suite for Linux