Specifications
Policy and Control
Policy-based networking is a powerful concept that enables efficient management of devices in the network,
especially within virtualized configurations, and can be used to provide granular network access control. The policy
and control capabilities should allow organizations to centralize policy management while at the same time offer
distributed and even layered enforcement. The network policy and control solution should provide appropriate
levels of access control, policy creation and management, and network and service management, ensuring secure
and reliable networks for all applications. The data center network infrastructure should also integrate easily
into customers’ existing management frameworks and third-party tools such as IBM Tivoli and HP software, while
providing best-in-class centralized management, monitoring, and reporting services for network services and
infrastructure.
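The pattern of centralized policy definition with distributed enforcement can be illustrated with a minimal sketch. The roles, postures, and zone names below are hypothetical examples, not values taken from this guide; real enforcement would be performed by access switches, firewalls, and a policy server rather than a script.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AccessRequest:
    """A request as seen at an enforcement point (for example, an access switch port)."""
    user_role: str         # hypothetical role name
    device_posture: str    # e.g., "compliant" or "unknown"
    destination_zone: str  # e.g., "pci" or "general"

# Policy is defined centrally once, then distributed to every enforcement point.
POLICY = [
    {"role": "dba",      "posture": "compliant", "zone": "pci",     "action": "permit"},
    {"role": "employee", "posture": "compliant", "zone": "general", "action": "permit"},
]

def enforce(request: AccessRequest) -> str:
    """Evaluate the request against the centrally managed rules; default deny."""
    for rule in POLICY:
        if (rule["role"] == request.user_role
                and rule["posture"] == request.device_posture
                and rule["zone"] == request.destination_zone):
            return rule["action"]
    return "deny"

print(enforce(AccessRequest("dba", "compliant", "pci")))   # permit
print(enforce(AccessRequest("guest", "unknown", "pci")))   # deny
```

Because every enforcement point evaluates the same rule set, policy changes are made once centrally and take effect consistently across the network, which is the property the preceding paragraph calls for.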
Quality of Service (QoS)
For optimal network performance, QoS is a key requirement. QoS levels must be properly assigned and managed
to ensure satisfactory performance for the various applications throughout the data center and across the entire
LAN. A minimum of six QoS levels is recommended; each of the following determines a priority for the allocation
of resources (a classification sketch follows the list):
• Gold Application Priority
• Silver Application Priority
• Bronze Application Priority
• Voice
• Video
• Control Plane
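As a simple illustration of how these six classes might be expressed in an operational tool, the sketch below maps each class to a queue priority and a DSCP marking. The specific code points are conventional examples (EF for voice, CS6 for control), not values prescribed by this guide.

```python
# Illustrative mapping of the six recommended classes to queue priority and DSCP
# code points; the exact values are assumptions chosen for the example.
QOS_CLASSES = {
    "control-plane": {"priority": 0, "dscp": 48},  # CS6  - protocol/control traffic
    "voice":         {"priority": 1, "dscp": 46},  # EF   - most latency/jitter sensitive
    "video":         {"priority": 2, "dscp": 34},  # AF41
    "gold":          {"priority": 3, "dscp": 26},  # AF31 - critical applications
    "silver":        {"priority": 4, "dscp": 18},  # AF21
    "bronze":        {"priority": 5, "dscp": 0},   # best effort
}

def classify(class_name: str) -> int:
    """Return the DSCP value with which traffic of the given class is marked."""
    return QOS_CLASSES[class_name]["dscp"]

assert classify("voice") == 46
assert classify("bronze") == 0
```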
In MPLS networks, traffic engineering capabilities are typically deployed to allow configuration of Label Switched
Paths (LSPs) with the Resource Reservation Protocol (RSVP) or the Label Distribution Protocol (LDP). This is
especially critical for voice and video deployments, as QoS can mitigate latency and jitter by sending traffic along
preferred paths, or by enabling fast reroute in anticipation of performance problems or failures. The LAN design
should allow the flexibility to assign multiple QoS levels based upon an end-to-end assessment, and should allow
rapid and efficient management to ensure end-to-end QoS throughout the enterprise.
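The idea of steering latency- and jitter-sensitive traffic onto preferred paths, with a pre-established alternative available on failure, can be sketched as follows. The LSP names and thresholds are hypothetical; in practice path selection is performed by RSVP-TE/LDP signaling and the routers' constrained path computation, not by an external script.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class LSP:
    name: str
    latency_ms: float
    jitter_ms: float
    up: bool

def select_path(lsps: List[LSP], max_latency: float, max_jitter: float) -> Optional[LSP]:
    """Pick the lowest-latency LSP that is up and meets the latency/jitter bounds.

    Falling through to a pre-established alternative when the primary is down
    mirrors the role of fast reroute: traffic shifts to a backup path without
    waiting for a new path to be signaled end to end.
    """
    candidates = [l for l in lsps
                  if l.up and l.latency_ms <= max_latency and l.jitter_ms <= max_jitter]
    return min(candidates, key=lambda l: l.latency_ms, default=None)

paths = [
    LSP("lsp-primary", latency_ms=4.0, jitter_ms=0.5, up=False),  # failed link
    LSP("lsp-backup",  latency_ms=7.0, jitter_ms=0.8, up=True),
]
chosen = select_path(paths, max_latency=10.0, max_jitter=1.0)
print(chosen.name if chosen else "no compliant path")  # lsp-backup
```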
High Performance
To effectively address performance requirements related to virtualization, server centralization and data center
consolidation, the data center network must offer high-capacity throughput and processing power with minimal
latency. The data center LAN also must boost the performance of all application traffic, be it local or remote. The
data center must offer a LAN-like user experience for all enterprise users regardless of their physical location. In
order to accomplish this, the data center network must enable optimization for applications, servers, storage and
network performance.
WAN optimization techniques including data compression, TCP and application protocol acceleration, bandwidth
allocation, and traffic prioritization are used to improve performance of WAN traffic. These techniques can also
be applied to data replication, backup and restoration between data centers and remote sites, including disaster
recovery sites.
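As a rough illustration of why compression alone already helps bulk replication and backup traffic, the sketch below compresses a highly redundant payload with Python's standard zlib module and reports the ratio. Real WAN optimization platforms layer dictionary-based deduplication, TCP and application-protocol acceleration, and traffic prioritization on top of this.

```python
import zlib

def compress_for_wan(payload: bytes, level: int = 6) -> bytes:
    """Compress a replication/backup payload before it crosses the WAN."""
    return zlib.compress(payload, level)

# Highly redundant data, typical of backups and replication streams,
# compresses well; already-compressed or encrypted data does not.
sample = b"database-page:" + b"0" * 4096
compressed = compress_for_wan(sample)
ratio = len(sample) / len(compressed)
print(f"{len(sample)} bytes -> {len(compressed)} bytes ({ratio:.1f}:1)")
```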
Beyond WAN optimization, critical infrastructure components such as routers, switches, firewalls, remote access
platforms and other security devices must be built on a non-blocking, modular architecture. This ensures that they
have the performance characteristics needed to handle the higher volumes of mixed traffic types associated with
centralization and consolidation, as well as the needs of users operating around the globe.
Juniper Networks Design Approach
The network infrastructure in today’s data center is no longer sufficient to satisfy these requirements. Instead of
adding costly layers of legacy equipment and highly skilled IT resources to support the growing number of single-
function, low-density devices and services in the enterprise, a new, more integrated and consolidated data center
solution is needed. High-density, multifunction devices are needed in the new data center LAN. Such devices
can help collapse costly latency-inducing layers, increase performance, decrease logical and physical cabling
complexities, decrease choke points, decrease configuration and management tasks and increase reliability—all
while decreasing TCO as well as ongoing rack and floor space, power, and cooling costs.










