The Routing Engine maintains the routing tables and controls the routing
protocols. It consists of an Intel-based PCI platform running JUNOS software.
Another key architectural component is the Miscellaneous Control Subsystem
(MCS), which provides SONET/SDH clocking and works with the Routing Engine
to provide control and monitoring functions.
The architecture ensures industry-leading service delivery by cleanly separating
the forwarding performance from the routing performance. This separation
ensures that stress experienced by one component does not adversely affect the
performance of the other since there is no overlap of required resources. Routing
fluctuations and network instability do not limit the forwarding of packets. The use
of ASICs ensures that the forwarding table maintains a steady state, which is
particularly beneficial during times of network instability.
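In software terms, the same principle can be sketched as a routing process that rebuilds the forwarding table in the background and installs each new version with a single atomic swap, while the forwarding path keeps reading whatever snapshot is currently installed. The Go sketch below is purely illustrative; the table contents, interface names, and update cadence are invented for the example and do not describe JUNOS internals.

package main

import (
    "fmt"
    "sync/atomic"
    "time"
)

// fib is an illustrative forwarding table: destination prefix -> next hop.
type fib map[string]string

func main() {
    // The forwarding path always reads a complete, immutable snapshot.
    var current atomic.Value
    current.Store(fib{"10.0.0.0/8": "ge-0/0/0"})

    // Control plane: recompute the table and swap it in atomically.
    // A routing flap here never stalls the forwarding loop below.
    go func() {
        for i := 0; ; i++ {
            next := fib{
                "10.0.0.0/8":     "ge-0/0/0",
                "192.168.0.0/16": fmt.Sprintf("so-%d/0/0", i%4),
            }
            current.Store(next) // single atomic pointer swap
            time.Sleep(100 * time.Millisecond)
        }
    }()

    // Forwarding path: lookups proceed against whichever snapshot is installed.
    for i := 0; i < 5; i++ {
        table := current.Load().(fib)
        fmt.Println("192.168.0.0/16 ->", table["192.168.0.0/16"])
        time.Sleep(150 * time.Millisecond)
    }
}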
[Figure: M160 forwarding-path architecture. PICs connect through FPCs, each with four I/O Manager ASICs and Packet Director (PD in/PD out) ASICs, to the SFMs, each of which contains an Internet Processor II ASIC and Distributed Buffer Manager ASICs.]
Leading-edge ASICs
The feature-rich M160 ASICs deliver a comprehensive hardware-based system
for route lookups, filtering, sampling, load balancing, buffer management,
switching, encapsulation, and de-encapsulation functions. To ensure a
nonblocking forwarding path, all channels between the ASICs are oversized,
dedicated paths.
Internet Processor II ASICs
Each of the four Internet Processor II ASICs (one per SFM) supports a lookup rate
of over 40 Mpps (for a routing table with 80,000 entries). With over one million
gates, the Internet Processor II ASIC delivers high-speed forwarding performance
with advanced services, such as filtering and sampling, enabled. It is the largest,
fastest, and most advanced ASIC ever implemented on a router platform and
deployed in the Internet.
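For context on what a route lookup involves, the sketch below performs a longest-prefix match in software over a handful of prefixes; the Internet Processor II implements the equivalent operation in dedicated hardware, and its actual lookup algorithm and data structures are not described here. The prefixes and interface names are invented for the example.

package main

import (
    "fmt"
    "net"
)

// route pairs a destination prefix with an outgoing interface.
type route struct {
    prefix  *net.IPNet
    nextHop string
}

// lookup returns the route with the longest prefix containing ip,
// mirroring in software what the lookup ASIC does per packet in hardware.
func lookup(table []route, ip net.IP) (route, bool) {
    best := -1
    var match route
    for _, r := range table {
        if r.prefix.Contains(ip) {
            if ones, _ := r.prefix.Mask.Size(); ones > best {
                best, match = ones, r
            }
        }
    }
    return match, best >= 0
}

func main() {
    var table []route
    for _, e := range []struct{ cidr, nh string }{
        {"0.0.0.0/0", "so-0/0/0"},   // default route
        {"10.0.0.0/8", "ge-1/0/0"},
        {"10.1.0.0/16", "ge-2/0/0"}, // more specific, should win
    } {
        _, n, _ := net.ParseCIDR(e.cidr)
        table = append(table, route{prefix: n, nextHop: e.nh})
    }

    if r, ok := lookup(table, net.ParseIP("10.1.2.3")); ok {
        fmt.Println("10.1.2.3 ->", r.nextHop) // prints ge-2/0/0
    }
}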
Distributed Buffer Manager ASICs
The Distributed Buffer Manager ASICs allocate incoming data packets throughout
shared memory on the FPCs. This single-stage buffering improves performance
by requiring only one write to and one read from shared memory. There are no
extraneous steps of copying packets from input buffers to output buffers. The
shared memory is completely nonblocking, which, in turn, prevents head-of-line
blocking.
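A rough software analogue of single-stage buffering, assuming nothing about the ASIC's internal layout: the packet is written into a shared pool exactly once, only a small descriptor travels from the input stage to the output stage, and the output stage reads the same buffer directly, so no input-to-output copy is ever made. All names below are hypothetical.

package main

import "fmt"

// descriptor points into the shared buffer pool; only this small value
// moves between the input and output stages, never the payload itself.
type descriptor struct {
    index  int
    length int
}

// sharedPool stands in for the shared memory spread across the FPCs.
type sharedPool struct {
    cells [][]byte
}

// writeOnce stores the packet a single time and hands back a descriptor.
func (p *sharedPool) writeOnce(pkt []byte) descriptor {
    p.cells = append(p.cells, pkt)
    return descriptor{index: len(p.cells) - 1, length: len(pkt)}
}

// readOnce retrieves the packet for transmission without an extra copy.
func (p *sharedPool) readOnce(d descriptor) []byte {
    return p.cells[d.index][:d.length]
}

func main() {
    pool := &sharedPool{}

    // Input stage: one write into shared memory.
    d := pool.writeOnce([]byte("example packet"))

    // Output stage: one read from shared memory; no intermediate
    // input-buffer-to-output-buffer copy is needed.
    fmt.Printf("transmit %d bytes: %q\n", d.length, pool.readOnce(d))
}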
Packet Director ASICs
The Packet Director ASICs balance and distribute packet loads across the four
I/O Manager ASICs per FPC. Since each SFM represents 40 Mpps of lookup and
40 Gbps of throughput, and since the Packet Director ASICs balance traffic
across the I/O Manager ASICs before it is forwarded to the SFM, the aggregate
throughput is 160 Gbps.
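The balancing algorithm used by the Packet Director is not specified in this text; a common way to achieve this kind of distribution is to hash per-flow fields so that each flow stays on one path (preserving packet order) while distinct flows spread across all four I/O Manager ASICs. The Go sketch below illustrates that general technique only; the flow key and hash are assumptions for the example.

package main

import (
    "fmt"
    "hash/fnv"
)

// selectManager picks one of the four I/O Manager ASICs for a flow.
// Hashing the flow key keeps packets of one flow in order on a single
// path while spreading distinct flows across all four managers.
func selectManager(srcIP, dstIP string, srcPort, dstPort uint16) int {
    h := fnv.New32a()
    fmt.Fprintf(h, "%s|%s|%d|%d", srcIP, dstIP, srcPort, dstPort)
    return int(h.Sum32() % 4)
}

func main() {
    flows := [][2]string{
        {"10.0.0.1", "192.168.1.1"},
        {"10.0.0.2", "192.168.1.1"},
        {"10.0.0.3", "192.168.1.2"},
        {"10.0.0.4", "192.168.1.3"},
    }
    for _, f := range flows {
        fmt.Printf("%s -> %s uses I/O Manager %d\n",
            f[0], f[1], selectManager(f[0], f[1], 1024, 80))
    }
}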
I/O Manager ASICs
The I/O Manager ASICs support wire-rate packet parsing, packet prioritizing, and
queuing. Each I/O Manager ASIC divides the packets, stores them in shared
memory (managed by the Distributed Buffer Manager), and re-assembles the
packets for transmission.
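A minimal sketch of the divide-and-reassemble step, assuming a fixed cell size chosen purely for illustration (the actual cell format used by the I/O Manager is not given in this text):

package main

import (
    "bytes"
    "fmt"
)

// cellSize is an assumed fixed cell size used only for this example.
const cellSize = 64

// divide splits a packet into fixed-size cells before they are written
// into shared memory.
func divide(pkt []byte) [][]byte {
    var cells [][]byte
    for off := 0; off < len(pkt); off += cellSize {
        end := off + cellSize
        if end > len(pkt) {
            end = len(pkt)
        }
        cells = append(cells, pkt[off:end])
    }
    return cells
}

// reassemble concatenates the cells back into the original packet for
// transmission on the outbound PIC.
func reassemble(cells [][]byte) []byte {
    return bytes.Join(cells, nil)
}

func main() {
    pkt := bytes.Repeat([]byte("x"), 150) // a 150-byte example packet
    cells := divide(pkt)
    fmt.Printf("%d bytes split into %d cells\n", len(pkt), len(cells))
    fmt.Println("reassembly matches original:", bytes.Equal(reassemble(cells), pkt))
}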