2.5.2 Link Layer
There are 128 Flit (Flow control unit of transfer) link layer credits to be split between
the VN0 and VNA virtual channels from the IIO. One VN0 credit is used per Intel® QPI
message class in the normal configuration, which consumes a total of 26 Flits in the Flit
buffer. For UP systems, with the six Intel® QPI message classes supported, this leaves
the remaining 102 Flits to be used for VNA credits. For DP systems, the route-through
VN0 traffic requires a second VN0 credit per channel to be allocated, making a
minimum of 52 Flits consumed by CPU and route-through traffic and leaving 76 Flits
to be split between CPU and route-through VNA traffic. A bias register is
implemented to allow configurability of how the 72 Flits are split between CPU and
route-through traffic. The default sharing of the VNA credits is 36/36, but the biasing
registers can be used to give more credits to either normal or route-through traffic.
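To make the arithmetic above concrete, the following sketch recomputes the UP and DP credit splits from the quoted numbers. The constants and the even 36/36 default follow the text; the program structure and names are illustrative and do not represent any actual register interface.

```c
/*
 * Sketch of the link-layer credit split described in Section 2.5.2.
 * Constants mirror the numbers quoted in the text; names are illustrative.
 */
#include <stdio.h>

#define TOTAL_LINK_CREDITS 128  /* total flit credits in the IIO flit buffer */
#define VN0_FLITS_PER_SET   26  /* flits consumed by one VN0 credit per message class */

int main(void)
{
    /* UP system: one VN0 credit per message class */
    int up_vn0 = VN0_FLITS_PER_SET;
    int up_vna = TOTAL_LINK_CREDITS - up_vn0;          /* 102 flits for VNA */

    /* DP system: a second VN0 credit per class for route-through traffic */
    int dp_vn0 = 2 * VN0_FLITS_PER_SET;                /* 52 flits */
    int dp_vna = TOTAL_LINK_CREDITS - dp_vn0;          /* 76 flits */

    /* Default bias: split the shared VNA credits evenly (36/36 per the text);
       a bias register could skew this toward CPU or route-through traffic. */
    int cpu_share = 36, rt_share = 36;

    printf("UP: VN0=%d VNA=%d\n", up_vn0, up_vna);
    printf("DP: VN0=%d VNA=%d (CPU %d / route-through %d by default)\n",
           dp_vn0, dp_vna, cpu_share, rt_share);
    return 0;
}
```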
2.5.2.1 Link Error Protection
Error detection is done in the link layer using CRC; an 8-bit CRC is supported. However,
link layer retry (LLR) is not supported and must be disabled by the BIOS.
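For illustration only, a generic bit-serial CRC-8 is sketched below. The datasheet states only that an 8-bit CRC is used; the polynomial (0x07) and initial value here are assumptions chosen for the example, not the Intel® QPI CRC definition.

```c
/* Generic bit-serial CRC-8 sketch to illustrate link-layer error detection.
 * Polynomial 0x07 and initial value 0x00 are assumptions for illustration. */
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

static uint8_t crc8(const uint8_t *data, size_t len)
{
    uint8_t crc = 0x00;                      /* assumed initial value */
    for (size_t i = 0; i < len; i++) {
        crc ^= data[i];
        for (int bit = 0; bit < 8; bit++)
            crc = (crc & 0x80) ? (uint8_t)((crc << 1) ^ 0x07)
                               : (uint8_t)(crc << 1);
    }
    return crc;
}

int main(void)
{
    uint8_t flit[8] = { 0xDE, 0xAD, 0xBE, 0xEF, 0x00, 0x11, 0x22, 0x33 };
    printf("CRC-8 over sample payload: 0x%02X\n", crc8(flit, sizeof flit));
    return 0;
}
```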
2.5.2.2 Message Class
The link layer defines six Message Classes. The IIO supports four of those message
classes for receiving and all six for sending. Table 63 shows the message class details.
Arbitration for sending requests between message classes uses a simple round-robin
among the classes with available credits.
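A minimal sketch of that arbitration policy is shown below: the arbiter rotates through the six message classes of Table 63 and selects the next class that has both a pending message and an available credit. The pending/credit state and starting point are illustrative.

```c
/* Round-robin send arbitration among message classes with available credits. */
#include <stdio.h>

enum { SNP, HOM, DRS, NDR, NCB, NCS, NUM_CLASSES };
static const char *class_name[NUM_CLASSES] = { "SNP", "HOM", "DRS", "NDR", "NCB", "NCS" };

/* Returns the class selected to send, or -1 if nothing is eligible.
 * 'last' is the class that won the previous arbitration round. */
static int rr_arbitrate(int last, const int pending[], const int credits[])
{
    for (int i = 1; i <= NUM_CLASSES; i++) {
        int c = (last + i) % NUM_CLASSES;
        if (pending[c] > 0 && credits[c] > 0)
            return c;
    }
    return -1;
}

int main(void)
{
    int pending[NUM_CLASSES] = { 1, 2, 1, 0, 1, 0 };
    int credits[NUM_CLASSES] = { 1, 0, 3, 2, 1, 1 };   /* HOM blocked: no credit */
    int last = NCS;

    for (int round = 0; round < 4; round++) {
        int winner = rr_arbitrate(last, pending, credits);
        if (winner < 0)
            break;
        printf("round %d: %s sends\n", round, class_name[winner]);
        pending[winner]--;
        credits[winner]--;
        last = winner;
    }
    return 0;
}
```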
2.5.2.3 Link-Level Credit Return Policy
The credit return policy requires that when a packet is removed from the link layer
receive queue, the credit for that packet/flit be returned to the sender. Credits for VNA
are tracked at flit granularity, while VN0 credits are tracked at packet granularity.
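The following sketch illustrates that policy under the stated assumptions: when a packet leaves the receive queue, a VNA packet returns one credit per flit it occupied, while a VN0 packet returns a single packet credit. The structure and function names are hypothetical.

```c
/* Credit return on removal from the receive queue:
 * VNA credits are flit-granular, VN0 credits are packet-granular. */
#include <stdio.h>
#include <stdbool.h>

struct credit_return {
    int vna_flits;    /* VNA credits returned, one per flit of the packet */
    int vn0_packets;  /* VN0 credits returned, one per packet */
};

/* Called when a packet of 'flit_count' flits is removed from the receive queue. */
static struct credit_return release_packet(int flit_count, bool used_vna)
{
    struct credit_return ret = { 0, 0 };
    if (used_vna)
        ret.vna_flits = flit_count;   /* flit-granular accounting */
    else
        ret.vn0_packets = 1;          /* packet-granular accounting */
    return ret;
}

int main(void)
{
    struct credit_return a = release_packet(9, true);   /* e.g. a data packet on VNA */
    struct credit_return b = release_packet(1, false);  /* a header-only packet on VN0 */
    printf("VNA flit credits returned: %d\n", a.vna_flits);
    printf("VN0 packet credits returned: %d\n", b.vn0_packets);
    return 0;
}
```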
2.5.2.4 Ordering
The IIO link layer keeps ordering independent for each message class. Credit management
is kept independent on VN0. This ensures that each message class may bypass the
others in blocking conditions.
Ordering is not assumed within a single Message Class, except for the Home Message
Class. The Home Message Class coherence conflict resolution requires ordering
between transactions corresponding to the same cache line address.
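A small sketch of the Home-class rule under an assumed 64-byte cache line: transactions to the same cache-line address may not bypass one another, while transactions to different lines carry no such constraint. The names and line size are illustrative.

```c
/* HOM ordering rule sketch: same-cache-line transactions stay in order. */
#include <stdint.h>
#include <stdio.h>
#include <stdbool.h>

#define CACHELINE_SHIFT 6   /* 64-byte cache line assumed for illustration */

struct hom_entry {
    uint64_t addr;
    int      seq;
};

/* Returns true if 'later' may be issued ahead of 'earlier'.
 * Only same-cache-line HOM transactions are forced to stay in order. */
static bool may_bypass(const struct hom_entry *earlier, const struct hom_entry *later)
{
    return (earlier->addr >> CACHELINE_SHIFT) != (later->addr >> CACHELINE_SHIFT);
}

int main(void)
{
    struct hom_entry a = { 0x1000, 0 };
    struct hom_entry b = { 0x1020, 1 };   /* same 64-byte line as 'a' */
    struct hom_entry c = { 0x2000, 2 };   /* different line */

    printf("b may bypass a: %s\n", may_bypass(&a, &b) ? "yes" : "no");  /* no  */
    printf("c may bypass a: %s\n", may_bypass(&a, &c) ? "yes" : "no");  /* yes */
    return 0;
}
```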
Table 63. Supported Intel® QPI Message Classes

Message Class | Description | Send Support | Receive Support
SNP | Snoop Channel. Used for snoop commands to caching agents. | Yes | No
HOM | Home Channel. Used by coherent home nodes for requests and snoop responses to home. Channel is preallocated and guaranteed to sink all requests and responses allowed on this channel. | Yes | No
DRS | Response Channel Data. Used for responses with data and for EWB data packets to home nodes. This channel must also be guaranteed to sink at a receiver without dependence on other VC. | Yes | Yes
NDR | Response Channel Non-Data. | Yes | Yes
NCB | Non-Coherent Bypass. | Yes | Yes
NCS | Non-Coherent Standard. | Yes | Yes