Intel® IXP400 Software Programmer’s Guide April 2005 Document Number: 252539, Revision: 007
INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH INTEL® PRODUCTS. EXCEPT AS PROVIDED IN INTEL'S TERMS AND CONDITIONS OF SALE FOR SUCH PRODUCTS, INTEL ASSUMES NO LIABILITY WHATSOEVER, AND INTEL DISCLAIMS ANY EXPRESS OR IMPLIED WARRANTY RELATING TO SALE AND/OR USE OF INTEL PRODUCTS, INCLUDING LIABILITY OR WARRANTIES RELATING TO FITNESS FOR A PARTICULAR PURPOSE, MERCHANTABILITY, OR INFRINGEMENT OF ANY PATENT, COPYRIGHT, OR OTHER INTELLECTUAL PROPERTY RIGHT.
Contents (abridged)

1 Introduction
2 Software Architecture Overview
3 Buffer Management
4 Access-Layer Components: ATM Driver Access (IxAtmdAcc) API
5 Access-Layer Components: ATM Manager (IxAtmm) API
6 Access-Layer Components: ATM Transmit Scheduler (IxAtmSch) API
7 Access-Layer Components: Security (IxCryptoAcc) API
10 Access-Layer Components: Ethernet Database (IxEthDB) API
12 Access-Layer Components: Feature Control (IxFeatureCtrl) API
13 Access-Layer Components: HSS-Access (IxHssAcc) API
15 Access-Layer Components: NPE Message Handler (IxNpeMh) API
16 Access-Layer Components: Parity Error Notifier (IxParityENAcc) API
21 Access-Layer Components: UART-Access (IxUARTAcc) API
25 ADSL Driver

(Lists of figures and tables omitted.)
Revision History

April 2005, Revision 007: Updated guide for IXP400 Software Version 2.0.
1 Introduction

This chapter contains important information to help you learn about and use the Intel® IXP400 Software v2.0 release.

1.1 Versions Supported by this Document

This programmer's guide is intended to be used in conjunction with software release 2.0. Always refer to the accompanying release notes for the latest information regarding the proper documentation sources to be used.
1.4 How to Use this Document

This programmer's guide is organized as follows:

Chapters 1 and 2: Introduce the Intel® IXP400 Software v2.0 and the supported processors, including an overview of the software architecture.
Chapters 4 through 22: Provide functional descriptions of the various access-layer components.
The IXP4XX product line and IXC1100 control plane processors have a unique distributed processing architecture that features the performance of the Intel XScale® Core and up to three Network Processor Engines (NPEs). The combination of the four high-performance processors provides tremendous processing power and enables wire-speed performance at both the LAN and WAN ports.
Document Title / Document #:

• IEEE Standard for a Precision Clock Synchronization Protocol for Networked Measurement and Control Systems (IEEE Std. 1588™ - 2002)

1.7

• ARM Ltd., AMBA Specification, Rev. 2.0, May 1999
• http://www.pcisig.com/reflector/msg01668.html, a discussion on a PCI bridge between little- and big-endian devices.
Acronyms (continued):

CPU: Central Processing Unit
CRC: Cyclic Redundancy Check
CSR: Customer Software Release
CTR: Counter Mode
DDR: Double Data Rate
DES: Data Encryption Standard
DMT: Discrete Multi-Tone
DOI: Domain of Interpretation
DSL: Digital Subscriber Line
DSP: Digital Signal Processor
E: Empty
E1: Euro 1 trunk line (2.
HSS: High Speed Serial
HSSI: High Speed Serial Interface
HW: Hardware
IAD: Integrated Access Device
ICV: Integrity Check Value
IKE: Internet Key Exchange
IMA: Inverse Multiplexing over ATM
IP: Internet Protocol
IPsec: Internet Protocol Security
IRQ: Interrupt Request
ISA: Industry Standard Architecture
ISR: Interrupt Service Routine
ISR: Interrupt Sub-Routine
IV: Initialization Vector
IX_OSAL_MBUF: BSD 4.
MSB: Most Significant Bit
MVIP: Multi-Vendor Integration Protocol
MxU: Multi-dwelling Unit
NAK: Not-Acknowledge Packet
NAPT: Network Address Port Translation
NAT: Network Address Translation
NE: Nearly Empty
NF: Nearly Full
NOTE: Not Empty
NOTF: Not Full
NOTNE: Not Nearly Empty
NOTNF: Not Nearly Full
NPE: Network Processing Engine
OC3: Optical Carrier - 3
OF: Overflow
OFB: Output FeedBack
OS: Operating System
OSAL: Operating System
SIP: Session Initiation Protocol
SNMP: Simple Network Management Protocol
SOF: Start of Frame
SPHY: Single PHY
SSL: Secure Socket Layer
SSP: Synchronous Serial Port
SVC: Switched Virtual Connection
SWCP: Switching Coprocessor
TCD: Target Controller Driver
TCI: Transmission Control Interface
TCP: Transmission Control Protocol
TDM: Time Division Multiplexing
TLB: Translation Lookaside Buffer
TLS: Transport Level Security
T
2 Software Architecture Overview

2.1 High-Level Overview

The primary design principle of the Intel® IXP400 Software v2.0 architecture is to enable the supported processors' hardware in a manner that allows maximum flexibility. Intel® IXP400 Software v2.0 consists of a collection of software components specific to the IXP4XX product line and IXC1100 control plane processors and their supported development and reference boards.
Figure 1. Intel® IXP400 Software v2.0 Architecture Block Diagram

[Block diagram: on the Intel XScale® core, the customer application and codelets run over the operating system, board support package, and OSAL/OSSL layers; drivers (Ethernet, ATM, ADSL, HSS, Crypto, DMA, TimeSync, Perf Prof, Parity) sit over the access-layer components (IxAtmdAcc, IxCryptoAcc, IxDmaAcc, IxEthAcc, IxHssAcc, IxQmgr, IxNpeDl, IxNpeMh, IxParityENAcc, IxTimeSyncAcc, IxSspAcc), which drive the NPEs.]
2.3 Operating System Support

The Intel XScale® microarchitecture offers a broad range of tools together with support for two widely adopted operating systems. The software release 2.0 supports VxWorks* and the standard Linux* 2.4 kernel. MontaVista* Software provides the Linux support. Support for other operating systems may be available. For further information, visit the following Internet site: http://developer.intel.
2.6 Release Directory Structure

The software release 2.0 includes the following directory structure:

\---ixp_osal
    +---doc (API References in HTML and PDF format)
    +---include
    +---os
    +---src
\---ixp400_xscale_sw
    +---buildUtils (setting environment vars.
    +---cryptoAcc (for crypto version only)
    +---dmaAcc
    +---ethAcc
    |   \---include
    +---ethDB
    |   \---include
    +---ethMii
    +---featureCtrl
    +---hssAcc
    |   \---include
    +---i2c
    +---include (header location for top-level public modules)
    +---npeDl
    |   \---include
    +---npeMh
    |   \---include
    +---osLinux (Linux specific operations for loading NPE microcode)
    +---osServices (v1.4 backwards compatibility)
    +---ossl (v1.
2.7 Threading and Locking Policy

The software release 2.0 access-layer does not implement processes or threads. The architecture assumes execution within a preemptive multi-tasking environment with the existence of multiple client threads and uses common, real-time OS functions — such as semaphores, task locking, and interrupt control — to protect critical data and procedure sequencing.
2.10 Global Dependency Chart

Figure 2 shows the interdependencies for the major APIs discussed in this document.

Figure 2. Global Dependencies

[Dependency chart: EthAcc, DmaAcc, HssAcc, EthDB, CryptoAcc, EthMii, Atmm, QMgr, AtmdAcc, NpeDl, NpeMh, AtmSch, SspAcc, FeatureCtrl, TimeSyncAcc, ParityENAcc, I2CDrv, Adsl, PerfProfAcc, Usb, and UartAcc, layered over IxOSAL.]
3 Buffer Management

This chapter describes the data buffer system used in Intel® IXP400 Software v2.0, and includes definitions of the IXP400 software internal memory buffers, cache management strategies, and other related information.

3.1 What's New

There are no changes or enhancements to this component in software release 2.0.

3.2 Overview

Buffer management is the general principle of how and where network data buffers are allocated and freed in the entire system.
Figure 3.

Figure 4.
3.3 IXP_BUF Structure

As shown in Figure 5, IXP_BUF is composed of the following three main structures, each comprising eight four-byte entries:

1. The first structure consists of eight word-length fields, some of which are shared between the OS driver / API users and the access-layer components.
2. The second structure consists of internal fields used by the pool manager, which is provided by the OSAL component.
3. The third structure (ix_ne) is shared with the NPEs and carries service-specific fields.
Figure 6.

Figure 7. API User Interface to IXP_BUF

[The API user (e.g. a driver) accesses the IX_MBUF fields (data, length, etc.) through the IX_OSAL_MBUF_XXX macros, which expose the same fields across all APIs; fields reserved for pool management and the service-specific ix_ne NPE shared structure are accessed through service-specific macros such as IX_ETHACC_NE_XXX (e.g. flags).]

Figure 8 shows a typical interface between the Intel® IXP400 Software access-layer components and the IXP_BUF fields.

Figure 9 below shows the interface between the OSAL pool management module and the pool management fields used for pool maintenance. The pool management field also stores the os_buf_ptr field, which is used by the access-layer to retrieve the original pointer to the OS buffer and is set at the time of pool allocation.

Figure 9.
Linux utilizes memory structures called skbuffs. The user allocates IXP_BUF and sets the data payload pointer to the skbuff payload pointer. An os_buf_ptr field inside the ixp_ctrl structure (defined below) of the IXP_BUF is used to save the actual skbuff pointer. In this manner, the OS buffers are not freed directly by the IXP400 software. The IXP400 software IXP_BUF to skbuff mapping is a ‘zero-copy’ implementation.
Figure 12. IXP_BUF: NPE Shared Structure

[ix_ne, the third structure of the IXP_BUF (the NPE shared structure): ixp_next, ixp_len, ixp_pkt_len, ixp_data, followed by NPE service-specific fields.]

Figure 13.
Table 1. Internal IX_MBUF Field Format (Sheet 2 of 2)

Byte offset 20: ix_rsvd
Byte offset 24: ix_pktlen
Byte offset 28: ix_priv (Reserved)

A set of macros is provided for the IXP400 software to access each of the fields in the buffer structure. Each macro takes a single parameter, a pointer to the buffer itself, and returns the value stored in the field. More detail on the fields, their usage, and the macros is given in the table below.
Table 2. IX_MBUF Field Details (Sheet 2 of 2)

IX_OSAL_MBUF_FLAGS (parameter type: IX_MBUF *; return type: unsigned char): Buffer flags. Used by the access-layer: yes, by some components.
(Reserved field): Used to preserve 32-bit word alignment. Used by the access-layer: no.
(Pool pointer field): 32-bit pointer to the parent pool of the buffer. Used by the access-layer: yes, by some components.
(Packet length field): Total length (octets) of the data sections of all buffers in a chain of buffers (packet).
Note that the M_BLK structure contains many fields that are not used by the IXP400 software. These fields are simply ignored and are not modified by the IXP400 software. M_BLK buffers support two levels of buffer chaining:

• buffer chaining — Each buffer can be chained together to form a packet. This is achieved using the IX_MBUF_NEXT_BUFFER_IN_PKT_PTR equivalent field in the M_BLK. This is supported and required by the IXP400 software.
It works on the following principles:

• Each IXP_BUF is mapped to an skbuff (1:1 mapping)
• The os_buf_ptr field of the ix_ctrl structure is used to store a pointer to the corresponding skbuff.
• The ix_data pointer field of the IX_MBUF structure within the IXP_BUF structure will be set to point to the data field of the corresponding skbuff through use of the IX_OSAL_MBUF_MDATA macro.
3.7 Caching Strategy

The general caching strategy in the IXP400 software architecture is that the software (including Intel XScale core-based code and NPE microcode) only concerns itself with the parts of a buffer which it modifies. For all other parts of the buffer, the user (higher-level software) is entirely responsible. IXP_BUF buffers typically contain a header section and a data section.
Tx Cache Flushing Example

In the case of an Ethernet bridging system, only the user can determine that it is not necessary to flush any part of the packet payload. In a routing environment, the stack can determine that only the beginning of the mbuf may need to be flushed (for example, if the TTL field of the IP header is changed). Additionally, with the VxWorks OS, mbufs can be from cached memory or uncached memory.
After the NPE modifies the memory, ensure that the Intel XScale core MMU cache is up-to-date by invalidating cached copies of any parts of the buffer memory that the Intel XScale core will need to read. It is more robust to invalidate before the NPE gets a chance to write to the SDRAM. OS-independent macros are provided for both flushing (IX_ACC_DATA_CACHE_FLUSH) and invalidating (IX_ACC_DATA_CACHE_INVALIDATE).
4 Access-Layer Components: ATM Driver Access (IxAtmdAcc) API

This chapter describes the Intel® IXP400 Software v2.0's “ATM Driver-Access” access-layer component.

4.1 What's New

There are no changes or enhancements to this component in software release 2.0.

4.2 Overview

The ATM access-driver component is the IxAtmdAcc software component and provides a unified interface to AAL transmit and receive hardware. The software release 2.0 supports AAL 5, AAL 0, and OAM.
• Supports AAL-0-52 PDU transmission service, which accepts PDUs containing an integral number of 52-byte cells for transmission on a particular port and VC. (PDUs may consist of single or chained IXP_BUFs.)
• Supports OAM PDU transmission service, which accepts PDUs containing an integral number of 52-byte OAM cells for transmission on a particular port independent of the VC. (PDUs may consist of single or chained IXP_BUFs.)
These statistics include the number of cells received, the number of cells received with an incorrect cell size, the number of cells containing parity errors, the number of cells containing HEC errors, and the number of idle cells received.
its own identifier known as a scheduler VcId. This callback also serves to allow the scheduling entity to acknowledge the presence of the VC.

• Function to submit a cell count to the scheduling entity on a per-VC basis. This function is used every time the user submits a new PDU for transmission.
• Function to clear the cell count related to a particular VC.
• Check for an ATM VC already in use in another Rx connection.
• Check if the service type is OAM and, if so, check that the VC is the dedicated OAM-VC.
• Register the callback by which received buffers get pushed into the client's protocol stack.
• Register the notification callback by which the hardware will ask for more available buffers.
• Allocate a connection ID and return it to the client.
— if the overall user application involves a port configured with a VC supporting a very different traffic rate. This tuning is at the client's discretion and, therefore, is beyond the scope of this document.

In the case of OAM, a PDU containing OAM cells for any port, VPI, or VCI must be submitted for transmission on the dedicated OAM-VC for that port.
this VC. In making this callback, ixAtmdAcc is also providing the AtmScheduler VC identifier that should be used when calling IxAtmdAcc for this VC.

4. The shaping entity acknowledges the validity of the VC, stores the IxAtmdAcc connection ID and issues a VcId to IxAtmdAcc.
5.
Figure 15.
Processing primarily involves handing back ownership of buffers to clients. The rate at which this is done must be sufficient to ensure that client-buffer starvation does not occur. The exact rate at which this must be done is implementation-dependent and not within the scope of this document.
Transmit Done — Based on Polling Mechanism

A polling mechanism can be used instead of the threshold service to trigger the recycling of the transmitted buffers, as shown in Figure 17.

Figure 17.
Figure 18. Tx Disconnect

[Sequence between the Tx control client, data client, and AtmdAcc: 1: ixAtmdAccTxDisconnect(); 2: IX_ATMDACC_RESOURCES_STILL_ALLOCATED; 3: hwSend(); 4: ixAtmdAccBufferReturnCB(userId, mbuf); 5: ixAtmdAccTxDisconnect(); 6: IX_SUCCESS]

1. The data client sends the last PDUs and the control client wants to disconnect the VC. IxAtmdAccTxVcDisconnect() invalidates further attempts to transmit more PDUs.
In order to receive a PDU, the client layer must allocate IXP_BUFs and pass their ownership to the IxAtmdAcc component. This process is known as replenishment. Such buffers are filled out with cell payload. Complete PDUs are passed to the client. In the case of AAL 5, an indication about the validity of the PDU — and the validity of the AAL-5 CRC — is passed to the client.
Figure 19. Rx Using a Threshold Level

[Sequence between the data client, Rx control client, and AtmdAcc: 1: ixAtmdAccRxCallbackRegister(stream, mbufThreshold, callback); 2: hwReceive(); 3: ixAtmdAccRxDispatch(stream); 4: rxCallback(userId, IX_VALID_PDU, mbuf)]

1. A control client wants to use the threshold services to process the received PDUs. The ixAtmdAccRxThresholdSet() function is called to register a callback.
Received — Based on a Polling Mechanism

A polling mechanism can also be used to collect received buffers, as shown in Figure 20.

Figure 20. RX Using a Polling Mechanism

[Sequence between the data client, Rx control client, and AtmdAcc: 1: hwReceive(); 2: ixAtmdAccRxLevelQuery(stream); 3: mbufLevel; 4: ixAtmdAccRxDispatch(stream, numMbuf); 5: rxCallBack(userId, IX_VALID_PDU, mbuf); 6: mbufProcessed]

1.
Figure 21. Rx Disconnect

[Sequence between the data client, control client, and AtmdAcc: 1: ixAtmdAccRxDisconnect(); 2: IX_ATMDACC_RESOURCES_STILL_ALLOCATED; 3: rxCallback(userId, IX_BUFFER_RETURN, mbuf); 4: ixAtmdAccRxDisconnect(); 5: IX_SUCCESS]

1, 2. The control client wants to disconnect the VC.
The IXP_BUF fields required for transmission are described in Table 5. These fields will not be changed during the Tx process.

Table 5. IXP_BUF Fields Required for Transmission

ix_next: Required when IXP_BUFs are chained to build a PDU. In the last IXP_BUF of a PDU, this field value has to be 0.
ix_nextpkt: Not used.
ix_data: Required. This field should point to the part of PDU data.
ix_len: Required.
Table 7. IXP_BUF Fields Modified During Reception (Sheet 2 of 2)

ix_flags: Not used.
ix_reserved: Not used.
pkt.rcvif: Not used.
pkt.len: Not used.

4.5.4.3 Buffer-Size Constraints

Any IXP_BUF size can be transmitted, but a full PDU must be a multiple of a cell size (48/52 bytes, depending on AAL type). Similarly, the system can receive and chain IXP_BUFs that are a multiple of a cell size.
4.5.5.2 Real-Time Errors

Errors may occur during real-time traffic. Table 8 shows the different possible errors and the way to resolve them.

Table 8. Real-Time Errors

Cause: Rx-free queue underflow.
Consequences and side effects: The system is not able to store the inbound traffic, which gets dropped.
5 Access-Layer Components: ATM Manager (IxAtmm) API

This chapter describes the Intel® IXP400 Software v2.0's “ATM Manager API” access-layer component. IxAtmm is an example IXP400 software component. The phrase “Atmm” stands for “ATM Management.” The chapter describes the details of IxAtmm.

5.1 What's New

There are no changes or enhancements to this component in software release 2.0.
IxAtmm assumes that the client will supply initial upstream port rates once the capacity of each port is established.

• Ensuring traffic shaping is performed for each registered port. IxAtmm acts as transmission control for a port by ensuring that cell demand is communicated from IxAtmdAcc to the IxAtmSch ATM Scheduler and that cell transmission schedules produced by IxAtmSch are supplied at a sufficient rate to the IxAtmdAcc component.
5.5 ATM-Port Management Service Model

IxAtmm can be considered an “ATM-port management authority.” It does not directly perform data movement, although it does control the ordering of cell transmission through the supply of ATM cell-scheduling information to the lower levels.
Figure 22. Services Provided by IxAtmm

[Diagram: on the IXP4XX/IXC1100, system initialization (1) and the ATM clients (2.*, 3.*.*) interact with the Atmm and IxAtmSch components, which drive the UTOPIA-2 interface and the ATM ports.]

Figure 22 shows the main services provided by the IxAtmm component.
Further calls to IxAtmdAcc must be made by the client following registration with IxAtmm to fully enable data traffic on a VC. IxAtmm does not support the registration of Virtual Path Connections (VPCs). Registration and traffic shaping are performed by IxAtmm and IxAtmSch on the VC/VCC level only.
information to the IxAtmdAcc component, as required to drive the transmit function. As a result, all data buffers in the system — once configured — will pass directly through IxAtmdAcc to the appropriate clients. No data traffic will pass through the IxAtmm component at any stage.

Figure 23.
5.7 Dependencies

Figure 24. Component Dependencies of IxAtmm

[IxAtmm depends on IxAtmSch and IxAtmdAcc.]

IxAtmm configures the IXP4XX product line and IXC1100 control plane processors' UTOPIA Level-2 device through an interface provided by the IxAtmdAcc component.
5.11 Performance

IxAtmm does not operate on the data path of the IXP4XX product line and IXC1100 control plane processors. Because it is primarily concerned with registration and deregistration of port and VC data, IxAtmm is typically executed during system initialization.
6 Access-Layer Components: ATM Transmit Scheduler (IxAtmSch) API

This chapter describes the Intel® IXP400 Software v2.0's “ATM Transmit Scheduler” (IxAtmSch) access-layer component.

6.1 What's New

There are no changes or enhancements to this component in software release 2.0.

6.2 Overview

IxAtmSch is an “example” software release 2.0 component, an ATM scheduler component supporting ATM transmit services on IXP4XX product line and IXC1100 control plane processors.
• Schedule table to the ATM transmit function that will contain information for ATM cell scheduling and shaping

IxAtmSch implements a fully operational ATM traffic scheduler for use in the processor's ATM software stack. It is possible (within the complete IXP400 software architecture) to replace this scheduler with one of a different design.
6.4 Connection Admission Control (CAC) Function
IxAtmSch makes outbound virtual connection admission decisions based on a simple ATM port reference model. Only one parameter is needed to establish the model: the outbound (upstream) port rate R, in terms of 53-byte ATM cells per second. IxAtmSch assumes that the “real-world” ATM port is a continuous pipe that draws ATM cells at the constant cell rate.
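As a minimal illustration of this reference model, the sketch below admits a new VC only while the sum of committed cell rates fits within R. The names and the peak-rate-only policy are simplifications for illustration, not the actual IxAtmSch implementation (which also accounts for service category and burst parameters).

```c
#include <stdbool.h>

/* Toy connection admission check. The port is modeled as a constant-rate
 * pipe of R cells per second (53-byte ATM cells). */
typedef struct {
    unsigned long portRate;       /* R, in cells per second */
    unsigned long committedRate;  /* sum of peak cell rates of admitted VCs */
} AtmPortModel;

/* Admit the VC only if its peak cell rate still fits in the pipe. */
static bool cacAdmitVc(AtmPortModel *port, unsigned long peakCellRate)
{
    if (port->committedRate + peakCellRate > port->portRate)
        return false;                    /* would oversubscribe the port */
    port->committedRate += peakCellRate; /* reserve capacity for this VC */
    return true;
}
```

A VC whose peak rate would push the committed total past R is simply rejected; the real CAC applies the same principle per service category.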
Intel® IXP400 Software Access-Layer Components: ATM Transmit Scheduler (IxAtmSch) API 6.5 Scheduling and Traffic Shaping Figure 25. Multiple VCs for Each Port, Multiplexed onto Single Line by the ATM Scheduler VCs submit demand for transmit of ATM cells. VC 1 Port 1 VC 2 Port 2 IxAtmSch component determines when to schedule each cell on the physical port. Cells are queued for transmission on each port based on this schedule table, such that all traffic contracts are fulfilled.
Intel® IXP400 Software Access-Layer Components: ATM Transmit Scheduler (IxAtmSch) API The schedule table is composed of an array of table entries, each of which specifies a VC ID and a number of cells to transmit from that VC. The scheduler explicitly inserts idle cells into the table, where necessary, to fulfill the traffic contract of the VCs registered in the system. Idle cells are inserted in the table with the VC identifier set to 0. The exact format of the schedule table is defined in IxAtmTypes.h.
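A simplified mock of such a table entry is shown below. The real layout is defined in IxAtmTypes.h; the structure and field names here are illustrative only.

```c
/* Mock schedule-table entry: a VC ID plus a cell count. VC ID 0 marks idle
 * cells that the scheduler inserts to preserve the traffic contracts of the
 * registered VCs. */
typedef struct {
    unsigned vcId;      /* 0 = idle cell(s), otherwise a registered VC */
    unsigned numCells;  /* cells to transmit from (or idle for) this entry */
} MockScheduleEntry;

/* Count how many cell slots in a table are idle filler. */
static unsigned idleCellCount(const MockScheduleEntry *tbl, unsigned nEntries)
{
    unsigned idle = 0;
    for (unsigned i = 0; i < nEntries; i++)
        if (tbl[i].vcId == 0)
            idle += tbl[i].numCells;
    return idle;
}
```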
Intel® IXP400 Software Access-Layer Components: ATM Transmit Scheduler (IxAtmSch) API The client calls the VC queue update interface whenever the user of the VC submits cells for transmission. The structure of the VC queue update interface is compatible with the requirements of the IxAtmdAcc component. The client calls the schedule-table-update interface whenever it needs a new table. Internally, IxAtmSch maintains a transmit queue for each VC.
Intel® IXP400 Software Access-Layer Components: ATM Transmit Scheduler (IxAtmSch) API Some function interfaces supplied by the IXP400 software component adhere to structure requirements specified by the IxAtmdAcc component. However, no explicit dependency exists between the IxAtmSch component and the IxAtmdAcc component. 6.7 Error Handling IxAtmSch returns an error type to the user when the client is expected to handle the error.
6.9.1 Latency
The transmit latency introduced by the IxAtmSch component into the overall transmit path of the processor is zero under normal operating conditions: when traffic is queued for transmission, scheduling is performed in advance of the cell slots on the physical line becoming available to transmit the queued cells.
Intel® IXP400 Software Access-Layer Components: Security (IxCryptoAcc) API 7 This chapter describes the Intel® IXP400 Software v2.0 “Security API” (IxCryptoAcc) access-layer component. The Security Hardware Accelerator access component (IxCryptoAcc) provides support for the authentication and encryption/decryption services needed in cryptographic applications, such as IPSec authentication and encryption services, SSL, or WEP.
Intel® IXP400 Software Access-Layer Components: Security (IxCryptoAcc) API — ECB — CBC — CTR (for AES algorithm only) — Single-Pass AES-CCM encryption and security for 802.11i. • Authentication algorithms: — HMAC-SHA1 (512-bit data block size, from 20-byte to 64-byte key size) — HMAC-MD5 (512-bit data block size, from 16-byte to 64-byte key size) — SHA1/MD5 (basic hashing functionality) — WEP ICV generation and verification using the 802.11 WEP standard 32-bit CRC polynomial.
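The WEP ICV named above is the standard 32-bit CRC used by 802.11 (the same polynomial as the IEEE 802.3 FCS). A self-contained bitwise sketch, for clarity rather than speed:

```c
#include <stdint.h>
#include <stddef.h>

/* WEP ICV sketch: standard CRC-32 over the plaintext frame body.
 * Bitwise form of the reflected 0x04C11DB7 polynomial (0xEDB88320);
 * table-driven implementations are faster but equivalent. */
static uint32_t wepIcv(const uint8_t *data, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;
    for (size_t i = 0; i < len; i++) {
        crc ^= data[i];
        for (int b = 0; b < 8; b++)
            crc = (crc >> 1) ^ (0xEDB88320u & (0u - (crc & 1u)));
    }
    return ~crc;   /* final XOR; carried little-endian in the frame */
}
```

In WEP the ICV is appended to the frame body and the body-plus-ICV is then ARC4-encrypted; on receive, the decrypted ICV is recomputed and compared.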
The Intel XScale core WEP Engine is a software-based “engine” for performing ARC4 and WEP ICV calculations used by WEP clients. While this differs from the model of NPE-based hardware acceleration typically found in the IXP400 software, it provides additional design flexibility for products that require NPE A to perform non-crypto operations.
Figure 27.
Intel® IXP400 Software Access-Layer Components: Security (IxCryptoAcc) API The context-registration process creates the structures within the CCD, but the crypto context for each connection must be previously defined in an IxCryptoAccCtx structure. The IxCryptoAccCtx structure contains the following information: • The type of operation for this context. For example, encrypt, decrypt, authenticate, encrypt and authenticate, etc.
Figure 28. IxCryptoAcc API Call Process Flow for CCD Updates 1. IxNpeDlNpeInitAndStart (ImageID) IPSec or WEP Client 2. IxCryptoAccConfig () 8. IxCryptoRegisterCompleteCallback (cryptoContextId, mBuf *, IxCryptoAccStatus) 3. IxCryptoAccInit () create IxCryptoAccCtx, create mBufs 4.
8. IxCryptoAcc returns a context ID to the client application upon successful context registration and calls the Register Complete callback function.
7.3.4 Buffer and Queue Management
The IX_OSAL_MBUF is the buffer format used between the IxCryptoAcc access component and the client. All buffers used between the IxCryptoAcc access component and clients are allocated and freed by the clients.
Table 11. IxCryptoAcc Data Memory Usage (Sheet 2 of 2)
• Number of crypto contexts (IX_CRYPTO_ACC_MAX_ACTIVE_SA_TUNNELS): 1,000
• Total memory allocated for crypto contexts: 152 * 1,000 = 152,000 bytes
• Size of KeyCryptoParam structures: 256 bytes
• Total memory allocated for KeyCryptoParam structures
• Total memory allocated by IxCryptoAcc
7.3.
Intel® IXP400 Software Access-Layer Components: Security (IxCryptoAcc) API • IxCryptoAcc depends on the IxQMgr component to configure and use the hardware queues to access the NPE. • OS Abstraction Layer access-component is used for error handling and reporting, IX_OSAL_MBUF handling, endianness handling, mutex handling, and for memory allocation.
Intel® IXP400 Software Access-Layer Components: Security (IxCryptoAcc) API ixCryptoAccCtxCipherKeyUpdate() This function is called to change the key value of a previously registered context. Key change for a registered context is only supported for CCM cipher mode. This is done in order to quickly change keys for CCM mode, without going through the process of context deregistration and registration. Changes to the key lengths are not allowed for a registered context.
Intel® IXP400 Software Access-Layer Components: Security (IxCryptoAcc) API Figure 30. IxCryptoAcc, NPE and IPSec Stack Scope Policy Database SA Database Management Crypto Context Database Original IP Packet IPSec'd Packet Policy Lookup SA Lookup Packet Processing Cryptographic Protection IP Fragmentation Hardware Accelerator (NPE) Scope Client IPSec’s scope B2313-02 The IPSec protocol stack provides security for the transported packets by encrypting and authenticating the IP payload.
Intel® IXP400 Software Access-Layer Components: Security (IxCryptoAcc) API Figure 31. Relationship Between IPSec Protocol and Algorithms ESP AH Encryption Algorithm Authentication Algorithm B2307-02 7.4.2 IPSec Packet Formats IPSec standards have defined packet formats. The authentication header (AH) provides data integrity and the encapsulating security payload (ESP) provides confidentiality and data integrity. In conjunction with SHA1 and MD5 algorithms, both AH and ESP provide data integrity.
In AH mode, the ICV value is part of the authentication header. AH is embedded in the data to be protected, so AH itself is included in the ICV calculation; the authentication data field (ICV value) must therefore be cleared before executing the ICV calculation.
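A toy illustration of this zero-before-hash rule follows. A trivial stand-in checksum replaces HMAC-SHA1/MD5, and the offset and length constants are hypothetical, chosen only to make the point that the ICV field is zeroed before the digest covers the whole packet.

```c
#include <stdint.h>
#include <string.h>
#include <stddef.h>

#define ICV_OFFSET 12   /* hypothetical offset of the auth-data field */
#define ICV_LEN    12   /* HMAC-SHA1-96 truncated ICV length */

/* Stand-in for the real keyed hash, just to make the sketch runnable. */
static uint32_t toyChecksum(const uint8_t *p, size_t n)
{
    uint32_t sum = 0;
    while (n--) sum = sum * 31u + *p++;
    return sum;
}

static uint32_t ahComputeIcv(uint8_t *pkt, size_t len)
{
    memset(pkt + ICV_OFFSET, 0, ICV_LEN);  /* clear auth-data field first */
    return toyChecksum(pkt, len);          /* then digest the whole packet */
}
```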
Intel® IXP400 Software Access-Layer Components: Security (IxCryptoAcc) API Figure 34. ESP Data Flow Plain text Application IPSec Client ESP Header Plain Text ESP Trailer ESP Header Cipher Text ESP Trailer ESP Auth ESP Trailer ESP Header Cipher Text ESP Trailer ESP Auth ESP Trailer ESP Header Cipher Text ESP Trailer ESP Auth Encrypt & Authenticate Req (SA_ID, ...) Access Component / Intel XScale® Core ESP Header Plain Text Encrypt & Authenticate Req (SA_ID, ...
Intel® IXP400 Software Access-Layer Components: Security (IxCryptoAcc) API Figure 35. AH Data Flow IP Header Application IPSec Client IP Header Note : IP mutable fields are handled by IPSec client Authenticate Req (SA_ID, ...) Access Component / Intel XScale ® Core IP Header AH AH payload payload IP Header AH Auth Data payload payload IP Header AH Auth Data payload Authenticate Req (SA_ID, ...
Figure 36. IPSec API Call Flow 1. (...NPE init, CryptoAccConfig, CryptoAccInit, CryptoAccCtxRegister, etc...) create data mBufs, IV IPSec Client 6. IxCryptoPerformCompleteCallback (cryptoContextId, mBuf *, IxCryptoAccStatus) 2.
4. The NPE reads the descriptor on the Crypto Ready Queue and performs the encryption/decryption/authentication operations, as defined in the CCD for the submitted crypto context. The NPE inserts the Integrity Check Value (ICV) for a forward-authentication operation and verifies the ICV for a reverse-authentication operation. 5. The NPE writes the resulting data to the destination IX_OSAL_MBUF in SDRAM.
2. Use AES-CTR mode to encrypt the payload with counter values 1, 2, 3, …
3. Use AES-CTR mode to encrypt the MIC with counter value 0 (the first key stream, S0, from the AES-CTR operation)
Figure 37. CCM Operation Flow (AES-CBC chaining over blocks B0, B1, …, Br covering the header and padded payload produces the MIC; AES-CTR counter blocks A1, … produce key streams S1, …, Sm)
2. Register another crypto context for AES-CTR encryption (cipher context). A crypto context ID (B) will also be obtained in this operation. This crypto context is used for payload and MIC encryption only. 3. After registration of both crypto contexts is complete, call the crypto perform API using context ID A. The IV for this packet is inserted as the first block of the message in the packet.
Intel® IXP400 Software Access-Layer Components: Security (IxCryptoAcc) API AES-CBC operation into the packet, between header and payload. The payload needs to be moved in order to hold MIC in the packet. An efficient method of doing this could be to split the header and payload into two different IX_MBUFs. Then the MIC can be inserted after the header into the header IX_MBUF for the AES CTR encryption operation. 7.4.
Figure 41. WEP Frame with Request Parameters (mData ptr and icvptr pointers; FrameHeader, IVHeader, FrameBody, ICV, and FCS fields; startOffset, dataLen, and icvOffset parameters)
• *pSrcMbuf — a pointer to the IX_MBUF that contains the data to be processed. This IX_MBUF structure is allocated by the client.
Intel® IXP400 Software Access-Layer Components: Security (IxCryptoAcc) API These acceleration components provide the following services to IxCryptoAcc: • ARC4 (Alleged RC4) encryption / decryption • WEP ICV generation and verification The API provides two functions for performing WEP operations. ixCryptoAccXScaleWepPerform() is used to submit data for WEP services using the Intel XScale core-based WEP engine.
Figure 42. WEP Perform API Call Flow 1. (...NPE init, CryptoAccConfig, CryptoAccInit, CryptoAccCtxRegister, etc...) create data mBufs, IV WEP Client 6. ixCryptoPerformCompleteCallback (cryptoContextId, mBuf *, IxCryptoAccStatus) 2. ixCryptoAcc*WepPerform (cryptoCtxId, *pSrcMbuf, *pDestMbuf, startOffset, dataLen, icvOffset, *pKey) IxCryptoAcc SDRAM WEP Complete Queue WEP Request Queue IxQMgr / AQM 3. 5. 4.
4. The NPE reads the descriptor on the Crypto Request Queue and performs the encryption/decryption/authentication operations, as defined in the CCD for the submitted crypto context. The NPE also inserts or verifies the WEP ICV integrity check value. 5. The NPE writes the resulting data to the destination IX_MBUF in SDRAM.
The ixCryptoAccAuthCryptPerform() functionality described in “IPSec Services” on page 96 offers the capability to perform encrypt/decrypt and authentication calculations in one submission, for IPSec-style clients only. This “single-pass” method does not work for SSL and TLS clients. SSL and TLS clients must register two contexts: one for encryption/decryption only and the other for authentication generation/verification. 7.
SSL client applications can make use of the ARC4 processing features by registering an encryption-only or decryption-only crypto context and calling the IxCryptoAccXScaleWepPerform() or IxCryptoAccNpeWepPerform() functions. SSL clients should supply a full 128-bit key to the API.
7.7.2 Cipher Modes
There are four cipher modes supported by the NPE:
• ECB
• CBC
• CTR (AES only)
• CCM (AES only)
7.7.2.
The hardware accelerator component provides an interface for performing a single-pass CCMP MIC computation and verification with CTR-mode encryption/decryption. Note: The implementation of AES-CCM mode in IxCryptoAcc is designed to support 802.11i-type applications specifically. As noted below, the API expects a 48-byte Initialization Vector and an 8-byte MIC value. These values correspond with an 802.11i AES-CCM implementation.
Intel® IXP400 Software Access-Layer Components: DMA Access Driver (IxDmaAcc) API 8 This chapter describes the Intel® IXP400 Software v2.0 “DMA Access Driver” access-layer component. 8.1 What’s New There are no changes or enhancements to this component in software release 2.0. 8.2 Overview IxDmaAcc provides DMA capability, offloading large data transfers between peripherals in the IXP4XX product line and IXC1100 control plane processors’ memory map from the Intel XScale core.
• IxDmaAcc has no knowledge of the devices involved in the DMA transfer. The client is responsible for ensuring the devices are initialized and configured correctly before requesting a DMA transfer.
8.5 Dependencies
Figure 43 shows the functional dependencies of the IxDmaAcc component.
Intel® IXP400 Software Access-Layer Components: DMA Access Driver (IxDmaAcc) API Figure 44.
Intel® IXP400 Software Access-Layer Components: DMA Access Driver (IxDmaAcc) API The ixDmaAcc component consists of three APIs: • PUBLIC IX_STATUS ixDmaAccInit (IxNpeDlNpeId npeId) This function initializes the DMA Access component internals.
8.7.1 Source Address
The source address is a valid IXP4XX product line and IXC1100 control plane processors memory-map address that points to the first word of the data to be read. The client is responsible for checking the validity of the source address, because the access layer and NPE have no information on the IXP4XX product line and IXC1100 control plane processors’ memory map. 8.7.
Intel® IXP400 Software Access-Layer Components: DMA Access Driver (IxDmaAcc) API 8.7.5 Addressing Modes Addressing mode describes the types of source and destination addresses to be accessed. Two addressing modes are supported: • Incremental Address — Address increments after each access, and is normally used to address a contiguous block of memory (i.e., SDRAM). • Fixed Address — Address remains the same for all access, and is normally used to operate on FIFO-like devices (i.e., UART). 8.7.
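The two addressing modes can be sketched as a word-copy loop. This is illustrative only, not the IxDmaAcc API: an incremental address walks through a contiguous block (such as SDRAM), while a fixed address repeatedly accesses the same location (such as a UART FIFO register).

```c
#include <stdint.h>
#include <stddef.h>

typedef enum { ADDR_INCREMENT, ADDR_FIXED } AddrMode;

/* Copy nWords 32-bit words, honoring each side's addressing mode. */
static void dmaCopyWords(volatile uint32_t *dst, AddrMode dstMode,
                         const volatile uint32_t *src, AddrMode srcMode,
                         size_t nWords)
{
    for (size_t i = 0; i < nWords; i++) {
        *dst = *src;
        if (srcMode == ADDR_INCREMENT) src++;  /* walk through the block */
        if (dstMode == ADDR_INCREMENT) dst++;  /* fixed: stay on the FIFO */
    }
}
```

With a fixed destination, every word lands on the same address, which is exactly the behavior wanted when feeding a FIFO-style device.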
Intel® IXP400 Software Access-Layer Components: DMA Access Driver (IxDmaAcc) API 8.7.7 Supported Modes This section summarizes the transfer modes supported by the IxDmaAcc. Some of the supported modes have restrictions. For details on restrictions, see “Restrictions of the DMA Transfer” on page 127. Table 14.
Intel® IXP400 Software Access-Layer Components: DMA Access Driver (IxDmaAcc) API Table 15.
Intel® IXP400 Software Access-Layer Components: DMA Access Driver (IxDmaAcc) API Table 16. DMA Modes Supported for Addressing Mode of Fixed Source Address and Incremental Destination Address 8.
Intel® IXP400 Software Access-Layer Components: DMA Access Driver (IxDmaAcc) API Upon completion of the DMA transfer, the NPE writes a message to the AQM-done queue. The AQM dispatcher then calls the ixDmaAcc callback and the access layer calls the client callback. Figure 45 shows the overall flow of the DMA transfer operation between the client, the access layer, and the NPE. Figure 45. IxDmaAcc Control Flow 1. ixDmaAccInit 2. ixDmaAccDmaTransfer 3. ixQMgrQWrite 4. IX_STATUS 7. ixDmaAccDoneCallback 8.
Figure 46. IxDmaAcc Initialization Client IxDmaAcc IxQMgr ixOsal IxDmaAccDescriptorManager 1. ixDmaAccInit 2. Init Config 3. ixDmaAccDescriptorPoolInit IX_SUCCESS 4. ixQMgrConfig 5. IxQMgrNotificationCallbackSet 6. ixOsalMutexInit IX_DMA_SUCCESS 1. Client calls ixDmaAccInit to initialize the IxDmaAcc component with an NPE ID as a parameter.
Intel® IXP400 Software Access-Layer Components: DMA Access Driver (IxDmaAcc) API Figure 47. DMA Transfer Operation Client IxDmaAcc IxQMgr ixOsal IxDmaAccDesMgr* NPE HWAcc 0. Init and config 1. ixDmaAccDmaTransfer 2. ixDmaAccValidateParams 3.1 ixOsalMutexLock 3.2 ixDmaAccDescriptorGet 3.3 ixOsalMutexUnlock IX_STATUS 4. ixQMgrQWrite IX_DMA_SUCCESS 5.1 ReadRequestQueue 5.2 WriteDoneQueue 6. ixQMgrCallback 7. ixQMgrQRead 8.1 ixOsalMutexLock 8.2 ixDmaAccDescriptorFree 8.
Intel® IXP400 Software Access-Layer Components: DMA Access Driver (IxDmaAcc) API 8. The descriptor pool needs to be guarded by mutual exclusion because there are two contexts that access the pool descriptor buffer (see Step 3). 9. IxDmaAccCallback frees the descriptor. The descriptor pool needs to be guarded by mutual exclusion (see Step 3). 10. IxDmaAccCallback calls client registered callback. 11. Client releases the resources allocated in Step 0. 8.
• Burst mode is not supported for DMA targets on the AHB South Bus, due to a hardware restriction. Therefore, all DMA transactions originating from or destined for south AHB bus peripherals are carried out in single-transaction mode. • The DMA access component is fully tested on SDRAM and flash devices only.
Intel® IXP400 Software Access-Layer Components: Ethernet Access (IxEthAcc) API 9 This chapter describes the Intel® IXP400 Software v2.0’s “Ethernet Access API” access-layer component. 9.1 What’s New The following changes and enhancements were made to this component in software release 2.0: • The Ethernet subsystem has been enhanced to include support for the Intel® IXP46X Product Line of Network Processors. This includes supporting the MII interface attached to NPE-A.
Intel® IXP400 Software Access-Layer Components: Ethernet Access (IxEthAcc) API The data path for each of these devices is accessible via dedicated NPEs. One Ethernet MAC is provided on each NPE. The NPEs are connected to the North AHB for access to the SDRAM where frames are stored. The control access to the MAC registers is via the APB Bridge, which is memory-mapped to the Intel XScale core.
Intel® IXP400 Software Access-Layer Components: Ethernet Access (IxEthAcc) API 9.3.2 Queue Manager The AHB Queue Manager is a hardware block that communicates buffer pointers between the NPE cores and the Intel XScale core. The IxQMgr API provides the queuing services to the access-layer and other upper level software executing on the Intel XScale core.
9.4 Ethernet Access Layers: Component Features
The Ethernet access component features may be divided into three areas:
• Data Path — Responsible for the transmission and reception of IEEE 802.3 Ethernet frames. The Data Path is performed by IxEthAcc.
• Control Path — Responsible for the control of the MAC interface characteristics and some learning/filtering database functions.
Intel® IXP400 Software Access-Layer Components: Ethernet Access (IxEthAcc) API Figure 48. Ethernet Access Layers Block Diagram MAC Filtering/VLAN/Firewall, etc... Database Management (IxEthDb) PHY Management (IxEthMii) IxEthAcc Component IxQMgr Statistics Buffer Management IxNpeMh Data Path Tx/Rx IxOSAL ® Intel XScale Core Control Registers NPE Core Message Bus AHB Queue Manager 10/100Bt MAC NPEs ` Ethernet PHY Media Assist Physical Data Path System Config 9.
Intel® IXP400 Software Access-Layer Components: Ethernet Access (IxEthAcc) API 9.5.1 Port Initialization Prior to any operation being performed on a port, the appropriate microcode must be downloaded to the NPE using the IxNpeDl component. The IxEthAccPortInit() function initializes all internal data structures related to the port and checks that the port is present before initialization. The Port state remains disabled even after IxEthAccPortInit() has been called.
Intel® IXP400 Software Access-Layer Components: Ethernet Access (IxEthAcc) API 3. Register a callback function for the port. This function will be called when the transmission buffer is placed in the TxDone queue. 4. After configuring the port, the transmitting port must be enabled in order for traffic to flow. 5. Submit the frame, setting the appropriate priority. This places the IX_OSAL_MBUF on the transmit queue for that port. 6. IxEthAcc transmits the frame on the wire.
Intel® IXP400 Software Access-Layer Components: Ethernet Access (IxEthAcc) API Figure 50. Ethernet Transmit Frame Data Buffer Flow Codelet or client application 1. Initializations , Port Enables, Callback Registration... 8. TxDoneCallback (Port 0) TxDoneCallback (Port 1) TxDoneCallback (Port 2) 2. Frame Submit (Port 0) Frame Submit (Port 1) Frame Submit (Port 2) FIFO_PRIORITY FIFO_NO_PRIORITY IxEthAcc 3b. Load Tx Queues directly Sw queue for deferred submission 3a.
Intel® IXP400 Software Access-Layer Components: Ethernet Access (IxEthAcc) API Tx FIFO No Priority If the selected discipline is FIFO_NO_PRIORITY, then all frames may be directly submitted to the IxQMgr queue for that port if there is room on the port. Frames that cannot be queued in the IxQMgr queue are stored in an IxEthAcc software queue for deferred submission to the IxQMgr queue. The IxQMgr threshold in the configuration can be quite high.
Intel® IXP400 Software Access-Layer Components: Ethernet Access (IxEthAcc) API Figure 51. Ethernet Receive Frame API Overview 1. IxNpeDlNpeInitAndStart (ImageID) 8. free ixp_buf Rx Data Client 2. IxEthAccPortInit (portId) 3. IxEthAccPortRxDoneCallbackRegister (portID, callbackfn, callbacktag) 4. IxEthAccPortRxFreeReplenish (portID, ixp_buf *) 7. (* IxEthAccPortRxCallback) (callbacktag, ixp_buf *, portID) 5.
Intel® IXP400 Software Access-Layer Components: Ethernet Access (IxEthAcc) API 9.5.3.2 Receive Buffer Management and Priority The key interface from the NPEs to the receive data path (IxEthAcc) is a selection of queues residing in the queue manager hardware component. These queues are shown in Figure 52. Buffer Sizing The receive data plane subcomponent must provide receive buffers to the NPEs.
Rx FIFO No Priority
Received frames from all NPEs are multiplexed onto one queue manager queue. The IxEthAcc component de-multiplexes the received frames and calls the associated user-level callback.
Intel® IXP400 Software Access-Layer Components: Ethernet Access (IxEthAcc) API This is configured using the ixEthAccRxSchedulingDisciplineSet() function. Rx FIFO Priority (QoS Mode) IxEthAcc can support the ability to prioritize frames based upon 802.1Q VLAN data on the receive path. This feature requires a compatible NPE microcode image with VLAN/QoS support.
Intel® IXP400 Software Access-Layer Components: Ethernet Access (IxEthAcc) API Figure 52. Ethernet Receive Plane Data Buffer Flow Codelet or client application 1. Initializations , Callback Registration... 9. Client must free buffers, replenish PortRxFree queues 2. PortRxFreeReplenish (Port 0) PortRxFreeReplenish (Port 1) PortRxFreeReplenish (Port 2) 8. RxCallback (Port 0) RxCallback (Port 1) RxCallback (Port 2) 3. PortEnable (Port 0) PortEnable (Port 1) PortEnable (Port 2) IxEthAcc 7.
Intel® IXP400 Software Access-Layer Components: Ethernet Access (IxEthAcc) API IPv4 Payload Detection For every received frame delivered to the Intel XScale core, the NPE microcode reports whether the payload of the frame is an IPv4 packet by setting the ixp_ne_flags.ip_prot flag bit in the buffer header (as described in Table 22 on page 152). The NPE microcode examines the Length/Type field to determine whether the payload is IP. A value of 0x0800 indicates that the payload is IP.
Intel® IXP400 Software Access-Layer Components: Ethernet Access (IxEthAcc) API The relationship between IxEthAcc, IxEthDB, and IxEthMii is shown in Figure 53. Figure 53. IxEthAcc and Secondary Components Linux* Ethernet stack VxWorks* Ethernet stack Linux Eth driver (ixp425_eth.c) VxWorks END driver (IxEthAccEnd.
9.6.1 Ethernet MAC Control
The role and responsibility of this module is to enable clients to configure the Ethernet coprocessor MACs for both NPEs. This API permits the setting and retrieval of unicast and multicast addresses, duplex-mode configuration, FCS appending, frame padding, promiscuous-mode configuration, and reading or writing from the MII interface. 9.6.1.
Intel® IXP400 Software Access-Layer Components: Ethernet Access (IxEthAcc) API This feature is available on a per-port basis and should be set before a port is enabled. 9.6.1.5 MAC Filtering The MAC subcomponent within the Ethernet NPEs is capable of operation in either promiscuous or non-promiscuous mode. An API to control the operation of the MAC is provided. Warning: Always use the ixEthAcc APIs to Set and Clear Promiscuous Mode.
Intel® IXP400 Software Access-Layer Components: Ethernet Access (IxEthAcc) API 9.6.1.7 NPE Loopback Two functions are provided that enable or disable NPE-level Ethernet loopback for the NPE ports. This is useful for troubleshooting the data path. ixEthMiiPhyLoopbackEnable() configures the PHY to operate in loopback mode, while ixEthAccNpeLoopbackEnable() can be used to test the capability of the Ethernet MAC coprocessor to loopback traffic. 9.6.1.
Intel® IXP400 Software Access-Layer Components: Ethernet Access (IxEthAcc) API IX_OSAL_MBUFs The buffer descriptor format supported is the IX_OSAL_MBUF, which is defined in Chapter 3. The Ethernet NPE firmware expects that all such structures (i.e., IX_OSAL_MBUF structures) are aligned to 32-byte boundaries. The NPE is capable of handling chained IX_OSAL_MBUFs (i.e.
Intel® IXP400 Software Access-Layer Components: Ethernet Access (IxEthAcc) API Table 19. IX_OSAL_MBUF Header Definitions for the Ethernet Subsystem (Sheet 1 of 3) Queue Field Description Eth Rx Free Eth Rx Eth Tx ixp_ne_next Physical address of the next IX_OSAL_MBUF in a linked list (chain) of buffers. For the last IX_OSAL_MBUF in a chain (including the case of a single, unchained IX_OSAL_MBUF containing an entire frame), ixp_ne_next contains the value 0x00000000.
Intel® IXP400 Software Access-Layer Components: Ethernet Access (IxEthAcc) API Table 19. IX_OSAL_MBUF Header Definitions for the Ethernet Subsystem (Sheet 2 of 3) Queue Field Description Eth Rx Free Eth Rx Eth Tx ixp_ne_flags.new_src New source address flag. A value of 0 indicates that a matching entry for the frame's source MAC address exists in the filtering database; a value of 1 indicates that no matching entry could be found.
Intel® IXP400 Software Access-Layer Components: Ethernet Access (IxEthAcc) API Table 19. IX_OSAL_MBUF Header Definitions for the Ethernet Subsystem (Sheet 3 of 3) Queue Field Description Eth Rx Free Eth Rx Eth Tx ixp_ne_flags.vlan_en Transmit path VLAN functionality enable flag. A value of 0 indicates that all transmit path VLAN services, including VLAN ID-based filtering and VLAN ID-based tagging/untagging, should be disabled for the frame.
Table 21. IX_OSAL_MBUF “Port ID” Field Values
• NPE ID (bits 5..4) — Ethernet-capable NPE identifier, defined as follows: 0x0 - NPE A (on Intel® IXP46X product line processors only); 0x1 - NPE B; 0x2 - NPE C; 0x3 - Reserved
• PORT ID (bits 3..0) — Sequential MII port number within the range of supported MII ports for the specified NPE.
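Extracting the two subfields described in Table 21 is a simple matter of shifting and masking; the macro names below are illustrative, not part of the IxEthAcc API.

```c
#include <stdint.h>

/* Port ID byte layout per Table 21: bits 5..4 = NPE ID, bits 3..0 = port. */
#define PORTID_NPE(id)  (((id) >> 4) & 0x3)  /* 0=NPE A, 1=NPE B, 2=NPE C */
#define PORTID_PORT(id) ((id) & 0xF)         /* MII port within that NPE */
```

For example, a Port ID byte of 0x21 decodes as NPE C (0x2), MII port 1.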
Intel® IXP400 Software Access-Layer Components: Ethernet Access (IxEthAcc) API • IxEthAccMibIIStatsGet() — Returns the statistics maintained for a port • IxEthAccMibIIStatsGetClear() — Returns and clears the statistics maintained for a port • IxEthAccMibIIStatsClear() — Clears the statistics maintained for a port Table 23.
Table 24. Managed Objects for Ethernet Transmit
• dot3StatsSingleCollisionFrames — RFC 2665 definition
• dot3StatsMultipleCollisionFrames — RFC 2665 definition
• dot3StatsDeferredTransmissions — RFC 2665 definition. Note that this statistic will erroneously increment when 64-byte (or smaller) frames are transmitted.
Intel® IXP400 Software Access-Layer Components: Ethernet Database (IxEthDB) API 10 This chapter describes the Intel® IXP400 Software v2.0 “Ethernet Database API” access-layer component. 10.1 Overview To minimize the unnecessary forwarding of frames, an IEEE 802.1d-compliant bridge maintains a filtering database. IxEthDB provides MAC address-learning and filtering database functionality for the Ethernet NPE interfaces.
Intel® IXP400 Software Access-Layer Components: Ethernet Database (IxEthDB) API • 802.1p QoS • 802.3 / 802.11 frame conversion • Spanning Tree Protocol port settings IxEthDB also has several more generalized features that relate to the databases and the API itself: • Database management • Port Definitions • Feature Control 10.3.
Intel® IXP400 Software Access-Layer Components: Ethernet Database (IxEthDB) API Filtering can also be done according to some characteristics of a frame received on a port, such as frames exceeding a maximum frame size or frames that do not include VLAN tagging information. For example, EthDB provides a facility to set the maximum frame size that should be accepted for each NPE-based port. This means that if a port receives a frame that is larger than the maximum frame size, that frame will be filtered.
— Port 0 searches for the destination address (00:00:00:00:00:01) in its learning tree. The address is found, so Port 0 knows that Node 1 and Node 2 are connected on the same side of the network, and this network already has a frame forwarder (in this case, Hub A). The frame is therefore filtered (dropped) to prevent unnecessary propagation. 10.3.1.
Intel® IXP400 Software Access-Layer Components: Ethernet Database (IxEthDB) API Warning: The id value assigned to NPE ports in IxEthDbPortDefs.h may not be the same as the value used to identify ports in the IXP_BUF fields written by the NPEs, as documented in Table 21. The Ethernet device driver for the supported operating systems may enumerate the NPE ports differently as well.
Intel® IXP400 Software Access-Layer Components: Ethernet Database (IxEthDB) API address entries and can expire older entries as appropriate. This is tied into the database maintenance functionality, further documented in “Database Maintenance” on page 160. When a record age exceeds the IX_ETH_DB_LEARNING_ENTRY_AGE_TIME definition, the record will be removed at the next maintenance interval. IX_ETH_DB_LEARNING_ENTRY_AGE_TIME is 15 minutes by default, but may be changed as appropriate.
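The aging rule above can be sketched as a simple predicate. The constant mirrors the 15-minute default of IX_ETH_DB_LEARNING_ENTRY_AGE_TIME; the names used here are illustrative, not part of the IxEthDB API.

```c
#include <stdint.h>

/* Default entry age time: 15 minutes, expressed in seconds
 * (mirrors the IX_ETH_DB_LEARNING_ENTRY_AGE_TIME default). */
#define ENTRY_AGE_TIME_SECONDS (15u * 60u)

/* A learned record is removed at the next maintenance interval
 * once its age exceeds the configured entry age time. */
int record_expired(uint32_t age_seconds)
{
    return age_seconds > ENTRY_AGE_TIME_SECONDS;
}
```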
Intel® IXP400 Software Access-Layer Components: Ethernet Database (IxEthDB) API FCS, for example) that causes the frame to exceed the maximum frame size, the frame will not be transmitted. The TxLargeFramesDiscard counter will be incremented (see Chapter 9). The maximum supported value is 16,320 bytes.
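The size check described above can be sketched as follows; the 16,320-byte ceiling comes from the text, while the helper name and the clamping behavior are illustrative assumptions rather than the NPE's actual implementation.

```c
/* Maximum supported per-port frame size from the text. */
#define ETH_DB_MAX_FRAME_SIZE 16320u

/* A frame whose length exceeds the port's configured maximum is not
 * transmitted; the NPE increments TxLargeFramesDiscard instead. */
int frame_exceeds_limit(unsigned frame_len, unsigned port_limit)
{
    if (port_limit > ETH_DB_MAX_FRAME_SIZE)
        port_limit = ETH_DB_MAX_FRAME_SIZE;  /* assumed clamp */
    return frame_len > port_limit;
}
```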
Intel® IXP400 Software Access-Layer Components: Ethernet Database (IxEthDB) API • allow / white list state – only incoming packets with a source MAC address found in the firewall list are allowed • deny / black list state – all incoming packets are allowed except for those whose source address is found in the firewall list. The firewall lists support a maximum of 31 addresses. This feature is disabled by default and there are no pre-defined firewall records.
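The two firewall modes reduce to one membership test whose result is interpreted differently per mode. The sketch below assumes a flat array of MAC records; the function and type names are illustrative, not the IxEthDB API.

```c
#include <string.h>

#define FW_MAX_RECORDS 31  /* firewall list limit from the text */

enum fw_mode { FW_ALLOW_LIST, FW_DENY_LIST };

/* In allow/white-list mode a frame passes only if its source MAC is
 * in the list; in deny/black-list mode it passes unless listed. */
int fw_frame_permitted(enum fw_mode mode,
                       const unsigned char list[][6], size_t count,
                       const unsigned char src_mac[6])
{
    int listed = 0;
    size_t i;

    for (i = 0; i < count && i < FW_MAX_RECORDS; i++) {
        if (memcmp(list[i], src_mac, 6) == 0) {
            listed = 1;
            break;
        }
    }
    return (mode == FW_ALLOW_LIST) ? listed : !listed;
}
```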
Intel® IXP400 Software Access-Layer Components: Ethernet Database (IxEthDB) API 10.3.4.1 Background – VLAN Data in Ethernet Frames According to IEEE802.3, an untagged or normal Ethernet frame has the fields listed in Table 25. Table 25.
Intel® IXP400 Software Access-Layer Components: Ethernet Database (IxEthDB) API Table 27. VLAN Tag Format
VLAN TPID (bits 0-15): 0x8100
VLAN TCI (bits 16-31): Priority (3 bits), CFI (1 bit), VLAN ID (12 bits)
The VLAN-tagged Ethernet frame format, as specified in IEEE802.3, is as listed in Table 26. A received frame is considered to be VLAN-tagged if the two bytes at offset 12-13 are equal to 0x8100.
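The tag detection rule and the TCI field layout can be expressed directly in code. This is a minimal sketch; the function names are illustrative helpers, not part of the IxEthDB API.

```c
#include <stdint.h>

/* A frame is VLAN-tagged if the two bytes at offset 12-13 hold the
 * TPID value 0x8100. */
int vlan_frame_is_tagged(const uint8_t *frame)
{
    return frame[12] == 0x81 && frame[13] == 0x00;
}

/* TCI layout: bits 15-13 priority, bit 12 CFI, bits 11-0 VLAN ID. */
unsigned vlan_tci_priority(uint16_t tci) { return (tci >> 13) & 0x7u; }
unsigned vlan_tci_cfi(uint16_t tci)      { return (tci >> 12) & 0x1u; }
unsigned vlan_tci_vid(uint16_t tci)      { return tci & 0xFFFu; }
```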
Intel® IXP400 Software Access-Layer Components: Ethernet Database (IxEthDB) API ACCEPT_ALL_FRAMES or VLAN_TAGGED_FRAMES. Failure to do so will filter all VLAN traffic except those frames tagged with VLAN ID 0. The acceptable frame type filter can be any of the values above. Additionally, filters can be combined (ORed) to achieve additional effects: • Accept all frames – equivalent to accept tagged and accept untagged. Used to declare hybrid VLAN trunks.
Intel® IXP400 Software Access-Layer Components: Ethernet Database (IxEthDB) API For example, Port 1 is configured with a PVID set to 12 and VLAN membership group of {1, 2, 10, 12, 20 to 40, 100, 102, 3000 to 3010}. If VLAN membership filtering is enabled and acceptable frame type filtering is configured appropriately for the port, the following scenarios are possible: • If tagging is not enabled, untagged frames will be left untagged and passed through.
Intel® IXP400 Software Access-Layer Components: Ethernet Database (IxEthDB) API • The frame IX_OSAL_MBUF header can contain override information (flags – see above) explicitly stating whether the frame is to be tagged or not. • Tagging information (802.1Q tag) is contained in the IX_OSAL_MBUF header. • The frame VLAN ID, if any, is compared against the transmit port VLAN membership table and discarded if not found in the membership table.
Intel® IXP400 Software Access-Layer Components: Ethernet Database (IxEthDB) API An overview of the Egress tagging process is shown in Figure 55. The figure shows the decision tree for an untagged frame. The process is identical for a tagged frame. Figure 55. Egress VLAN Control Path for Untagged Frames (diagram: an outgoing untagged frame with fields Preamble, Start frame, Dest MAC addr, Src MAC addr, Len, Data, Pad, and FCS passes from EthAcc to EthDB, where the tagging override decision is made via mbuf->ixp_ne_tx_flags)
Intel® IXP400 Software Access-Layer Components: Ethernet Database (IxEthDB) API Table 28. Egress VLAN Tagging/Untagging Behavior Matrix (Continued) Tag Mode (1) Frame Status (2) Action Tag Untagged The NPE microcode inserts a VLAN tag into the frame. The VLAN tag to be inserted is created by concatenating a VLAN TPID field (always 0x8100) with the value of the ixp_ne_vlan_tci field from the IX_OSAL_MBUF header.
Intel® IXP400 Software Access-Layer Components: Ethernet Database (IxEthDB) API 10.3.5.2 Receive Priority Queuing Incoming frames will be classified into an internal traffic class, either by mapping the 802.1Q priority field (if available) into an internal traffic class or by using the default traffic class associated with the incoming port. The incoming frame will be placed on a receive queue depending on its traffic class. Up to four traffic classes and associated queues are supported.
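As a sketch, the TCI-to-traffic-class lookup can be written as below. The mapping table here is hypothetical (the shipped defaults are those of Table 29 for a four-class image), and the names are illustrative rather than part of the IxEthDB API.

```c
#include <stdint.h>

/* Hypothetical 802.1p-priority-to-traffic-class table for a
 * four-class NPE image; real defaults come from IxEthDB. */
static const unsigned char prio_to_class[8] = {1, 0, 0, 1, 2, 2, 3, 3};

/* The 802.1p priority is the top three bits of the 16-bit TCI. */
unsigned traffic_class_for_tci(uint16_t tci)
{
    return prio_to_class[(tci >> 13) & 0x7u];
}
```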
Intel® IXP400 Software Access-Layer Components: Ethernet Database (IxEthDB) API Traffic class for untagged frames (unexpedited traffic) is automatically selected from the default traffic class associated with the port. The default port traffic class is computed from the default port 802.1Q tagging information, configured as described in “Ingress Tagging and Tag Removal” on page 165. The first three bits from the default 802.
Intel® IXP400 Software Access-Layer Components: Ethernet Database (IxEthDB) API At initialization, a default traffic class mapping is provided, as shown in Table 29. These values apply to NPE images that include four default traffic classes. When using NPE images that provide a larger number of priority queues, the values may differ. Table 29.
Intel® IXP400 Software Access-Layer Components: Ethernet Database (IxEthDB) API Table 31. IEEE802.11 Frame Control (FC) Field Format
Bits 0-1: protocol version
Bits 2-3: type
Bits 4-7: subtype
Bit 8: to DS
Bit 9: from DS
Bit 10: more frag
Bit 11: retry
Bit 12: pwr mgmt
Bit 13: more data
Bit 14: WEP
Bit 15: order
Abbreviations: • FC - Frame Control • DID - Duration / ID • SC - Sequence Control The usage of the 802.11 frame format depends heavily on the source and immediate destination for the frame.
Intel® IXP400 Software Access-Layer Components: Ethernet Database (IxEthDB) API In 802.3 frames, there is a 2-byte Length/Type field, the interpretation of which depends on whether its value is smaller than 0x0600. When the value of this field is less than 0x0600, it is interpreted as Length, and the first 8 bytes of the MAC client data field is always the LLC/SNAP header, as defined in 802.2. Such frames are also known as “8802 frames”.
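The Length/Type interpretation rule above is a single comparison; as a sketch (function name illustrative):

```c
#include <stdint.h>

/* Interpret the 802.3 Length/Type field: values below 0x0600 are a
 * Length (an "8802" frame whose MAC client data begins with an
 * LLC/SNAP header); 0x0600 and above are an EtherType. */
int frame_is_8802(uint16_t length_type)
{
    return length_type < 0x0600;
}
```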
Intel® IXP400 Software Access-Layer Components: Ethernet Database (IxEthDB) API It is important to note that the IX_OSAL_MBUFs extracted from the EthRxFree queue by the NPE may be used to deliver both IEEE802.3 and IEEE802.11 frames to the client software. The NPE microcode does not make any adjustment to the ixp_ne_data field from the IX_OSAL_MBUF header before writing out the received frame, regardless of the header conversion operation performed. Table 32. 802.3 to 802.11 Header Conversion Rules 802.
Intel® IXP400 Software Access-Layer Components: Ethernet Database (IxEthDB) API state so that the converted frame is treated as an untagged frame for the purpose of VLAN egress tagging. To simplify its processing, the NPE Ethernet firmware expects that any 802.11 frame submitted by the client will not have a VLAN tag. Table 33. 802.11 to 802.3 Header Conversion Rules Input 802.11 Frame Values Output 802.3 Frame Field Values ixp_ne_flags.
Intel® IXP400 Software Access-Layer Components: Ethernet Database (IxEthDB) API 10.3.7 Spanning Tree Protocol Port Settings The IxEthDB component provides an interface that can configure each NPE port to act in a “Spanning Tree Port Blocking State”. This behavior is available in certain NPE microcode images, and can be configured independently for each NPE. Spanning-Tree Protocol (STP), defined in the IEEE 802.
Intel® IXP400 Software Access-Layer Components: Ethernet Database (IxEthDB) API 10.4.3 Feature Set IxEthDB is structured as a feature set that can be enabled, disabled, and configured at run time. Since IxEthDB provides support for NPE features, the feature set presented to client code at any one time depends on the run-time configuration of the NPEs. IxEthDB can detect the capabilities of each NPE microcode image and expose only those features supported by that image. Table 34.
Intel® IXP400 Software Access-Layer Components: Ethernet Database (IxEthDB) API done using ixEthDBUserFieldGet(). Note that neither IxEthDB nor the NPE microcode ever uses the user-defined field for any internal operation, nor is either aware of the significance of its contents. The field is only stored as a pointer.
Intel® IXP400 Software Access-Layer Components: Ethernet Database (IxEthDB) API Transmit Traffic For transmission services, the NPE calculates a valid FCS as its final step prior to transmitting the frame to the PHY. FCS appending should be enabled when a port is configured for the following features: • VLAN Egress tagging/untagging • 802.11 to 802.3 Frame Conversion April 2005 180 IXP400 Software Version 2.
Intel® IXP400 Software Access-Layer Components: Ethernet PHY (IxEthMii) API 11 This chapter describes the Intel® IXP400 Software v2.0’s “Ethernet PHY API” access-layer component. 11.1 What’s New The following changes or enhancements were made to this component in software release 2.0. • This component has been updated to support the Intel® LXT9785HC 10/100 Ethernet Octal PHY that is on the Intel® IXDP465 Development Platform. 11.
Intel® IXP400 Software Access-Layer Components: Ethernet PHY (IxEthMii) API ixp400_xscale_sw/src/ethMii/IxEthMii_p.h Table 35. PHYs Supported by IxEthMii Intel® LXT971 Fast Ethernet Transceiver Intel® LXT972 Fast Ethernet Transceiver Intel® LXT973 Low-Power 10/100 Ethernet Transceiver (LXT973 and LXT973A) Micrel / Kendin* KS8995 5 Port 10/100 Switch with PHY 11.5 Dependencies IxEthMii is used by the EthAcc codelet and is dependent upon the IxEthAcc access-layer component and IxOSAL.
Intel® IXP400 Software Access-Layer Components: Feature Control (IxFeatureCtrl) API 12 This chapter describes the Intel® IXP400 Software v2.0’s “Feature Control API” access-layer component. IxFeatureCtrl is a component that detects the capabilities of the Intel® IXP42X Product Line of Network Processors and IXC1100 Control Plane Processor and Intel® IXP46X Product Line of Network Processors.
Intel® IXP400 Software Access-Layer Components: Feature Control (IxFeatureCtrl) API stepping. For the IXP42X product line, this register is used to determine the maximum core clock speed. Note: CP15, Register 0 is read-only. • EXP_UNIT_FUSE_RESET register in the Expansion Bus Controller - A software copy of this register, called the Feature Control Register, can be created and manipulated by this software component.
Intel® IXP400 Software Access-Layer Components: Feature Control (IxFeatureCtrl) API 12.3.2 Using the Feature Control Register Functions The ixFeatureCtrlHwCapabilityRead( ) function utilizes the EXP_UNIT_FUSE_RESET register for detecting host processor capabilities. A software structure for storing the changeable values for each option is provided, and is accessed using the ixFeatureCtrlRead( ) function.
Intel® IXP400 Software Access-Layer Components: Feature Control (IxFeatureCtrl) API Table 37. Feature Control Register Values (Sheet 2 of 2)
Bit 5†: HDLC Coprocessor
Bit 4†: DES Coprocessor
Bit 3†: AES Coprocessor
Bit 2†: Hashing Coprocessor
Bit 1†: USB Coprocessor
Bit 0†: RComp Circuitry
† For bits 0 through 15, 18, and 20-21 the following values apply: • 0x0 — The hardware component exists and is not software disabled. • 0x1 — The hardware component does not exist, or has been software disabled. 12.
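A client would typically test one of these bits in the software copy of the feature-control word. The bit positions below follow Table 37; the helper and macro names are illustrative, not the IxFeatureCtrl API.

```c
#include <stdint.h>

/* Bit positions from Table 37 (0 = present and enabled,
 * 1 = absent or software-disabled). */
#define FEATURE_BIT_RCOMP 0u
#define FEATURE_BIT_USB   1u
#define FEATURE_BIT_HASH  2u
#define FEATURE_BIT_AES   3u
#define FEATURE_BIT_DES   4u
#define FEATURE_BIT_HDLC  5u

/* Test one component bit in a software copy of the feature-control
 * word, such as the value returned by ixFeatureCtrlRead(). */
int feature_available(uint32_t feature_reg, unsigned bit)
{
    return ((feature_reg >> bit) & 0x1u) == 0u;
}
```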
Intel® IXP400 Software Access-Layer Components: Feature Control (IxFeatureCtrl) API processors, and all versions of the IXP46X product line will use the standard ixQMgrDispatcherLoopRunB0 dispatcher. To indicate that the ixQMgrDispatcherLoopRunB0LLP dispatcher with Livelock support is desired, use the ixFeatureCtrlSwConfigurationWrite( ) function to set this option to FALSE.
Intel® IXP400 Software This page is intentionally left blank. April 2005 188 IXP400 Software Version 2.
Intel® IXP400 Software Access-Layer Components: HSS-Access (IxHssAcc) API 13 This chapter describes the Intel® IXP400 Software v2.0’s “HSS-Access API” access-layer component. 13.1 What’s New There are no changes or enhancements to this component in software release 2.0. 13.2 Overview The IxHssAcc component provides client applications with driver-level access to the High-Speed Serial (HSS) and High-Level Data Link Control (HDLC) coprocessors available on NPE A.
Intel® IXP400 Software Access-Layer Components: HSS-Access (IxHssAcc) API Features The HSS access component is used by a client application to configure both the HSS and HDLC coprocessors and to obtain services from the coprocessors. It provides: • Access to the two HSS ports on the IXP4XX product line and IXC1100 control plane processors. • Configuration of the HSS and HDLC coprocessors residing on NPE A. • Support for TDM signals up to a rate of 8.192 Mbps (Quad E1/T1) on an HSS port.
Intel® IXP400 Software Access-Layer Components: HSS-Access (IxHssAcc) API IxHssAcc presents two “services” to the client application. The Channelized Service presents the client with raw serial data streams retrieved from the HSS port, while the Packetized Service provides packet payload data that has been optionally processed according to the HDLC protocol. IxQMgr is another access-layer component that interfaces to the hardware-based AHB Queue Manager (AQM).
Intel® IXP400 Software Access-Layer Components: HSS-Access (IxHssAcc) API Figure 59.
Intel® IXP400 Software Access-Layer Components: HSS-Access (IxHssAcc) API The HSS coprocessor communicates with an external device using three signals per direction: a frame pulse, clock, and data bit. The data stream consists of frames — the number of frames per second depending on the protocol. Each frame is composed of time slots. Each time slot consists of 8 bits (1 byte) which contains the data and an indicator of the time slot’s location within the frame.
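With 8-bit time slots and the usual 8000 TDM frames per second, the slot count per frame follows from the line rate alone. The sketch below assumes that 8000 frames/s rate (as in T1/E1-style framing); the function name is illustrative.

```c
/* Number of 8-bit time slots per frame for a given line rate,
 * assuming the conventional 8000 frames per second. */
unsigned slots_per_frame(unsigned bit_rate_bps)
{
    return bit_rate_bps / (8000u * 8u);
}
```

At the maximum supported rate of 8.192 Mbps this gives 128 time slots per frame.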
Intel® IXP400 Software Access-Layer Components: HSS-Access (IxHssAcc) API Table 40. Jitter Definitions
Period Jitter (Pj): Pj(i) = Period(i) − Period_average
Cycle to Cycle Jitter (Cj): Cj(i) = Pj(i+1) − Pj(i)
Wander or Accumulated Jitter (Aj): Aj = Σi Pj(i)
Table 41. HSS Frame Output Characterization Note: HSS Tx Freq. / Frame Size (Bits) / Actual Frame Length (µs) / Frame Length Error (PPM)
512 KHz / 32 / 62.496249 / -60.0096
1.536 MHz / 96 / 62.
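The three jitter definitions in Table 40 can be computed directly from a sequence of measured periods. This is a straightforward transcription of the formulas, with illustrative function names.

```c
#include <stddef.h>

/* Period jitter of sample i: Pj(i) = Period(i) - average period. */
double period_jitter(const double *period, size_t n, size_t i)
{
    double sum = 0.0;
    size_t k;
    for (k = 0; k < n; k++)
        sum += period[k];
    return period[i] - sum / (double)n;
}

/* Cycle-to-cycle jitter: Cj(i) = Pj(i+1) - Pj(i). */
double cycle_jitter(const double *period, size_t n, size_t i)
{
    return period_jitter(period, n, i + 1) - period_jitter(period, n, i);
}

/* Wander or accumulated jitter: Aj = sum over i of Pj(i). */
double accumulated_jitter(const double *period, size_t n)
{
    double aj = 0.0;
    size_t i;
    for (i = 0; i < n; i++)
        aj += period_jitter(period, n, i);
    return aj;
}
```

Note that Cj(i) reduces to Period(i+1) − Period(i), and that Aj summed over all samples is zero by construction, since each Pj(i) is a deviation from the same average.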
Intel® IXP400 Software Access-Layer Components: HSS-Access (IxHssAcc) API The time slots within a stream can be configured as packetized (raw or HDLC, 64 Kbps, and 56 Kbps), channelized voice64K, or channelized voice56K or left unassigned. “Voice” slots are those that will be sent to the channelized services. For more details, see “HSS Port Initialization Details” on page 197. For packetized time slots, data will be passed to the HDLC coprocessor for processing as packetized data.
Intel® IXP400 Software Access-Layer Components: HSS-Access (IxHssAcc) API 7. Finally, when the HSS component is no longer needed, ixHssAccPktPortDisable() and/or ixHssAccPktPortDisconnect() — or ixHssAccChanDisconnect() and/or ixHssAccChanPortDisable() — are called. The Disable functions will instruct the NPEs to stop data handling, while the Disconnect functions will clear all port configuration parameters. The Disconnect functions will automatically disable the port. 13.3.
Intel® IXP400 Software Access-Layer Components: HSS-Access (IxHssAcc) API • Packetized (HDLC) service is coupled with the HSS port. Packets transmitted using the packetized service access interface will be sent through the HDLC coprocessor and on to the HSS coprocessor. • • • • 13.3.7 Tx and Rx TDM slot assignments are identical. Packetized services will use IXP_BUF. Channelized services will use raw buffers.
Intel® IXP400 Software Access-Layer Components: HSS-Access (IxHssAcc) API • IxHssAccTdmSlotUsage *tdmMap — A pointer to an array defining the HSS time-slot assignment types. • IxHssAccLastErrorCallback lastHssErrorCallback — Client callback to report the last error. The parameter IxHssAccConfigParams has two structures of type IxHssAccPortConfig — one for HSS Tx and one for HSS Rx.
Intel® IXP400 Software Access-Layer Components: HSS-Access (IxHssAcc) API IxHssAccTdmSlotUsage is an array that takes the following values to assign service types to each time slot in an HSS frame:
IX_HSSACC_TDMMAP_UNASSIGNED – Unassigned
IX_HSSACC_TDMMAP_HDLC – Packetized
IX_HSSACC_TDMMAP_VOICE56K – Channelized
IX_HSSACC_TDMMAP_VOICE64K – Channelized
IxHssAccTdmSlotUsage has a size equal to the number of time slots in a frame. IxHssAccLastErrorCallback() is for error handling.
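A slot map is simply an array of these usage values, one per time slot. The sketch below uses illustrative stand-in enum values and an example policy (slot 0 reserved, slots 1-15 voice, the rest HDLC); the real enumerators and their values come from the IxHssAcc header.

```c
/* Illustrative stand-ins for the IX_HSSACC_TDMMAP_* values. */
typedef enum {
    TDMMAP_UNASSIGNED,
    TDMMAP_HDLC,      /* packetized */
    TDMMAP_VOICE56K,  /* channelized */
    TDMMAP_VOICE64K   /* channelized */
} TdmSlotUsage;

/* Example assignment policy for a 32-slot frame: slot 0 unassigned
 * (e.g. framing), slots 1-15 channelized voice, the rest packetized. */
TdmSlotUsage slot_usage(int ts)
{
    if (ts == 0)
        return TDMMAP_UNASSIGNED;
    if (ts < 16)
        return TDMMAP_VOICE64K;
    return TDMMAP_HDLC;
}
```

Filling the tdmMap array passed to the port-init call would then be a loop over slot_usage(ts) for each time slot.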
Intel® IXP400 Software Access-Layer Components: HSS-Access (IxHssAcc) API • UINT8 *rxCircular — A pointer to the Rx data pool allocated by the client as described in previous section. It points to a set of circular buffers to be filled by the received data. This address will be written to by the NPE and must be a physical address. • unsigned numRxBytesPerTS — The length of each Rx circular buffer in the Rx data pool.
Intel® IXP400 Software Access-Layer Components: HSS-Access (IxHssAcc) API Figure 62. Channelized Connect Application Level Channelized Client 1. ixHssAccChanConnect (...) 4. ixHssAccChanPortEnable (...) IxHssAcc IxNpeMh IxQMgr Access Layer 3. Configure NPE 2. Configure HssSync Queue NPE A NPE Physical Interface 5. Enable NPE data flow HSS Port B2386-03 1. The client issues a channelized connect request to IxHssAcc. 2.
Intel® IXP400 Software Access-Layer Components: HSS-Access (IxHssAcc) API 13.5.2.1 CallBack If the pointer to the rxCallback() is not NULL when ixHssAccChanConnect() is called, an ISR will call rxCallback() to handle Tx/Rx data. It is called when each of N channels receives bytesPerTStrigger bytes. Usually, an Rx thread is created to handle the HSS channelized service. The thread will be waiting for a semaphore.
Intel® IXP400 Software Access-Layer Components: HSS-Access (IxHssAcc) API Figure 63. Channelized Transmit and Receive Application Level 2b. Callback to client (rxOffset, txOffset, numHssErrs) Channelized Client 3a. ixHssAccChanStatusQuery (...) or 3c. Returns (rxOffset, txOffset, numHssErrs) IxHssAcc 2a. Callback to IxHssAcc (rxOffset, txOffset, numHssErrs) or 3b. IxHssAcc polls for queue status. Returns (rxOffset, txOffset, numHssErrs) IxQMgr Access Layer 1.
Intel® IXP400 Software Access-Layer Components: HSS-Access (IxHssAcc) API 13.5.3 Channelized Disconnect When the channelized service is no longer needed on a particular HSS port, ixHssAccChanPortDisable() is called to stop the channelized service on that port, then ixHssAccChanDisconnect() is called to disconnect the service. 13.6 HSS Packetized Operation 13.6.
Intel® IXP400 Software Access-Layer Components: HSS-Access (IxHssAcc) API • unsigned blockSizeInWords — The max tx/rx block size. • UINT32 rawIdleBlockPattern — Tx idle pattern in raw mode. • IxHssAccHdlcFraming hdlcTxFraming — This structure contains the following information required by the NPE to configure the HDLC coprocessor for Tx. • IxHssAccHdlcFraming hdlcRxFraming — This structure contains the following information required by the NPE to configure the HDLC coprocessor for Rx.
Intel® IXP400 Software Access-Layer Components: HSS-Access (IxHssAcc) API Figure 64. Packetized Connect Application Level Packetized Client 1. ixHssAccPktPortConnect (...) 4. ixHssAccPktPortEnable() IxHssAcc IxNpeMh 2. Configure queues. Configure Rx, RxFreeLow, TxDone callbacks. IxQMgr 3. Configure NPE Access Layer NPE A NPE Physical Interface 5. Enable NPE data flow HSS Port B2391-03 1. The client issues a packet service connect request to IxHssAcc. 2.
Intel® IXP400 Software Access-Layer Components: HSS-Access (IxHssAcc) API When the transmission is done, the TxDone callback function, registered with ixHssAccPktPortConnect(), is called, and the buffer can be returned to the IXP_BUF pool using IX_OSAL_MBUF_POOL_PUT_CHAIN().
Intel® IXP400 Software Access-Layer Components: HSS-Access (IxHssAcc) API Figure 65. Packetized Transmit Application Level Packetized C lient 5. ixH ssAccPktTxD oneC allback (*m Buf, num H ssErrs, pktStatus, txD oneU serId) 1. ixH ssAccPktPortTx ( hssPortId, hdlcPortId, *m Buf for transm it data) IxH ssAcc 2. D escriptor to Tx queue IxQ M gr Access Layer 3. N P E reads data from pointer in Tx queue, transm its data on H SS port. 4.
Intel® IXP400 Software Access-Layer Components: HSS-Access (IxHssAcc) API Here is an example:
// get a buffer
IX_OSAL_MBUF *rxBuffer;
rxBuffer = IX_OSAL_MBUF_POOL_GET(poolId);
// IxHssAcc component needs to know the capacity of the IXP_BUF
IX_OSAL_MBUF_MLEN(rxBuffer) = IX_HSSACC_CODELET_PKT_BUFSIZE;
// give the Rx buffer to the HssAcc component
status = ixHssAccPktPortRxFreeReplenish (hssPortId, hdlcPortId, rxBuffer);
Usually, an Rx thread is created to handle the HSS packetized service, namely, to handle a
Intel® IXP400 Software Access-Layer Components: HSS-Access (IxHssAcc) API Alternatively, the client can use its own timer for supplying IXP_BUFs to the queue. This is the case if the pointer for rxFreeLowCallback() passed to ixHssAccPktPortConnect() is NULL. The process is shown in Figure 66. Figure 66. Packetized Receive (diagram: the packetized client receives the RxCallback with *mBuf, is notified through the RxFreeLowCallback, and provides free mBufs back to IxHssAcc)
Intel® IXP400 Software Access-Layer Components: HSS-Access (IxHssAcc) API 13.6.4 Packetized Disconnect When a packetized service channel is no longer needed, the function ixHssAccPktPortDisable() is called to stop the packetized service on that channel, and ixHssAccPktPortDisconnect() is called to disconnect the service. This has to be done for each packet service channel.
Intel® IXP400 Software Access-Layer Components: HSS-Access (IxHssAcc) API An IXP_BUF pool should be created for packetized service by calling function IX_OSAL_MBUF_POOL_INIT() of the IxOsBuffMgt API with the IXP_BUF size and the number of IXP_BUFs needed. For example:
IxHssAccCodeletMbufPool **poolIdPtr;
UINT32 numPoolMbufs;
UINT32 poolMbufSize;
*poolIdPtr = IX_OSAL_MBUF_POOL_INIT(numPoolMbufs, poolMbufSize, "HssAcc Codelet Pool");
An IXP_BUF can be obtained from the pool by calling IX_OSAL_MBUF_POOL_GET().
Intel® IXP400 Software Access-Layer Components: HSS-Access (IxHssAcc) API Figure 67. HSS Packetized Receive Buffering IXP_BUFs 4. Data (HDLC frame or RAW block) for each packet-pipe written to appropriate mbuf, specified by descriptor. Steps 3 and 4 repeated to chain mbufs as required. HssPacketizedRx queue 5. Descriptor returned when entire frame/ block received. If chaining, only first descriptor returned. HssPacketizedRxFree0 HssPacketizedRxFree3 queues NPE-A Hss Packetized Rx Operation 3.
Intel® IXP400 Software Access-Layer Components: HSS-Access (IxHssAcc) API Figure 68. HSS Packetized Transmit Buffering IXP_BUFs HssPacketizedTx0 HssPacketizedTx3 queues 2. Data (HDLC frame or RAW block) for each packet-pipe read from appropriate mbuf, specified by descriptor 1. Descriptor read from packetpipe-specific, Tx queue NPE-A Hss Packetized Tx Operation 3. HDLC frame processing performed on each packet-pipe configured for HDLC mode HssPacketizedTxDone queue 5.
Intel® IXP400 Software Access-Layer Components: HSS-Access (IxHssAcc) API All the buffers have the same length. When the channelized service is initialized by ixHssAccChanConnect(), the pointer to the pool, the length of the circular buffers, and a parameter bytesPerTStrigger are passed to IxHssAcc, as well as a pointer to an ixHssAccChanRxCallback() Rx callback function. Figure 69 shows how the circular buffers are filled with data received through the HSS ports.
Intel® IXP400 Software Access-Layer Components: HSS-Access (IxHssAcc) API Figure 69. HSS Channelized Receive Operation (diagram: the client Rx buffer in SDRAM holds one circular buffer of RxCircBufSizeB bytes per channel, for a total size of (N+1)*CircBufSizeB; received time-slot bytes F0-TSa, F1-TSa, F2-TSa, ... fill the circular buffer for channel 0, F0-TSb, F1-TSb, F2-TSb, ... fill channel 1, and so on; the HssChannelizedRxTrigger queue notifies the client)
Intel® IXP400 Software Access-Layer Components: HSS-Access (IxHssAcc) API Figure 70. HSS Channelized Transmit Operation (diagram: the client builds per-channel Tx-data blocks, e.g. F0-TSa, F1-TSa, F2-TSa for channel 0, and a circular buffer of pointers, TxPtrCircBuf of size TxPtrCircBufSizeB, holding entries PtrCh0, PtrCh1, PtrCh2, ... PtrChN; the NPE follows these pointers to read each channel's Tx-data block for transmission on the HSS port)
Intel® IXP400 Software This page is intentionally left blank. April 2005 218 IXP400 Software Version 2.
Intel® IXP400 Software Access-Layer Components: NPE-Downloader (IxNpeDl) API 14 This chapter describes the Intel® IXP400 Software v2.0’s “NPE-Downloader API” access-layer component. 14.1 What’s New The following changes and enhancements were made to this component in software release 2.0: • New NPE microcode images for NPE A were added to support the NPE-based Ethernet interface available on NPE A of the Intel® IXP46X Product Line.
Intel® IXP400 Software Access-Layer Components: NPE-Downloader (IxNpeDl) API The “Microcode from File” feature is only available for Linux. All other supported operating systems obtain the NPE microcode from the compiled object code. The purpose of providing the “Microcode from File” feature is to allow distribution of IXP400 software and the NPE microcode under different licensing conditions for Linux. Refer to the Intel® IXP400 Software Release Notes for further instructions on using this feature.
Intel® IXP400 Software Access-Layer Components: NPE-Downloader (IxNpeDl) API Table 42. NPE-A Images Image Name Description IX_NPEDL_NPEIMAGE_NPEA_HSS0 NPE Image ID for NPE-A with HSS-0 Only feature. It supports 32 channelized and 4 packetized channels. IX_NPEDL_NPEIMAGE_NPEA_HSS0_ATM_SPHY_1_PORT NPE Image ID for NPE-A with HSS-0 and ATM feature. For HSS, it supports 16/32 channelized and 4/0 packetized channels. For ATM, it supports AAL 5, AAL 0 and OAM for UTOPIA SPHY, 1 logical port, 32 VCs.
Intel® IXP400 Software Access-Layer Components: NPE-Downloader (IxNpeDl) API Table 43. NPE-B Images Image Name Description IX_NPEDL_NPEIMAGE_NPEB_DMA NPE Image ID for NPE-B with DMA-Only feature. IX_NPEDL_NPEIMAGE_NPEB_ETH NPE Image ID for NPE-B with Ethernet-Only feature. This image definition is identical to the image below: IX_NPEDL_NPEIMAGE_NPEB_ETH_LEARN_FILTER_SPAN_F IREWALL.
Intel® IXP400 Software Access-Layer Components: NPE-Downloader (IxNpeDl) API Table 44. NPE-C Images (Sheet 2 of 2) Image Name Description NPE Image ID for NPE-C with Basic Ethernet Rx/Tx, which includes: IX_NPEDL_NPEIMAGE_NPEC_ETH_SPAN_FIREWALL_VLA • SPANNING_TREE N_QOS_HDR_CONV • FIREWALL • VLAN/QoS • 802.3/802.
Intel® IXP400 Software Access-Layer Components: NPE-Downloader (IxNpeDl) API The IxNpeDl should be uninitialized prior to unloading an application module or driver. (This will unmap all memory that has been mapped by IxNpeDl.) If possible, IxNpeDl should be uninitialized before a soft reboot. Here is a sample function call to uninitialize IxNpeDl: ixNpeDlUnload(); Note: 14.
Intel® IXP400 Software Access-Layer Components: NPE Message Handler (IxNpeMh) API 15 This chapter describes the Intel® IXP400 Software v2.0’s “NPE Message Handler API” accesslayer component. 15.1 What’s New There are no changes or enhancements to this component in software release 2.0. 15.2 Overview This chapter contains the necessary steps to start the NPE message-handler component. Additionally, information has been included about how the Message Handler functions from a high-level view.
Intel® IXP400 Software Access-Layer Components: NPE Message Handler (IxNpeMh) API The solicited callback list contains the list of callbacks corresponding to solicited messages not yet received from the NPE. The solicited messages for a given ID are received in the same order that those soliciting messages are sent, and the first ID-matching callback in the list always corresponds to the next solicited message that is received. 15.
Intel® IXP400 Software Access-Layer Components: NPE Message Handler (IxNpeMh) API 15.4 Uninitializing IxNpeMh The IxNpeMh should be uninitialized prior to unloading a kernel module (this will unmap all memory that has been mapped by IxNpeMh). If possible, IxNpeMh should also be uninitialized before a soft reboot.
Intel® IXP400 Software Access-Layer Components: NPE Message Handler (IxNpeMh) API Figure 71. Message from Intel XScale® Core Software Client to an NPE Client 1. Send Message Customer / Demo Code 0x00 callback callback n+k 0x01 callback ... ... callback n+1 0xff callback IxNpeMh callback n 2. Send Message Access Driver NPEs NPE A NPE B NPE C B2395-01 15.5.2 Sending an NPE Message with Response In this case, the client’s message requires a response from the NPE.
Intel® IXP400 Software Access-Layer Components: NPE Message Handler (IxNpeMh) API 7. Because this is a solicited message, the first ID-matching callback is removed from the solicited callback list and invoked to pass the message back to the client. If no ID-matching callback is found, the message is discarded and an error reported. Figure 72. Message with Response from Intel XScale® Core Software Client to an NPE Client 1. Send Message Customer / Demo Code 7. Response Callback 2.
Intel® IXP400 Software Access-Layer Components: NPE Message Handler (IxNpeMh) API 5. Since this is an unsolicited message, the IxNpeMh component invokes the corresponding unsolicited callback to pass the message back to the client. Figure 73. Receiving Unsolicited Messages from NPE to Software Client Client Customer / Demo Code 0x00 callback 1. Register Callback 6. Message Callback 2. Save Callback callback n+k 0x01 callback ... ... 5.
Intel® IXP400 Software Access-Layer Components: NPE Message Handler (IxNpeMh) API 15.7 Dependencies The IxNpeMh component’s dependencies (as shown in Figure 74) are: • Client software components must use the IxNpeMh component for messages to and from the NPEs. • The IxNpeMh component must use IxOSAL for error-handling, resource protection, and registration of ISRs. Figure 74.
Intel® IXP400 Software This page is intentionally left blank. April 2005 232 IXP400 Software Version 2.
Intel® IXP400 Software Access-Layer Components: Parity Error Notifier (IxParityENAcc) API 16 This chapter describes Intel® IXP400 Software v2.0’s “Parity Error Notifier (IxParityENAcc) API” access-layer component. 16.1 What’s New This is a new component for software release 2.0. Note: 16.2 The PCI support described in this chapter is not supported in software release 2.0. Introduction Many components in the IXP46X network processors provide parity error detection capabilities.
Intel® IXP400 Software Access-Layer Components: Parity Error Notifier (IxParityENAcc) API For the purposes of this document, the following terms will be used as defined below. Error Correction Logic/Error Correction Code The Error Correction Logic in the Memory Controller Unit (MCU) generates the ECC code (which requires additional bits for the code word) for DDR SDRAM reads and writes. For reads, this logic compares the ECC code read with the locally generated ECC code.
Intel® IXP400 Software Access-Layer Components: Parity Error Notifier (IxParityENAcc) API 16.2.2.2 Switching Coprocessor in NPE B (SWCP) The Switching Coprocessor generates 8-bit parity – 1 bit per each byte of the 64 bit (8-byte) entries in its SRAM. These parity bits will be generated and captured along with the 64 bits of data during a write operation. The subsequent read operation will again generate parity bits from the 64 bits of data and compare against the ones stored.
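The per-byte parity scheme above can be sketched as follows. Even parity is assumed here purely for illustration, since the coprocessor's polarity is not specified in this text, and the function names are not part of the IxParityENAcc API.

```c
#include <stdint.h>

/* One parity bit per data byte; this computes even parity
 * (polarity is an assumption for illustration). */
unsigned parity_bit(uint8_t b)
{
    unsigned p = 0;
    while (b) {
        p ^= 1u;
        b &= (uint8_t)(b - 1);  /* clear the lowest set bit */
    }
    return p;
}

/* On read-back, a stored parity bit that differs from the
 * recomputed one flags a parity error. */
int parity_error(uint8_t data, unsigned stored_parity)
{
    return parity_bit(data) != stored_parity;
}
```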
Intel® IXP400 Software Access-Layer Components: Parity Error Notifier (IxParityENAcc) API 16.2.2.7 Secondary Effects of Parity Interrupts If the Intel XScale core detects an error on the AHB bus or on its private DDR memory interface (MPI), an exception will be generated that will be serviced by its fault handler (such as data abort, or prefetch abort exception handler). The MCU will also generate a parity interrupt in this case.
Intel® IXP400 Software Access-Layer Components: Parity Error Notifier (IxParityENAcc) API 16.3 IxParityENAcc API Details 16.3.1 Features The parity error access component provides the following features: • Interface to the client application to register a call back handler for application-specific processing with respect to the source of failure in the notification.
Intel® IXP400 Software Access-Layer Components: Parity Error Notifier (IxParityENAcc) API IxParityENAcc depends on various hardware registers to fetch the parity error information upon receiving an interrupt due to parity error. It then notifies the client application through the means of the callback handler with parity error context information. IxParityENAcc also makes use of IxOSAL to access the underlying Operating System features such as IRQ registration, locks, and register access.
Intel® IXP400 Software Access-Layer Components: Parity Error Notifier (IxParityENAcc) API • Parity Error Recovery • Parity Error Prevention This section summarizes the high-level activities involved with these high-level tasks, and then presents specific usage scenarios. 16.4.1 Summary Parity Error Notification Scenario The interface between the client application and IxParityENAcc is explained in detail in the API source-code documentation.
Intel® IXP400 Software Access-Layer Components: Parity Error Notifier (IxParityENAcc) API 3. When a parity error occurs, the interrupt will fire and invoke the ISR of the IxParityENAcc component. 4. IxParityENAcc, in turn, invokes the client callback. 5. The client or data abort handler callback routine will then fetch the parity error context details and take appropriate action. 6. The client will then request to clear the interrupt condition.
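The notification sequence in the steps above can be modeled with simplified stand-ins for the real component. All names, types, and fields below are hypothetical, chosen only to show the register/interrupt/callback/clear flow:

```c
#include <stdio.h>

/* Hypothetical, simplified stand-ins for the IxParityENAcc flow --
 * not the real API. */
typedef struct { const char *source; unsigned address; } ParityContext;
typedef void (*ParityCallback)(void);

static ParityCallback registered_cb;                         /* steps 1-2 */
static ParityContext pending = { "MCU single-bit", 0x1000 };
static int interrupt_asserted;

void parity_callback_register(ParityCallback cb) { registered_cb = cb; }

/* Steps 3-4: the parity interrupt fires and the component's ISR
 * invokes the registered client callback. */
void parity_isr(void)
{
    interrupt_asserted = 1;
    if (registered_cb)
        registered_cb();
}

/* Steps 5-6: the callback fetches the parity error context details
 * and then requests that the interrupt condition be cleared, so the
 * parity interrupt is not triggered again. */
void client_callback(void)
{
    printf("parity error: %s at 0x%x\n", pending.source, pending.address);
    interrupt_asserted = 0;
}
```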
Table 47. Parity Error Interrupt Deassertion Conditions (Sheet 2 of 2): interrupt bits Int60 (source: AQM), Int61 (source: MCU), and Int62 (source: EXP) are cleared through the API, invoked by the client callback; the action taken during interrupt clear is that the interrupt is masked off at the interrupt controller so that it does not trigger continuously. 16.4.2 Summary Parity Error Recovery Scenario
Intel® IXP400 Software Access-Layer Components: Parity Error Notifier (IxParityENAcc) API For multi-bit parity errors, no error correction is possible and the Intel XScale core will be notified. The client application should handle such notifications. 16.4.3 Summary Parity Error Prevention Scenario IxParityENAcc does not perform parity error prevention tasks. This should be done by the client application.
Intel® IXP400 Software Access-Layer Components: Parity Error Notifier (IxParityENAcc) API It is important to note that if an interrupt condition is not cleared then it will result in the parity interrupt being triggered again. Figure 77–Figure 83 show the process flow that occurs in several data abort and parity error scenarios. Figure 77.
Figure 79. Data Abort followed by Unrelated Parity Error Notification (both multi- and single-bit parity errors on the MCU, due to non-XScale accesses, were detected when the Data Abort occurred; the client callback gets the parity interrupt status and error context via IxParityENAccParityErrorContextGet).
Figure 81. Data Abort Caused by Parity Error (a multi-bit parity error on the MCU was detected when the Data Abort occurred; the callback fetches the error context with IxParityENAccParityErrorContextGet and then clears the interrupt with IxParityENAccParityErrorInterruptClear).
Figure 83. Data Abort with both Related and Unrelated Parity Errors (multi-bit parity errors on both the MCU and a non-MCU source were detected when the Data Abort occurred; the callback examines the parity error source before clearing the interrupt).
17 This chapter describes the Intel® IXP400 Software v2.0’s “Performance Profiling API” access-layer component. 17.1 What’s New There are no changes or enhancements to this component in software release 2.0. However, the Internal Bus PMU registers on the IXP46X network processors are not identical to those on the IXP42X product line processors.
17.3 Intel XScale® Core PMU The purpose of the Intel XScale core PMU is to enable performance measurement and to allow the client to identify the “hot spots” of a program. These hot spots are the sections of a program that consume the largest number of cycles or cause processor stalls due to events like cache misses, branches, and branch mispredictions.
Intel® IXP400 Software Access-Layer Components: Performance Profiling (IxPerfProfAcc) API Event-based sampling will allow the client to identify the “hot spots” of the program for further optimization. In this method, the sampling rate is the number of events before a counter overflow interrupt is generated. This sampling rate is defined by the client. As in time-based sampling, the PC value of each sample and frequency will be determined.
Intel® IXP400 Software Access-Layer Components: Performance Profiling (IxPerfProfAcc) API • SDRAM controller usage — Usage monitored in all eight pages of the SDRAM, i.e., the pages used and how often they are used. This also includes percentage usage and number of hits per second. • SDRAM controller miss percentage — Identifies number of misses and rate of misses when accessing the SDRAM. A high miss rate would indicate a slow system.
Figure 84. IxPerfProfAcc Dependencies (the client depends on IxPerfProfAcc, which in turn depends on ixOsal, the Intel XScale® Core PMU, and the Internal Bus PMU). The client will call IxPerfProfAcc to access specific performance statistics of the Intel XScale core’s PMU and internal bus PMU.
17.9 Threading The Xcycle component spawns a new task to work in the background. This task is spawned with the lowest priority, to avoid pre-empting other tasks. This task registers a dummy function that also triggers the measurement of idle cycles.
Intel® IXP400 Software Access-Layer Components: Performance Profiling (IxPerfProfAcc) API Figure 85.
The number of events that can be monitored simultaneously ranges from zero to four. When the number of events to monitor is set to 0, only clock counting is performed. The clock count can be set to increment either once every 64th processor clock cycle or once every processor clock cycle. The steps needed to run this utility are: 1.
Intel® IXP400 Software Access-Layer Components: Performance Profiling (IxPerfProfAcc) API If the user has declared a variable IxPerfProfAccXscalePmuResults eventCountStopResults, the user may then print out the result for all the counters as shown in Figure 86. Figure 86.
Intel® IXP400 Software Access-Layer Components: Performance Profiling (IxPerfProfAcc) API 2. To end the time sampling, call the stop function, with parameters: ixPerfProfAccXscalePmuTimeSampStop( IxPerfProfAccXscalePmuEvtCnt *clkCount, IxPerfProfAccXscalePmuSamplePcProfile *timeProfile) This function can only be called once ixPerfProfAccXscalePmuTimeSampStart has been called. It is the user’s responsibility to allocate the memory for the pointers before calling this function.
iii. Print out the first five elements: for (i = 0; i < 5; i++) { printf("timeprofile element %d pc value = 0x%x\n", i, timeProfile[i].programCounter); printf("timeprofile element %d freq value = %d\n", i, timeProfile[i].freq); } These profile results show the places in the user’s code that are most frequently being executed and that are taking up the most processor cycles.
Intel® IXP400 Software Access-Layer Components: Performance Profiling (IxPerfProfAcc) API The steps needed to run this utility are: 1.
Intel® IXP400 Software Access-Layer Components: Performance Profiling (IxPerfProfAcc) API As this file system is temporary, the user is required to copy the output file into a permanent location or else the results will be lost when a new round of sampling is done or when the system is stopped or rebooted.
Intel® IXP400 Software Access-Layer Components: Performance Profiling (IxPerfProfAcc) API 17.10.1.4 Using Intel XScale® Core PMU to Determine Cache Efficiency In this example, the user would like to monitor the instruction cache efficiency mode. The user would use the event counting process to count the total number of instructions that were executed and instruction cache misses requiring fetch requests to external memory. The remaining two counters will not provide relevant results in this example.
Intel® IXP400 Software Access-Layer Components: Performance Profiling (IxPerfProfAcc) API 17.10.2 Internal Bus PMU The Internal Bus PMU utility enables performance monitoring of components accessing or utilizing the north and south bus, provides statistics of the north and south bus and SDRAM, and allows the user to read the value of the Previous Master Slave Register.
Intel® IXP400 Software Access-Layer Components: Performance Profiling (IxPerfProfAcc) API For example: — If the user has declared a variable “IxPerfProfAccBusPmuResults busPmuResults,” the user may then print out the value of all seven of the PEC counters. The user should be aware that in the lower 27-bit counter, it only stores values up to 27 bits before causing an overflow.
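Combining the split counters into one 64-bit total is a shift-and-or. The sketch below assumes the upper 32-bit counter advances once per overflow of the lower 27-bit counter, which follows from the text; the helper name is illustrative, not part of the IxPerfProfAcc API:

```c
#include <stdint.h>

/* The lower PEC counter holds only 27 bits of the count before
 * overflowing into the upper 32-bit counter; recombine the two
 * halves into a single 64-bit total. */
uint64_t pec_total(uint32_t upper32, uint32_t lower27)
{
    return ((uint64_t)upper32 << 27) | (lower27 & 0x07FFFFFFu);
}
```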
Intel® IXP400 Software Access-Layer Components: Performance Profiling (IxPerfProfAcc) API 4. Obtain the results by calling: ixPerfProfAccBusPmuResultsGet (&results) 5. Print the value of all the PECs: for (i = 0; i< IX_PERFPROF_ACC_BUS_PMU_MAX_PECS ; i++) { printf ("\nPEC %d = upper 0x%x lower 0x%x ", i, results.statsToGetUpper32Bit[i], results.statsToGetLower27Bit[i]); } 6. Print the total value of PECs 1-3, and PEC 7.
This pointer is interpreted as “the number of 66-MHz clock ticks for one measurement.” It is stored within the tool while it is being run and serves only as a reference for the user. 2. Create a thread that runs the code to be monitored. To begin the Xcycle measurements, call the start function, with parameter: ixPerfProfAccXcycleStart(UINT32 numMeasurementsRequested) This starts the measurements immediately.
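For reference, a tick count on the 66-MHz reference clock converts to wall-clock time by simple division. This is only an interpretation aid, not something the tool itself computes:

```c
#include <stdint.h>

/* The Xcycle reference runs at 66 MHz, so 66 ticks correspond to one
 * microsecond (integer sketch; fractional microseconds truncated). */
uint64_t xcycle_ticks_to_us(uint64_t ticks)
{
    return ticks / 66u;
}
```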
Intel® IXP400 Software Access-Layer Components: Queue Manager (IxQMgr) API 18 This chapter describes the Intel® IXP400 Software v2.0’s “Queue Manager API” access-layer component. 18.1 What’s New There are no changes or enhancements to this component in software release 2.0. 18.2 Overview The IxQMgr (Queue Manager) access-layer component is a collection of software services responsible for configuring the Advanced High-Performance Bus (AHB) Queue Manager (also referred to by the combined acronym AQM).
18.3 Features and Hardware Interface Figure 89. AQM Hardware Block (the AHB Queue Manager comprises the queue buffer SRAM, queue control logic, AHB slave configuration/status registers, and the flag bus/interrupt interface between the NPEs and the Intel XScale® core). The IxQMgr provides a low-level interface for configuring the AQM, which contains the physical block of static RAM where all the data structures (queues) for the IxQMgr reside.
— For queues 32-63, the notification source is the assertion or de-assertion of the Nearly Empty flag and cannot be changed. • Performs queue-status query. — For queues 0-31, the status consists of the flags Nearly Empty, Empty, Nearly Full, Full, Underflow, and Overflow. — For queues 32-63, the status consists of the flags Nearly Empty and Full. • • • • 18.4 Determines the number of full entries in a queue.
18.7 Configuration Values Table 48 details the attributes of a queue that can be configured and the possible values that these attributes can have (word = 32 bits). Table 48. AQM Configuration Attributes — Attribute: the maximum number of words that the queue can contain, equal to the number of entries x queue entry size (in words); Values: 16, 32, 64, or 128 words. Attribute: the number of words in a queue entry.
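A configuration chosen from Table 48 can be sanity-checked before use. The helper below is hypothetical, not part of the IxQMgr API; it only encodes the constraint from the table that the total queue size must come out to 16, 32, 64, or 128 words:

```c
#include <stdbool.h>

/* Check that a queue configuration satisfies Table 48: the total
 * queue size (number of entries x entry size, in 32-bit words) must
 * be one of the four supported sizes. */
bool aqm_config_valid(unsigned numEntries, unsigned entrySizeWords)
{
    unsigned totalWords = numEntries * entrySizeWords;
    return totalWords == 16 || totalWords == 32 ||
           totalWords == 64 || totalWords == 128;
}
```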
may be called from a client polling mechanism, which calls the dispatcher to read the queues’ status at regular intervals. In the first example, the dispatcher is called in the context of an interrupt and gets invoked when the queue status changes.
Intel® IXP400 Software Access-Layer Components: Queue Manager (IxQMgr) API mechanism, although the choice of implementation would depend upon the OS, the application, and the nature of the traffic.
Intel® IXP400 Software Access-Layer Components: Queue Manager (IxQMgr) API 5. The ISR invokes the dispatcher. Note: In the context of an interrupt, the dispatcher can also be invoked through a timer-based mechanism. 6. The IxQMgr reads the status flag. 7. The IxQMgr access-layer component calls the registered notification. 8. The client gets the buffer pointer on the Rx queue from the access-layer through the callback.
Intel® IXP400 Software Access-Layer Components: Queue Manager (IxQMgr) API 2. When the NPE receives a packet, it updates the Rx queue with location of the buffer. 3. When the watermark is crossed the status flag gets updated corresponding to that queue. 4. The polling thread calls the dispatcher. 5. The dispatcher loop gets the status of the updated flag and resets it. 6. The dispatcher invokes the registered access component. 7.
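The polled-dispatch sequence above can be modeled in a few lines. This is a minimal sketch with illustrative names, not the real IxQMgr internals:

```c
#include <stdbool.h>

#define NUM_QUEUES 4

typedef void (*QNotifyCb)(int queueId);

static bool statusFlag[NUM_QUEUES];     /* set on watermark crossing */
static QNotifyCb notifyCb[NUM_QUEUES];  /* registered notification callbacks */

void qmgr_notification_register(int q, QNotifyCb cb) { notifyCb[q] = cb; }

/* Step 3: an NPE enqueue crosses the watermark and the status flag
 * corresponding to that queue gets set. */
void qmgr_hw_watermark_crossed(int q) { statusFlag[q] = true; }

/* Steps 4-6: one pass of the dispatcher, as called from the polling
 * thread -- read and reset each set flag, then invoke the callback. */
void qmgr_dispatcher(void)
{
    for (int q = 0; q < NUM_QUEUES; q++) {
        if (statusFlag[q]) {
            statusFlag[q] = false;
            if (notifyCb[q])
                notifyCb[q](q);
        }
    }
}

/* Example callback that records which queue was serviced. */
static int lastNotified = -1;
static void record_cb(int queueId) { lastNotified = queueId; }
```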
Intel® IXP400 Software Access-Layer Components: Queue Manager (IxQMgr) API To use livelock prevention, only one queue can be set as type periodic. One or more queues may be set as type sporadic using the ixQMgrCallbackTypeSet() function. By default, all the other queues that are not set to be in either a periodic or a sporadic mode are set in IX_QMGR_TYPE_REALTIME_OTHER mode.
Intel® IXP400 Software Access-Layer Components: Queue Manager (IxQMgr) API • Set the callback type for the HSS queue to periodic and the Eth Rx queue to sporadic using the ixQMgrCallbackTypeSet() function. Note: All other queues (Tx queues, RxFree queues and TxDone queues) will have the callback type set to the default callback type of IX_QMGR_TYPE_REALTIME_OTHER. • Start the dispatcher by calling the ixQMgrDispatcherLoop function.
Intel® IXP400 Software Access-Layer Components: Synchronous Serial Port (IxSspAcc) API 19 This chapter describes the Intel® IXP400 Software v2.0’s “SSP Serial Port (IxSspAcc) API” access-layer component. 19.1 What’s New This is a new component for software release 2.0. 19.2 Introduction A Synchronous Serial Port is included in the Intel® IXP46X Product Line of Network Processors. The IxSspAcc API is provided to allow the configuration of the various registers related to the SSP hardware.
Intel® IXP400 Software Access-Layer Components: Synchronous Serial Port (IxSspAcc) API • select SPI SCLK phase – phase of SCLK starts with one inactive cycle and ends with ½ inactive cycle or SCLK starts with ½ inactive cycle and ends with one inactive cycle (only used in SPI format) • select Microwire control word format – 8 or 16 bits • enable/disable the SSP serial port hardware This component also provides status and statistics for: • • • • • • • 19.3.
Intel® IXP400 Software Access-Layer Components: Synchronous Serial Port (IxSspAcc) API 19.4 IxSspAcc API Usage Models 19.4.1 Initialization and General Data Model This description assumes a single client model where there is a single application-level program configuring the SSP interface and initiating I/O operations. The client must first define the initial configuration of the SSP port by storing a number of values in the IxSspInitVars structure.
Intel® IXP400 Software Access-Layer Components: Synchronous Serial Port (IxSspAcc) API 4. For an overrun: a. Interrupt is triggered due to an overrun of the Rx FIFO. b. Rx FIFO Overrun handler/callback is called. c. Rx FIFO Overrun handler/callback extracts data from the Rx FIFO to prevent the overrun from triggering again. d. (processes data extracted and perform necessary steps to recover data loss if possible) e. Rx FIFO Overrun handler/callback returns. f.
Intel® IXP400 Software Access-Layer Components: Synchronous Serial Port (IxSspAcc) API Figure 93.
Intel® IXP400 Software Access-Layer Components: Synchronous Serial Port (IxSspAcc) API 19.4.3 Polling Mode The sequence flow for a client application using this component in polling mode is described below. Refer to Figure 94. 1. Initialize the SSP with interrupts disabled. 2. For transmit operations: a. Check if the Tx FIFO has hit or is below its threshold. b. If it has, then insert data into the Tx FIFO. 3. For receive operations: a. Check if the Rx FIFO has hit or exceeded its threshold. b.
Intel® IXP400 Software Access-Layer Components: Synchronous Serial Port (IxSspAcc) API Figure 94.
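The polling-mode sequence can be sketched as follows. The FIFO model, depth, and thresholds below are illustrative assumptions, not the actual SSP hardware values:

```c
#include <stdbool.h>

#define FIFO_DEPTH   16
#define TX_THRESHOLD  8
#define RX_THRESHOLD  8

typedef struct { int data[FIFO_DEPTH]; int count; } Fifo;

/* Transmit poll: insert a word only when the Tx FIFO has hit or is
 * below its threshold; otherwise report "try later". */
bool ssp_tx_poll(Fifo *tx, int word)
{
    if (tx->count > TX_THRESHOLD)
        return false;
    tx->data[tx->count++] = word;
    return true;
}

/* Receive poll: extract data only when the Rx FIFO has hit or
 * exceeded its threshold (leftover compaction omitted in this sketch). */
int ssp_rx_poll(Fifo *rx, int *out, int max)
{
    int n = 0;
    if (rx->count < RX_THRESHOLD)
        return 0;
    while (n < max && n < rx->count) {
        out[n] = rx->data[n];
        n++;
    }
    rx->count -= n;
    return n;
}
```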
Intel® IXP400 Software Access-Layer Components: Time Sync (IxTimeSyncAcc) API 20 This chapter describes the Intel® IXP400 Software v2.0’s “Time Sync (IxTimeSyncAcc) API” access-layer component. The IxTimeSyncAcc access-layer component enables a client application, which implements the IEEE 1588* Precision Time Protocol (PTP) to configure the IEEE 1588 Hardware Assist block on the Intel® IXP46X Product Line of Network Processors. 20.1 What’s New This is a new component for software release 2.0. 20.
Figure 95. IxTimeSyncAcc Component Dependencies (a client application implementing the 1588 protocol invokes IxTimeSyncAcc through the access-layer interface; IxTimeSyncAcc depends on IxFeatureCtrl and IxOSAL, and drives the IEEE 1588 block through the hardware interface; the figure distinguishes the client invocation path, the interrupt invocation path, and the interfaces with dependent components). 20.2.
synchronization, and internal processing delays. The slave element/node, after detecting the Sync or Follow_Up message, will begin the process of synchronizing its system clock to the master clock timestamp. The slave may also initiate a synchronization request by sending a Delay_Req message with its local system time as the timestamp to the master.
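Although the hardware-assist block only captures the timestamps, the arithmetic a client performs with them is the standard PTP computation over the four timestamps of a Sync / Delay_Req exchange. A minimal sketch, assuming signed 64-bit nanosecond timestamps:

```c
#include <stdint.h>

/* Classic PTP offset/one-way-delay computation:
 *   t1 = master sends Sync, t2 = slave receives Sync,
 *   t3 = slave sends Delay_Req, t4 = master receives Delay_Req.
 * Assumes a symmetric path delay, as the protocol does. */
void ptp_offset_delay(int64_t t1, int64_t t2, int64_t t3, int64_t t4,
                      int64_t *offset, int64_t *delay)
{
    *offset = ((t2 - t1) - (t4 - t3)) / 2;  /* slave clock minus master */
    *delay  = ((t2 - t1) + (t4 - t3)) / 2;  /* one-way path delay */
}
```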
Intel® IXP400 Software Access-Layer Components: Time Sync (IxTimeSyncAcc) API Figure 96 shows the location of the IEEE 1588 Hardware Assist block and its main interconnects to other components in the IXP46X network processors. Figure 96.
Intel® IXP400 Software Access-Layer Components: Time Sync (IxTimeSyncAcc) API The IEEE 1588 Hardware Assist block can also be set explicitly to handle timestamping for all messages detected on a channel, as determined by the detection of an Ethernet Start of Frame Delimiter (SFD). In this scenario, the snapshot registers containing the timestamps will not be locked. This usage model is useful for network traffic analysis applications.
Intel® IXP400 Software Access-Layer Components: Time Sync (IxTimeSyncAcc) API VLAN-tagged Ethernet frames include an additional four bytes prior to the beginning of the original Ethernet Type/Length field. The IP header immediately follows the Type/Length field. VLAN-tagged Ethernet frames can be identified by the value of 0x8100 at offset 12 and 13 of the Ethernet frame. If the IEEE 1588 Hardware Assist block identifies a value of 0x8100 (i.e.
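The offset-12/13 check can be expressed directly. The helper names below are illustrative, not part of IxTimeSyncAcc:

```c
#include <stdint.h>
#include <stddef.h>

/* A frame is VLAN-tagged when the two bytes at offsets 12-13 hold
 * 0x8100; the tag adds four bytes before the original Type/Length
 * field. */
int frame_is_vlan_tagged(const uint8_t *frame, size_t len)
{
    if (len < 14)
        return 0;
    return frame[12] == 0x81 && frame[13] == 0x00;
}

/* Offset of the Type/Length field, accounting for the 4-byte tag. */
size_t frame_type_offset(const uint8_t *frame, size_t len)
{
    return frame_is_vlan_tagged(frame, len) ? 16 : 12;
}
```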
Intel® IXP400 Software Access-Layer Components: Time Sync (IxTimeSyncAcc) API • Configure the PTP Ports (NPE channels) to operate in master or slave mode • Poll for Sent Timestamp of the Sync and Delay_Req messages in both master and slave modes • Poll for Receive Timestamp of the Delay_Req and Sync messages in both master and slave modes • Poll for Timestamp of all messages Sent or Received irrespective of master or slave mode • Set and retrieve System Time • Set and retrieve Frequency Scaling Value, bas
Intel® IXP400 Software Access-Layer Components: Time Sync (IxTimeSyncAcc) API • Internal errors IxTimeSyncAcc returns IX_SUCCESS when errors are not observed. The client application is expected to handle these errors/values appropriately. 20.4 IxTimeSyncAcc API Usage Scenarios The following scenarios present usage examples of the interface by a client application. They are each independent but, depending on the needs of the client application, could be intermixed. 20.4.
Intel® IXP400 Software Access-Layer Components: Time Sync (IxTimeSyncAcc) API 2. Auxiliary Master Timestamp 3. Auxiliary Slave Timestamp In order to avoid repeated invocation of the Interrupt Service Routing for the “target time reached” condition, the client application callback routine will need to either disable the interrupt handling, invoke the API to set the target time to a different value, or change the system timer value.
Figure 99. Polling for Auxiliary Snapshot Values (the client sets the system time with ixTimeSyncAccSystemTimeSet and the frequency scaling factor with ixTimeSyncAccTickRateSet; when the GPIO input asserts, the IEEE 1588 Hardware Assist block captures the auxiliary timestamp and sets the event flag, and the client retrieves the timestamp of the desired mode with ixTimeSyncAccAuxTimePoll(auxMode, *auxTime)).
Intel® IXP400 Software Access-Layer Components: UART-Access (IxUARTAcc) API 21 This chapter describes the Intel® IXP400 Software v2.0’s “UART-Access API” access-layer component. 21.1 What’s New There are no changes or enhancements to this component in software release 2.0. 21.2 Overview The UARTs of the Intel® IXP4XX Product Line of Network Processors and IXC1100 Control Plane Processor have been modeled on the industry standard 16550 UART.
• UART IOCTL • Baud rate set/get • Parity • Number of stop bits • Character length: 5, 6, 7, or 8 bits • Enable/disable hardware flow control for Clear to Send (CTS) and Request to Send (RTS) signals 21.4 UART / OS Dependencies The UART device driver is an API that can be used to transmit/receive data from either of the two UART ports on the processor.
21.5 Dependencies Figure 100. UART Services Models (two models are shown: a standard RTOS model, in which the user application reaches the UART registers/FIFOs through the OS I/O services API and an OS serial driver, and the IxUartAcc service model, in which the user application and IXP400 access-layer components use IxUartAcc directly).
Intel® IXP400 Software Access-Layer Components: USB Access (ixUSB) API 22 This chapter describes the Intel® IXP400 Software v2.0’s “USB Access API” access-layer component. 22.1 What’s New There are no changes or enhancements to this component in software release 2.0. 22.2 Overview The Intel® IXP4XX Product Line of Network Processors and IXC1100 Control Plane Processors’ USB hardware components comply with the 1.1 version of the Universal Serial Bus (USB) standard. 22.
Intel® IXP400 Software Access-Layer Components: USB Access (ixUSB) API Endpoint 0, by default, is used only to communicate control transactions to configure the UDC after it is reset or hooked up (physically connected to an active USB host or hub). Endpoint 0’s responsibilities include: • Connection • Address assignment • Endpoint configuration • Bus enumeration • Disconnect The USB protocol uses differential signaling between the two pins for half-duplex data transmission. A 1.
Intel® IXP400 Software Access-Layer Components: USB Access (ixUSB) API Data packets follow token packets, and are used to transmit data between the host and UDC. There are two types of data packets as specified by the PID: DATA0 and DATA1. These two types are used to provide a mechanism to guarantee data sequence synchronization between the transmitter and receiver across multiple transactions. During the handshake phase, both communicate and agree which data token type to transmit first.
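The DATA0/DATA1 sequence-synchronization mechanism can be modeled minimally. The names below are illustrative; in practice the toggling is handled by the UDC hardware:

```c
/* Each endpoint tracks which data PID it expects next. A packet is
 * accepted only when its PID matches, and the toggle advances on a
 * successful transaction; a retransmission after a lost ACK arrives
 * with the old PID and is recognized (and ignored) by the mismatch. */
typedef enum { DATA0 = 0, DATA1 = 1 } DataPid;

typedef struct { DataPid expected; } EndpointState;

/* Returns 1 and flips the toggle if the packet is in sequence,
 * 0 (toggle unchanged) if it is a duplicate. */
int usb_accept_data(EndpointState *ep, DataPid pid)
{
    if (pid != ep->expected)
        return 0;
    ep->expected = (ep->expected == DATA0) ? DATA1 : DATA0;
    return 1;
}
```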
Intel® IXP400 Software Access-Layer Components: USB Access (ixUSB) API The eight possible types of bulk transactions based on data direction, error, and stall conditions are shown in Table 54. (Packets sent by the UDC to the host are highlighted in boldface type. Packets sent by the host to the UDC are not boldfaced.) Table 54.
Intel® IXP400 Software Access-Layer Components: USB Access (ixUSB) API Table 56. Control Transaction Formats, Set-Up Stage Action Token Packet Data Packet Handshake Packet UDC successfully received control from host Setup DATA0 ACK UDC temporarily unable to receive data Setup DATA0 NAK UDC endpoint needs host intervention Setup DATA0 STALL UDC detected PID, CRC, or bit stuff error Setup DATA0 None NOTE: Packets from UDC to host are boldface.
Intel® IXP400 Software Access-Layer Components: USB Access (ixUSB) API 22.4 ixUSB API Interfaces Table 59. API interfaces Available for Access Layer API Description ixUSBDriverInit Initialize driver and USB Device Controller. ixUSBDeviceEnable Enable or disable the device. ixUSBEndpointStall Enable or disable endpoint stall. ixUSBEndpointClear Free all Rx/Tx buffers associated with an endpoint. ixUSBSignalResume Trigger signal resuming on the bus.
Intel® IXP400 Software Access-Layer Components: USB Access (ixUSB) API • Data transfer type — Standard — Class — Vendor • Data recipient — Device — Interface — Endpoint — Other • • • • Number of bytes to transfer Index or offset Value: Used to pass a variable-sized data parameter Device request The UDC decodes most commands with no intervention required by the ixUSB client. Other setup requests occur through the setup callback.
Intel® IXP400 Software Access-Layer Components: USB Access (ixUSB) API Table 60. Host-Device Request Summary (Sheet 2 of 2) Request Name SET_DESCRIPTOR Sets existing descriptors or adds new descriptors. Existing descriptors include: † • Device • Configuration • Interface • Endpoint GET_DESCRIPTOR Returns the specified descriptor, if it exists. • String SET_INTERFACE Selects an alternate setting for the UDC’s interface.
Intel® IXP400 Software Access-Layer Components: USB Access (ixUSB) API 22.4.1.2 Frame Synchronization The SYNCH_FRAME request is used by isochronous endpoints that use implicit-pattern synchronization. The isochronous endpoints may need to track frame numbers in order to maintain synchronization. Isochronous-endpoint transactions may vary in size, according to a specific repeating pattern. The host and endpoint must agree on which frame begins the repeating pattern.
Figure 103. STALL on OUT Transactions (the device answers the host’s OUT/Data bus request with a STALL handshake). The second case of a STALL handshake is known as a “protocol stall” and is unique to control pipes. Protocol stall differs from functional stall in meaning and duration. A protocol STALL is returned during the Data or Status stage of a control transfer, and the STALL condition terminates at the beginning of the next control transfer (Setup).
Intel® IXP400 Software Access-Layer Components: USB Access (ixUSB) API Table 61.
22.5 USB Data Flow The USB device is a memory-mapped device on the processor’s peripheral bus. It will not interact directly with the NPEs. Any data path between USB and other components must be performed via the Intel XScale core. 22.6 USB Dependencies The USB device driver is a self-contained component with no interactions with other data components. Figure 104 shows the dependencies for this USB component. Figure 104.
23 Codelets This chapter describes the Intel® IXP400 Software v2.0 codelets. 23.1 What’s New The following changes and enhancements were made to the codelets in software release 2.0: • Two new codelets have been added: one demonstrating IxTimeSyncAcc, the other IxParityENAcc. 23.2 Overview The codelets are example code that utilize the access-layer components and operating system abstraction layers discussed in the preceding chapters.
Intel® IXP400 Software Codelets 23.4 Crypto Access Codelet (IxCryptoAccCodelet) This codelet demonstrates how to use the IxCrypto access-layer component and the underlying security features in the Intel® IXP4XX product line and IXC1100 control plane processors. IxCryptoAccCodelet runs through the scenarios of initializing the NPEs and Queue Manager, context registration, and performing a variety of encryption (3DES, AES, ARC4), decryption, and authentication (SHA1, MD5) operations.
Intel® IXP400 Software Codelets — Configuring Port-1 to automatically transmit frames and Port-2 to receive frames. Frames generated and transmitted in Port-1 are looped back into Port-2 by using cross-over cable. — Configuring and performing a software loopback on each of the two Ethernet ports. — Configuring both ports to act as a bridge so that frames received on one port are retransmitted on the other. • Ethernet management services: — Adding and removing static/dynamic entries.
Intel® IXP400 Software Codelets 23.10 Performance Profiling Codelet (IxPerfProfAccCodelet) IxPerfProfAccCodelet is a useful utility that demonstrates how to access performance related data provided by IxPerfProfAcc. The codelet provides an interface to view north, south, and SDRAM bus activity, event counting and idle cycles from the Intel XScale core PMU and other performance attributes of the processor. Note: 23.
24 Operating System Abstraction Layer (OSAL) 24.1 What’s New There are no changes or enhancements to this component in software release 2.0. 24.2 Overview An Operating System Services Abstraction Layer (OSAL) is provided as part of the Intel® IXP400 Software v2.0 architecture. Figure 105 shows the OSAL architecture. The OSAL provides a very thin set of abstracted operating-system services. All other access-layer components abstract their OS dependencies to this layer.
Figure 105. OSAL Architecture (the OS-independent component comprises the core, buffer management, I/O memory and endianness, and platform-specific extensions; the OS-dependent component provides the OS-specific core for Linux* and VxWorks*, buffer management, I/O memory and endianness translation (OSBUF<->IXP_BUF), IXP400 backward compatibility for v1.4 users, and platform-specific extensions).
24.3 OS-Independent Core Module
As shown in Figure 105, the OS-independent component includes all the core functionality such as buffer management, platform- and module-specific OS-independent implementations, I/O memory map function implementations, and OSAL core services implementations. The Buffer Management module defines a memory buffer structure and functions for creating and managing buffer pools.
24.4.1 Backward Compatibility Module
The OSAL layer was developed during IXP400 software v1.5 development and provides backward compatibility with IXP400 software releases prior to v1.5. To minimize code changes to the current IXP400 software code base, the OSAL layer provides support for the major ossl/osServices APIs used in v1.4. Users are strongly encouraged to use the OSAL APIs for compatibility with future versions.
configuration header file. The OSAL configuration header file (IxOsalConfig.h) contains user-editable fields for module inclusion, and it automatically includes the module-specific header files for optional modules, such as buffer management (IxOsalBufferMgt.h) and I/O memory mapping and endianness support (IxOsalIoMem.h). Platform configuration is done in IxOsalConfig.h by including the main platform header file (IxOsalOem.h).
Figure 106. OSAL Directory Structure
[Figure: directory tree rooted at IxOsal.h (the top OSAL include file), which pulls in IxOsalConfig.h, IxOsalTypes.h, etc.]
24.6 OSAL Modules and Related Interfaces
This section contains a summary of the types, symbols, and public functions declared by each OSAL module.
Note: The items shaded in light gray are subject to special platform package support, as described in the API notes for these items and the platform package requirements of each module.
24.6.1
[Table 62 (partial): types, symbols, and functions covering interrupts, memory, threads, and IPC.]
[Table 62, continued: thread synchronization, time, logging, and timer services.]
24.6.2 Buffer Management Module
This module defines a memory buffer structure and functions for creating and managing buffer pools. Table 63 provides an overview of the buffer management module.
[Table 63: OSAL Buffer Management Interface — types and functions.]
The OSAL layer also provides APIs for dealing with the following situations:
• Transparently accessing I/O-memory-mapped hardware in different endian modes
• Transparently accessing SDRAM memory between any endian type and big endian, for the purpose of sharing data with big-endian auxiliary processing engines
The OSAL layer supports the following endianness modes:
• Big endian
• Little endian
• Little endian address coherent where — Core i
[Table 64: OSAL I/O Memory and Endianness Interface (Sheet 2 of 2) — I/O read/write and mixed-endian-system support.]
• IX_OSAL_IO_ENDIANESS This selects the I/O endianness type required by the component. This can be:
— Big endian (IX_OSAL_BE)
— Little endian (IX_OSAL_LE). In this mode users cannot access IoMem macros such as IX_OSAL_READ_LONG, IX_OSAL_WRITE_LONG, etc., and must declare coherency mode before using them; see Section 24.8.
    {
        IX_STATIC_MAP,              /* type */
        IXP123_PCI_CFG_BASE_PHYS,   /* physicalAddress */
        IXP123_PCI_CFG_REGION_SIZE, /* size */
        IXP123_PCI_CFG_BASE_VIRT,   /* virtualAddress */
        NULL,                       /* mapFunction */
        NULL,                       /* unmapFunction */
        0,                          /* refCount */
        IX_OSAL_BE,                 /* coherency */
        "pciConfig"                 /* name */
    },
#elif defined IX_OSAL_VXWORKS
    /* Global 1:1 big endian and little */
    {
        IX_STATIC_MAP,              /* type */
        0x00000000,                 /* physicalAddress */
        0xFFFFFFFF,                 /* size */
        0x00000000,                 /* virtualAddress */
        NULL,                       /* mapFunction */
        NULL,                       /* unmapFunction */
        0,                          /* refCount */
        IX_OSAL_BE | IX_OSAL_LE
Intel® IXP400 Software ADSL Driver 25
This chapter describes the ADSL driver for the Intel® IXDP425 / IXCDP1100 Development Platform and Intel® IXDP465 Development Platform. The driver supports the STMicroelectronics* (formerly Alcatel*) MTK-20150 ADSL chipset in the ADSL Termination Unit-Remote (ATU-R) mode of operation. The ADSL driver is provided as a separate package along with the Intel® IXP400 Software v2.0.
25.1 What's New
There are no changes or enhancements to this component in software release 2.0.
25.3.1 Controlling STMicroelectronics* ADSL Modem Chipset Through CTRL-E
The STMicroelectronics ADSL chipset CTRL-E interface is memory-mapped into the processor's expansion bus address space. Figure 107 shows how the chipset is connected to the processor.
Figure 107.
Figure 108. Example of ADSL Line Open Call Sequence
[Figure: the ADSL client calls (1) ixAdslLineStateChangeCallbackRegister(lineNum, lineChangeCallbackFn) and then (2) ixAdslLineOpen(lineNum, lineType, phyType) on the ADSL driver, which communicates with the ADSL chipset.]
Step 1 of Figure 108 is only required if the client application wants to be notified when a line state change occurs. Step 2 of Figure 108 is called by the client application to establish an ATU-R ADSL connection with another modem.
25.6 Limitations and Constraints
• The driver only supports the ATU-R mode of operation.
• The driver can operate in single PHY mode only.
Intel® IXP400 Software I2C Driver (IxI2cDrv) 26 This chapter describes the I2C Driver provided with Intel® IXP400 Software v2.0, which is for use with the Intel® IXP46X Product Line of Network Processors. 26.1 What’s New This is a new component for software release 2.0. 26.2 Introduction The IXP46X network processors include an I2C hardware interface.
• Enable/disable the driving of the SCL line
• I2C slave address of the processor
The I2C driver features the following hardware and bus status items:
• Master transfer error
• Bus error detected
• Slave address detected
• General call address detected
• IDBR receive full
• IDBR transmit empty
• Arbitration loss detected
• Slave STOP detected
• I2C bus busy
• I2C unit busy
• Received/sent status for ACK/NACK
• Read/write mode (master-transmit/slave-receive o
Figure 109. I2C Driver Dependencies
[Figure: a client application and OS-specific I2C adapter modules (Linux*, VxWorks*) use the access-layer interface of the I2C driver (algorithm module), which depends on IxOSAL and on the I2C hardware interface.]
26.3.
Once an arbitration loss error is detected, the unit stops transmitting. The client needs to call the transfer again; the I2C status register is then checked to determine the busy status of the I2C bus. If the bus is not busy, the transfer that occurred before the bus arbitration loss error is resubmitted.
26.3.3.2 Bus Error
This error occurs when the I2C unit, as a master transmitter, does not receive an ACK in response to transmission.
Slave-Interrupt Mode
When the processor is acting in I2C slave mode or responding to general calls in interrupt mode, the client callbacks for transmit and receive are responsible for providing a buffer used to interface with the I2C Data Buffer Register (IDBR), using the ixI2cDrvSlaveOrGenCallBufReplenish() function. Examples of slave-interrupt-mode operation are provided in “Example Sequence Flows for Slave Mode” on page 336.
26.4.2 Example Sequence Flows for Slave Mode
[Figures 110 through 113: example sequence flows for I2C slave-mode operation.]
Intel® IXP400 Software Endianness in Intel® IXP400 Software 27
27.1 Overview
The Intel® IXP4XX Product Line of Network Processors and IXC1100 Control Plane Processor support Little-Endian (LE) and Big-Endian (BE) operation. This chapter discusses IXP400 software support for LE and BE operation. This chapter is intended for software engineers developing software or board-support packages (BSPs) that rely on endianness support in the processor.
Figure 114. 32-Bit Formats
[Figure: in 32-bit little-endian memory, Byte 0 — the least-significant byte — occupies the lowest address (bytes 3, 2, 1, 0 from bit 31 down to bit 0); in 32-bit big-endian memory, Byte 0 — the most-significant byte — occupies the lowest address (bytes 0, 1, 2, 3).]
It should also be noted that endianness only applies when byte and half-word accesses are made to memory.
Unfortunately, the answer is NO, even with help from the most sophisticated hardware.
27.3 Software Considerations and Implications
Much literature is available explaining the software dependency on underlying hardware endianness.
Little: 0x8
Big: 0x0
The following provides another example of endianness causing the code to be interpreted differently on BE versus LE machines:
    int myString[2] = { 0x61626364, 0 };   /* hex values for ascii */
    printf("%s\n", (char *)&myString);
Depending on the endianness of the processor the code is executing on, the result is:
Little: “dcba”
Big: “abcd”
27.3.1.
We always assume that the byte order value will be set to either Big-Endian or Little-Endian in a define value.
27.3.2 Best Practices in Coding of Endian-Independence
Avoid
• Code that assumes the ordering of data types in memory.
• Casting between different-sized types.
Do
• Perform any endian-sensitive data accesses in macros. If the machine is Big-Endian, the macros will not have a performance hit.
#if defined(BIG_ENDIAN)
#define htons(A) (A)
#define htonl(A) (A)
#define ntohs(A) (A)
#define ntohl(A) (A)
#elif defined(LITTLE_ENDIAN)
/* the value of A will be byte swapped */
#define htons(A) ((((A) & 0xff00) >> 8) | \
                  (((A) & 0x00ff) << 8))
#define htonl(A) ((((A) & 0xff000000) >> 24) | \
                  (((A) & 0x00ff0000) >> 8)  | \
                  (((A) & 0x0000ff00) << 8)  | \
                  (((A) & 0x000000ff) << 24))
#define ntohs htons
#define ntohl htonl
#else
#error "One of BIG_ENDIAN or LITTLE_ENDIAN must be #defined."
This chapter provides an overview of the IXP4XX product line and IXC1100 control plane processors' capabilities related to endianness. For specific detail on the various capabilities and hardware settings for the processors, refer to that processor's specific Datasheet and Developer's Manual. Figure 115 details the endianness of the different blocks of the IXP4XX processors when running a Big-Endian software release.
Figure 115.
27.4.1 Supporting Little-Endian Mode
The following hardware items can be configured by software:
• Intel XScale core running under Little- or Big-Endian mode.
• The byte-swapping hardware in the PCI controller turned on or off.
The following hardware items cannot be changed by software or off-chip hardware (i.e., board design):
• AHB bus is running under Big-Endian mode.
When the Intel XScale core is in Little-Endian Address Coherent mode, words written by the Intel XScale core are read back by the NPE in the same format. However, byte accesses appear reversed, and half-word accesses return the other half-word of the word. The benefit of this mode is that if the Intel XScale core writes a 32-bit address to memory, the NPE can read that address correctly without having to do any conversion.
Figure 116.
    MCR p15,0,a1,c1,c0,0
    ENDM
The application code built to run on the system must be compiled to match the endianness. Some compilers generate code in Little-Endian mode by default. To produce object code targeted for a Big-Endian system, the compiler must be instructed to work in Big-Endian mode. For example, the -mbig-endian switch must be specified for GNU* CC, since its default is Little-Endian operation.
27.4.3.5 PCI Bus Swap
The PCI controller has a byte-lane-swapping feature. The “swap” is controlled via the PCI_CSR register's PDS and ADS bits within the PCI controller. The swap feature needs to be enabled if the Intel XScale core is in Big-Endian mode or Data Coherent Little-Endian mode. For further details, refer to the processor's specific Datasheet and Developer's Manual.
Note: 27.4.3.
Table 66.
When adding support for Little-Endian, there were two factors taken into account in deciding where to use Address Coherency and Data Coherency Little-Endian modes:
1. The initial IXP400 software releases and Board Support Packages were all Big-Endian.
2. IXP400 software support for Little-Endian was required to operate on all the supported Little-Endian operating systems.
— Performance Monitoring Unit
— Interrupt Controller
— GPIO Controller
— Timer Block
— SSP, I2C and IEEE 1588 units on the IXP46X product line.
• Blocks controlled by IXP400 software:
— NPE Message Handler and Execution control registers
— Ethernet MAC control
— Universal Serial Bus (USB)
The APB peripherals are placed in Address Coherent mode to nullify changes from the existing Big-Endian BSP.
27.5.
27.5.3.2 NPE Downloader — IxNpeDl
This component utilizes the NPEs' Message Handler and Execution Control registers. All registers are word-wide (32 bits). Such registers are best set up using Little-Endian Address Coherent mode. However, this would cause the component to have differing behavior between some operating systems. As a result, the decision was made to make the NPE Execution Control registers Data Coherent.
27.5.3.4.1 Data Plane
The data plane interface for IxEthAcc uses the IxQMgr component to send/receive messages between the Ethernet access layer and the Ethernet NPEs. All messages transferred are word-wide (32-bit) messages. These messages are modified by the underlying access layer because the AHB Queue Manager hardware FIFOs are mapped using Data Coherent Little-Endian (as described in “Queue Manager — IxQMgr” on page 355).
Figure 117. Ethernet Frame (Continued) (Big-Endian)
[Figure: word-by-word big-endian layout of an Ethernet frame carrying an IP packet — 802.3 destination and source MAC addresses, 802.3 type, Internet Protocol header fields (TTL, protocol, header checksum, source IP bytes src-ip[0] through src-ip[3], destination IP bytes dst-ip[0] through dst-ip[3]), followed by the UDP/TCP header.]
The IP stack typically has an alignment restriction on the IP packet.
Figure 118. One Half-Word-Aligned Ethernet Frame (Continued) (LE Address Coherent)
[Figure: the destination IP address bytes appear reversed (dst-ip[3], dst-ip[2], dst-ip[1], dst-ip[0]) ahead of the UDP/TCP header; the 802.3 destination and source MAC addresses, 802.3 type, and Internet Protocol header precede them.]
The code below shows the read-out format after a conversion macro is applied. Effectively, the header comes in as Big-Endian and is then output as Little-Endian.
Figure 119. Intel XScale® Core Read of IP Header (LE Data Coherent) (Continued)
[Figure: the 16-bit identification, flags/fragment offset, and header checksum fields read byte-swapped; the source and destination IP address bytes read in reverse order (src-ip[3] down to src-ip[0], dst-ip[3] down to dst-ip[0]), followed by the UDP/TCP header and the 802.3 destination and source MAC addresses.]
• IX_OSAL_MBUF word pointers must be swapped prior to submission to the NPE. (ixEthAccPortTxFrameSubmit())
Note: The IX_OSAL_MBUF chain is walked and all IX_OSAL_MBUFs in a chain are updated. (ixEthAccPortRxFreeReplenish())
• IX_OSAL_MBUF word pointers are swapped on reception from the NPE before calling:
— User functions registered via ixEthAccPortTxDoneCallbackRegister.
— User function registered via ixEthAccTxBufferDoneCallbackRegister.
\vxworks\include\platforms\ixp400\IxOsalOsIxp400CustomizedMappings.h. Further information on the VxWorks memory map is available in the VxWorks BSP documentation for the supported development platforms. Depending on their implementations, other operating systems may provide similar files/documents. The macros shown in “Intel® IXP400 Software Macros” on page 362 are provided for use in the IXP400 software components.
Control is transferred from the bootrom into VxWorks once it is downloaded via FTP. The MMU is disabled during this transition and, as such, all SDRAM is in Address Coherent mode. The SDRAM can only be converted to Data Coherent once the MMU is enabled. The MMU is enabled in the usrConfig code. The first opportunity to swap the SDRAM to Data Coherent is in the hardware init routine sysHwInit0().
    Enable Instr & Data cache.
    Enable Branch Target buffer.
    return
A similar implementation was required for execution in the VxWorks bootrom. The only caveat is that the SDRAM used to load the VxWorks image must be kept in Address Coherent mode, as execution control will be transferred to that image with the MMU disabled.
27.5.