Agilent PXT Wireless Communications Test Set User's Guide
Refer to the Cat 3 MCS/RB Table
in the E2E Setup and Benchmarking Guide section of this manual for the
maximum Cat 3 setup.
Faulty RF cable or connectors.
If all other attempts to resolve this issue have failed, try swapping the RF cables and connectors for new,
tested ones. RF cables sometimes become faulty and produce unexpected RF results.
Performance of E2E data is not as expected – high IP packet loss on high-end
bitrate tests
If the PXT IP statistics show no packet loss, and the bitrate on the PXT IP or DTCH screen is as expected, but the
E2E data throughput is much lower than expected, the chances are high that packet loss is occurring at the endpoints.
Using Iperf or a similar tool, run UDP E2E streams and check for UDP packet loss on the receive side of Iperf. Loss
here can be a sign that the UE host PC / PXT server or mobile UE Iperf endpoint is under-resourced. Iperf is known
to be CPU-hungry on the receiver side when latency is higher than typical LAN latency and the bitrate is nearing
Cat 3 limits.
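The UDP check described above can be run as follows (a sketch: the port, bitrate, and duration are example values, the server address matches the PXT server address used later in this section, and the options assume the classic iperf 2 syntax):

```shell
# PXT server side: run a UDP server, reporting statistics every second
# (port 5052 is an example)
iperf -s -u -i1 -p5052

# UE host client side: stream UDP toward the server at a rate near the
# Cat 3 limit (100 Mbps here is an assumption; use your target rate)
iperf -c 192.168.1.230 -u -b100m -i1 -t60 -p5052
```

Packet loss appears in the server-side "Lost/Total Datagrams" column. Loss here, combined with no loss on the PXT IP statistics, points at an under-resourced endpoint.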
Check the endpoint’s performance using a resource monitor and ensure CPU usage is not at 90% or more. If the
CPU is topping out, swap the UE host PC (if running a USB UE with a host) for a more powerful PC and retest.
Another check to perform with this condition is to verify that the PXT IP environment is a 1000BaseT / Gbps
router/switch setup. Running with a 100BaseT / 100 Mbps router at the PXT IP interface will result in packet loss
at high-end throughputs.
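On a Linux endpoint, the negotiated link speed can be checked quickly with ethtool (a sketch: `eth0` is an assumed interface name — substitute the name of the PXT-facing NIC):

```shell
# Show the negotiated speed of the PXT-facing NIC; anything below
# "Speed: 1000Mb/s" will bottleneck high-end Cat 3 throughput
ethtool eth0 | grep -i speed
```

On a Windows host, the negotiated speed is shown in the network adapter's Status dialog.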
Performance of E2E data is not as expected – TCP performance poor.
If the PXT IP statistics show no packet loss, and Iperf UDP checks at the maximum theoretical rate show good
performance with no packet loss, but a TCP benchmark shows poor bitrate performance on both the PXT and E2E
receive sides. For example, the expected rate might be 75-100 Mbps; this may be observed for a short period,
perhaps a couple of seconds, before performance drops seriously to a much lower rate.
First, check the PXT configuration to ensure the PXT PHY UL Resource Allocation is set to FIXED MAC Padding.
This improves the efficiency of the returned TCP ACKs, as there is no waiting for scheduling reports.
If FIXED MAC Padding is set and this does not resolve the poor TCP performance, another possibility is that the
test station's TCP stream bitrate limit has been reached (due to Windows TCP window sizes, buffer limits,
and/or latency). These limits can be overcome by running multiple TCP streams in parallel threads to maximize
TCP throughput.
Iperf offers this ability with the -P option. Example below.
PXT server (192.168.1.230) > iperf -s -i1 -p5052 -w20m
UE Host Client PC (192.168.1.51) > iperf -c 192.168.1.230 -i1 -w20m -t300 -p5052 -P4
This example starts four parallel streams in separate threads. The number of parallel threads to start depends
on the target bitrate you wish to achieve and on the environment. If running multiple streams helps, experiment
with the number to find the optimum number of parallel processes for your test. Typically, you might see one
stream achieve 10 Mbps; if you require 50 Mbps of bandwidth, you would need to start 5-6 parallel processes
using the -P option. This TCP behavior is more likely to be observed in TDD mode.
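The rule of thumb above can be expressed as a round-up division (a sketch: `target` and `per_stream` are assumed figures — measure the single-stream rate on your own setup first):

```shell
# Estimate the -P value: divide the required aggregate bitrate by the
# measured single-stream TCP rate, rounding up
target=50        # required aggregate bitrate, Mbps (assumption)
per_stream=10    # observed single-stream TCP rate, Mbps (assumption)
streams=$(( (target + per_stream - 1) / per_stream ))
echo "$streams"  # prints 5; start one or two extra streams for headroom
```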
If other third-party tools are used to drive the E2E data, for example FTP, simply increase the number of FTP
sessions in a similar fashion to achieve the same result.