Specifications
• A Packet Forwarding Engine memory leak occurs when multicast receivers are
connected in a bridge domain that has IGMP snooping enabled and IGMP messages
are exchanged between the multicast receivers and the Layer 3 IRB (integrated routing
and bridging) interface. A configuration sketch of this topology follows this list. PR1027473
• An aggregated Ethernet interface does not send the PPPoE client echo reply when the
ae interface bundle spans multiple FPCs. A configuration sketch of such a bundle follows
this list. PR1031218
• On MX Series 3D MPCs, when one Packet Forwarding Engine destination is congested,
the non-congested Packet Forwarding Engine destinations might experience unexpected
packet drops. PR1033071
• The sa-multicast load-sharing method configured under the [chassis <> fpc <> pic <>
forwarding-mode] hierarchy does not work on 100GE interfaces on MX Series FPCs. A
configuration sketch follows this list. PR1035180
• Micro BFD sessions do not come up if incoming untagged micro BFD packets contain
a source MAC address whose last 12 bits are zero (for example, an address ending in
0:00). A micro BFD configuration sketch follows this list. PR1035295
• The presence of a /8 prefix in two filter terms results in incorrect filter processing and
unexpected behavior. A firewall filter sketch follows this list. PR1042889
• When an IRB interface is configured with VRRP in a Layer 2 VPLS or bridge domain, in
corner cases the IRB interface might not respond to ARP requests targeting the IRB
subinterface IP address. A configuration sketch follows this list. PR1043571
• In a scaled subscriber management environment, the output of the CLI command "show
subscribers" and its variants might span many pages and must be terminated with
"Ctrl+c" or "q". Terminating the output this way did not close the back-end session
database (SDB) connection properly. Over time, this causes an inconsistency in which
the subscriber management infrastructure daemon (smid) fails to register and no new
subscribers can connect. PR1045820
• On T4000 routers with FPC Type 5-3D or on TXP-3D platforms, BFD sessions operating
at a 100-msec interval with the default multiplier of 3 might randomly flap after the
enhancements implemented through PR967013. BFD sessions with intervals lower or
higher than 100 msec are not exposed. The internal FPC thread that monitors the
high-speed fabric links had a run time longer than 100 msec. PR1047229
• By default, after 16x10GE MPC boards come up, about 75 percent of queues are allocated
to support rich queuing with the MQ chip. This allocation causes the MQ driver software
module to poll statistics, and the statistics polling causes a rise in CPU usage. PR1048947
Routing Protocols
• In a multicast environment, in rare conditions after graceful Routing Engine switchover
(GRES) is performed, the rpd process might crash because it receives a NULL incoming
logical interface. PR999085
• When the BGP add-path feature is enabled on a BGP route reflector (RR), and the RR
has a mix of add-path receive-enabled clients and add-path receive-disabled clients
(the default), a timing issue can cause the rpd process on the RR to crash when routes
are updated or withdrawn. A configuration sketch follows at the end of this list. PR1024813
• When a BGP peer goes down, the routes for this peer should be withdrawn. If an
enqueued BGP route update for this peer has not been sent out, issuing the CLI
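
For PR1024813, the following sketch shows a route reflector with one client group that has add-path receive enabled and one client group left at the default (add-path not configured); group names, cluster ID, and neighbor addresses are placeholders:

protocols {
    bgp {
        /* Clients with add-path receive enabled */
        group RR-CLIENTS-ADDPATH {
            type internal;
            cluster 10.255.0.1;
            family inet {
                unicast {
                    add-path {
                        receive;
                    }
                }
            }
            neighbor 10.255.0.11;
        }
        /* Clients left at the default (add-path receive disabled) */
        group RR-CLIENTS-DEFAULT {
            type internal;
            cluster 10.255.0.1;
            neighbor 10.255.0.12;
        }
    }
}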