auto_link_bandwidth_test
These tests validate the functionality of the BGP auto-generated link-bandwidth extended community feature, which is configured using OpenConfig. The primary goal is to verify that the DUT correctly performs Weighted Equal-Cost Multi-Path (wECMP) forwarding based on the dynamic bandwidth of its BGP paths.
The tests ensure that a DUT correctly generates and attaches the link-bandwidth extended community to routes learned from eBGP peers. They also validate that separate IPv4 and IPv6 traffic flows are load-balanced across these BGP paths in direct proportion to the bandwidths indicated. The plan covers LAG dynamics, the hold-down timer, transitivity configuration, and precedence rules for both peer-advertised communities and the local configuration hierarchy (neighbor vs. peer-group).
Testbed Type: atedut5.
The test requires a single DUT with at least 5 ports connected to an ATE/OTG with at least 5 ports. The topology consists of one traffic-source ATE port and two sets of BGP peer links, each configured as a LAG.
- ATE Port 1 <--> DUT Port 1: Used as the source for the test traffic.
- ATE Ports 2,3 <--> DUT Ports 2,3: Form LAG-1 for the eBGP session with Peer 1.
- ATE Ports 4,5 <--> DUT Ports 4,5: Form LAG-2 for the eBGP session with Peer 2.
- gNMI: Used for all configuration and telemetry operations.
- eBGP: Used for advertising routes from ATE Peers 1 and 2 to the DUT.
- LACP: Used for logically grouping the physical links between the DUT and Peers 1 and 2.
This feature enables wECMP by automatically attaching a BGP extended community to learned routes, with a value representing the real-time bandwidth of the ingress interface. The BGP decision process can then use this bandwidth value as a weight.
This test validates this functionality by creating a classic wECMP scenario:
- ATE Peer 1 (on LAG-1) and ATE Peer 2 (on LAG-2) advertise the same destination prefix to the DUT, creating two paths.
- The DUT, with auto-link-bandwidth enabled on the peer-group, attaches an LBW community to the routes learned from each peer.
- ATE Source (on Port 1) sends traffic to the destination prefix.
- The tests measure the traffic received by ATE Peer 1 and ATE Peer 2 to verify that the DUT load-balanced the traffic in proportion to the bandwidths specified in the LBW communities.
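The expected split follows directly from the per-path bandwidth values. A minimal sketch of the weight normalization (illustrative Python, not part of any test framework; the function name and bandwidth figures are assumptions):

```python
def expected_split(bandwidths):
    """Expected share of traffic per path, proportional to each path's
    link-bandwidth community value (wECMP weight normalization)."""
    total = sum(bandwidths)
    return [bw / total for bw in bandwidths]

# Two healthy, equal-capacity LAGs -> 50/50.
print(expected_split([200e9, 200e9]))   # [0.5, 0.5]

# One LAG-1 member down (half the bandwidth) -> 1:2 ratio (~33%/~67%).
print(expected_split([100e9, 200e9]))
```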
{
  "network-instances": {
    "network-instance": [
      {
        "name": "DEFAULT",
        "protocols": {
          "protocol": [
            {
              "identifier": "BGP",
              "name": "BGP",
              "bgp": {
                "global": {
                  "config": {
                    "as": 65501,
                    "router-id": "192.0.2.1"
                  },
                  "afi-safis": {
                    "afi-safi": [
                      {
                        "afi-safi-name": "IPV4_UNICAST",
                        "config": {
                          "afi-safi-name": "IPV4_UNICAST",
                          "enabled": true
                        },
                        "use-multiple-paths": {
                          "ebgp": {
                            "config": {
                              "maximum-paths": 2
                            }
                          }
                        }
                      },
                      {
                        "afi-safi-name": "IPV6_UNICAST",
                        "config": {
                          "afi-safi-name": "IPV6_UNICAST",
                          "enabled": true
                        },
                        "use-multiple-paths": {
                          "ebgp": {
                            "config": {
                              "maximum-paths": 2
                            }
                          }
                        }
                      }
                    ]
                  }
                },
                "peer-groups": {
                  "peer-group": [
                    {
                      "peer-group-name": "ATE-PEERS-V4",
                      "auto-link-bandwidth": {
                        "import": {
                          "config": {
                            "enabled": true,
                            "transitive": false
                          }
                        }
                      }
                    },
                    {
                      "peer-group-name": "ATE-PEERS-V6",
                      "auto-link-bandwidth": {
                        "import": {
                          "config": {
                            "enabled": true,
                            "transitive": false
                          }
                        }
                      }
                    }
                  ]
                },
                "neighbors": {
                  "neighbor": [
                    {
                      "neighbor-address": "192.0.2.2",
                      "config": {
                        "peer-group": "ATE-PEERS-V4"
                      }
                    },
                    {
                      "neighbor-address": "192.0.2.4",
                      "config": {
                        "peer-group": "ATE-PEERS-V4"
                      }
                    },
                    {
                      "neighbor-address": "2001:db8::2",
                      "config": {
                        "peer-group": "ATE-PEERS-V6"
                      }
                    },
                    {
                      "neighbor-address": "2001:db8::4",
                      "config": {
                        "peer-group": "ATE-PEERS-V6"
                      }
                    }
                  ]
                }
              }
            }
          ]
        }
      }
    ]
  }
}
This test plan covers the following OpenConfig paths under /network-instances/network-instance/protocols/protocol/bgp/:
peer-groups/peer-group/auto-link-bandwidth/import/config/enabled
peer-groups/peer-group/auto-link-bandwidth/import/config/hold-down-time
peer-groups/peer-group/auto-link-bandwidth/import/config/transitive
neighbors/neighbor/auto-link-bandwidth/import/config/enabled
neighbors/neighbor/auto-link-bandwidth/import/config/hold-down-time
neighbors/neighbor/auto-link-bandwidth/import/config/transitive
- Objective: Verify that with two healthy, equal-sized LAGs, traffic is balanced proportionally (50/50) for both IPv4 and IPv6.
- Procedure Details: Both LAGs consist of 2 member ports of the same speed. Both ATE Peer 1 and Peer 2 advertise prefix P1 (IPv4) and P2 (IPv6).
| Test Case | Procedure | Validation |
| --- | --- | --- |
| 7.51.1.1 - Equal Balancing (IPv4) | 1. Establish both eBGP sessions (IPv4 AF) over LAG-1 and LAG-2. 2. On DUT, configure a peer-group for IPv4 and enable auto-link-bandwidth. 3. ATE Peers 1 & 2 advertise IPv4 prefix P1. 4. ATE Source sends a baseline rate of IPv4 traffic to P1. | Data Plane (Primary): Verify that ATE Peer 1 and ATE Peer 2 each receive approximately 50% of the sent traffic (+/- 5% tolerance). Control Plane (Optional): If DUT streaming telemetry is enabled, query the ext-community-index for the route to P1 via each peer and verify the attached LBW communities are equal. |
| 7.51.1.2 - Equal Balancing (IPv6) | 1. Establish both eBGP sessions (IPv6 AF) over LAG-1 and LAG-2. 2. On DUT, configure a peer-group for IPv6 and enable auto-link-bandwidth. 3. ATE Peers 1 & 2 advertise IPv6 prefix P2. 4. ATE Source sends a baseline rate of IPv6 traffic to P2. | Data Plane (Primary): Verify that ATE Peer 1 and ATE Peer 2 each receive approximately 50% of the sent traffic (+/- 5% tolerance). Control Plane (Optional): Verify the ext-community-index for the route to P2 via each peer. |
- Objective: Verify that when one LAG's capacity is reduced, the DUT adjusts the traffic split proportionally for both IPv4 and IPv6.
- Procedure Details: Both LAGs consist of 2 member ports of the same speed. Both ATE Peer 1 and Peer 2 advertise prefix P1 (IPv4) and P2 (IPv6).
| Test Case | Procedure | Validation |
| --- | --- | --- |
| 7.51.2.1 - Unequal Balancing | 1. From the state in TC 7.51.1.1, disable one member port on LAG-1 (link to ATE Peer 1). | Data Plane (Primary): Verify the IPv4 traffic is re-balanced to a 1:2 ratio. ATE Peer 1 should receive ~33% and ATE Peer 2 should receive ~67% of the sent traffic (+/- 5% tolerance). Control Plane (Optional): Verify the LBW community for Peer 1's path is updated to 1x link speed, while Peer 2's remains 2x. |
| 7.51.2.2 - Restore Balancing | 1. Re-enable the member port on LAG-1. | Data Plane (Primary): Verify the IPv4 traffic split returns to an equal 50/50 balance (+/- 5% tolerance). Control Plane (Optional): Verify the LBW community for Peer 1's path returns to 2x link speed. |
- Objective: Verify wECMP re-balancing when starting from a 1-member LAG baseline and adding capacity.
- Procedure Details: This test starts by configuring LAG-1 and LAG-2 with only one member port each.
| Test Case | Procedure | Validation |
| --- | --- | --- |
| 7.51.3.1 - Baseline 1:1 Balancing | 1. Configure LAG-1 with 1 member port (e.g., DUT Port 2) and LAG-2 with 1 member port (e.g., DUT Port 4). 2. Establish BGP sessions (IPv4 & IPv6). Enable auto-link-bandwidth on the peer-groups. 3. ATE Peers 1 & 2 advertise P1 and P2. 4. Send IPv4 and IPv6 traffic. | Data Plane (Primary): Verify both IPv4 and IPv6 traffic streams are balanced 50/50 (+/- 5% tolerance) between Peer 1 and Peer 2. |
| 7.51.3.2 - Capacity Addition (1:1 -> 1:2) | 1. From the state in TC 7.51.3.1, add a second member port to LAG-2 (e.g., DUT Port 5). | Data Plane (Primary): Verify both IPv4 and IPv6 traffic streams re-balance to a 1:2 ratio, with ~33.3% to Peer 1 and ~66.7% to Peer 2 (+/- 5% tolerance). Control Plane (Optional): Verify the LBW community for Peer 2's path is updated to 2x link speed, while Peer 1's remains 1x. |
| 7.51.3.3 - Capacity Addition (1:2 -> 2:2) | 1. From the state in TC 7.51.3.2, add a second member port to LAG-1 (e.g., DUT Port 3). | Data Plane (Primary): Verify both IPv4 and IPv6 traffic streams re-balance to a 50/50 split (+/- 5% tolerance). Control Plane (Optional): Verify the LBW community for Peer 1's path is updated to 2x link speed. |
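When all member ports run at the same speed, the ratios expected in 7.51.3.1 through 7.51.3.3 follow directly from the LAG member counts. A sketch (illustrative Python; the 100G per-member speed is an assumption, the test plan does not mandate a speed):

```python
def expected_shares(lag1_members, lag2_members, member_speed_bps=100e9):
    """Traffic shares toward Peer 1 and Peer 2, proportional to the
    aggregate bandwidth of each LAG (members x per-member speed)."""
    b1 = lag1_members * member_speed_bps
    b2 = lag2_members * member_speed_bps
    total = b1 + b2
    return b1 / total, b2 / total

print(expected_shares(1, 1))  # 7.51.3.1: (0.5, 0.5)
print(expected_shares(1, 2))  # 7.51.3.2: ~ (0.333, 0.667)
print(expected_shares(2, 2))  # 7.51.3.3: (0.5, 0.5)
```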
- Objective: Verify the asymmetric hold-down timer logic (immediate update on link-down, delayed update on link-up) for both IPv4 and IPv6 traffic.
- Procedure Details: Both LAGs consist of 2 member ports of the same speed. Both ATE Peer 1 and Peer 2 advertise prefix P1 (IPv4) and P2 (IPv6).
| Test Case | Procedure | Validation |
| --- | --- | --- |
| 7.51.4.1 - Link Down (Immediate Update - IPv4) | 1. From state TC 7.51.1.1, disable a member port on LAG-1. | Data Plane (Primary): Verify the traffic split shifts to the unbalanced 1:2 ratio (+/- 5% tolerance) immediately (without any hold-down delay). |
| 7.51.4.2 - Link Up (Delayed Update - IPv4) | 1. From state in TC 7.51.4.1, configure hold-down-time: 30 on the DUT peer-group. 2. Re-enable the failed member port on LAG-1. | Data Plane (Primary): Verify the traffic split remains in the unbalanced 1:2 ratio (+/- 5% tolerance) for the full 30s. After the timer expires, verify the split returns to the balanced 50/50 state. |
| 7.51.4.3 - Transient Flap (IPv4) | 1. From state TC 7.51.1.1, configure a 30s hold-down-time on the peer-group. 2. Disable a member port on LAG-1 for 5s, then re-enable it. | Data Plane (Primary): 1. Verify traffic shifts to 1:2 (+/- 5% tolerance) immediately when the link goes down. 2. Verify traffic remains at 1:2 (+/- 5% tolerance) for the full 30s after the link comes up. 3. Verify traffic returns to 50/50 after the timer expires. |
| 7.51.4.4 - Link Down (Immediate Update - IPv6) | 1. From state TC 7.51.1.2, disable a member port on LAG-1. | Data Plane (Primary): Verify the IPv6 traffic split shifts to the unbalanced 1:2 ratio (+/- 5% tolerance) immediately (without any hold-down delay). |
| 7.51.4.5 - Link Up (Delayed Update - IPv6) | 1. From state in TC 7.51.4.4, configure hold-down-time: 30 on the DUT peer-group for IPv6. 2. Re-enable the failed member port on LAG-1. | Data Plane (Primary): Verify the IPv6 traffic split remains in the unbalanced 1:2 ratio (+/- 5% tolerance) for the full 30s. After the timer expires, verify the split returns to the balanced 50/50 state. |
| 7.51.4.6 - Transient Flap (IPv6) | 1. From state TC 7.51.1.2, configure a 30s hold-down-time on the peer-group for IPv6. 2. Disable a member port on LAG-1 for 5s, then re-enable it. | Data Plane (Primary): 1. Verify IPv6 traffic shifts to 1:2 (+/- 5% tolerance) immediately when the link goes down. 2. Verify traffic remains at 1:2 (+/- 5% tolerance) for the full 30s after the link comes up. 3. Verify traffic returns to 50/50 (+/- 5% tolerance) after the timer expires. |
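The asymmetric timer behavior exercised in 7.51.4 reduces to one rule: bandwidth decreases are applied immediately, increases are deferred for hold-down-time seconds. A sketch of that rule (illustrative Python; not the DUT's implementation):

```python
def effective_bandwidth(old_bw, new_bw, elapsed_s, hold_down_s=30.0):
    """Bandwidth the DUT should advertise elapsed_s seconds after a
    LAG capacity change, under the asymmetric hold-down rule."""
    if new_bw <= old_bw:
        return new_bw                 # decrease (link down): immediate
    if elapsed_s >= hold_down_s:
        return new_bw                 # increase (link up): timer expired
    return old_bw                     # increase still held down

# 7.51.4.1: member down -> immediate update.
assert effective_bandwidth(200e9, 100e9, elapsed_s=0) == 100e9
# 7.51.4.2: member back up -> old value held for the full 30 s ...
assert effective_bandwidth(100e9, 200e9, elapsed_s=5) == 100e9
# ... then the new value takes effect.
assert effective_bandwidth(100e9, 200e9, elapsed_s=31) == 200e9
```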
- Objective: Verify that a link-bandwidth community advertised by an eBGP peer takes precedence over the locally auto-generated one for both IPv4 and IPv6.
- Procedure Details: Both LAGs consist of 2 member ports of the same speed. Both ATE Peer 1 and Peer 2 advertise prefix P1 (IPv4) and P2 (IPv6).
| Test Case | Procedure | Validation |
| --- | --- | --- |
| 7.51.5.1 - Peer-Advertised Precedence (IPv4) | 1. Establish BGP sessions (IPv4 AF). On DUT, enable auto-link-bandwidth on the peer-group. 2. Configure ATE Peer 1 to advertise P1 with an attached link-bandwidth community of 1x link speed. 3. Configure ATE Peer 2 to advertise P1 with no link-bandwidth community. 4. ATE Source sends IPv4 traffic to P1. | Data Plane (Primary): Verify IPv4 traffic is forwarded in a 1:2 ratio, with ~33% going to Peer 1 and ~67% going to Peer 2 (+/- 5% tolerance). This confirms the DUT used the lower, peer-advertised value for Peer 1's path and the higher, auto-generated value for Peer 2's path. Control Plane (Optional): Verify the ext-community-index for Peer 1's path reflects the peer-advertised value, while Peer 2's reflects the auto-generated 2x value. |
| 7.51.5.2 - Peer-Advertised Precedence (IPv6) | 1. Establish BGP sessions (IPv6 AF). On DUT, enable auto-link-bandwidth on the peer-group. 2. Configure ATE Peer 1 to advertise P2 with an attached link-bandwidth community of 1x link speed. 3. Configure ATE Peer 2 to advertise P2 with no link-bandwidth community. 4. ATE Source sends IPv6 traffic to P2. | Data Plane (Primary): Verify IPv6 traffic is forwarded in a 1:2 ratio, with ~33% going to Peer 1 and ~67% going to Peer 2 (+/- 5% tolerance). Control Plane (Optional): Verify the ext-community-index for Peer 1's path reflects the peer-advertised value. |
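The precedence rule under test reduces to: if the peer attached its own link-bandwidth community, use it; otherwise fall back to the auto-generated value. A sketch (illustrative Python; names and bandwidth figures are assumptions):

```python
def effective_lbw(peer_advertised_bw, auto_generated_bw):
    """Peer-advertised link-bandwidth takes precedence over the
    locally auto-generated value (None = peer sent no LBW community)."""
    if peer_advertised_bw is not None:
        return peer_advertised_bw
    return auto_generated_bw

# 7.51.5: Peer 1 advertises 1x link speed, Peer 2 sends no community,
# so its path keeps the auto-generated 2x value -> a 1:2 split.
bw1 = effective_lbw(100e9, 200e9)
bw2 = effective_lbw(None, 200e9)
print(bw1 / (bw1 + bw2), bw2 / (bw1 + bw2))  # ~0.333 ~0.667
```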
- Objective: Verify that per-neighbor auto-link-bandwidth configuration overrides a disabled setting inherited from its peer-group for both IPv4 and IPv6.
- Procedure Details: Both LAGs consist of 2 member ports of the same speed. Both ATE Peer 1 and Peer 2 advertise prefix P1 (IPv4) and P2 (IPv6).
| Test Case | Procedure | Validation |
| --- | --- | --- |
| 7.51.6.1 - Neighbor Override of Disabled Peer-Group (IPv4) | 1. From state TC 7.51.1.1, configure enabled: false under the auto-link-bandwidth hierarchy for the peer-group. 2. Configure enabled: true under the auto-link-bandwidth hierarchy for both IPv4 neighbors (on LAG-1 and LAG-2). | Data Plane (Primary): Verify that IPv4 traffic is forwarded in a 50/50 ratio towards ATE Peer 1 and ATE Peer 2 (+/- 5% tolerance). This confirms wECMP is active because both neighbor configurations overrode the disabled peer-group setting. Control Plane (Optional): Verify that the routes to P1 via both Peer 1 and Peer 2 have a valid ext-community-index. |
| 7.51.6.2 - Neighbor Override of Disabled Peer-Group (IPv6) | 1. From state TC 7.51.1.2, configure enabled: false under the auto-link-bandwidth hierarchy for the peer-group. 2. Configure enabled: true under the auto-link-bandwidth hierarchy for both IPv6 neighbors (on LAG-1 and LAG-2). | Data Plane (Primary): Verify that IPv6 traffic is forwarded in a 50/50 ratio towards ATE Peer 1 and ATE Peer 2 (+/- 5% tolerance). Control Plane (Optional): Verify that the routes to P2 via both Peer 1 and Peer 2 have a valid ext-community-index. |
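The inheritance behavior in 7.51.6 follows standard OpenConfig peer-group semantics: a leaf set at the neighbor level wins over the value inherited from the peer-group. A sketch of that resolution (illustrative Python; the False fallback when the leaf is unset at both levels is an assumption, not stated in the model excerpt above):

```python
def auto_lbw_enabled(neighbor_cfg, peer_group_cfg):
    """Resolve the effective enabled state: neighbor-level config,
    when present, overrides the inherited peer-group value.
    None means 'leaf not configured at that level'."""
    if neighbor_cfg is not None:
        return neighbor_cfg
    if peer_group_cfg is not None:
        return peer_group_cfg
    return False  # assumed default when unset at both levels

# 7.51.6: the peer-group disables the feature, both neighbors
# re-enable it, so wECMP stays active.
print(auto_lbw_enabled(neighbor_cfg=True, peer_group_cfg=False))   # True
print(auto_lbw_enabled(neighbor_cfg=None, peer_group_cfg=False))   # False
```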
- Objective: Verify that the DUT correctly generates a non-transitive community when transitive: false is explicitly configured for both IPv4 and IPv6.
- Procedure Details: This is a control-plane focused test.
| Test Case | Procedure | Validation |
| --- | --- | --- |
| 7.51.7.1 - Non-Transitive Behavior (IPv4) | 1. From state TC 7.51.1.1, ensure transitive: false is configured on the peer-group. 2. ATE advertises prefix P1. | Control Plane (Optional): If DUT telemetry is enabled, query the ext-community-index for the route to P1. Verify the index points to an extended community that is correctly formatted as non-transitive. |
| 7.51.7.2 - Non-Transitive Behavior (IPv6) | 1. From state TC 7.51.1.2, ensure transitive: false is configured on the peer-group. 2. ATE advertises prefix P2. | Control Plane (Optional): If DUT telemetry is enabled, query the ext-community-index for the route to P2. Verify the index points to an extended community that is correctly formatted as non-transitive. |
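For the non-transitive formatting check, the on-wire encoding is useful context: per draft-ietf-idr-link-bandwidth, the link-bandwidth extended community is an 8-octet, two-octet-AS-specific non-transitive community (type 0x40, sub-type 0x04) carrying the AS number and the bandwidth as a 4-byte IEEE-754 float in bytes per second. A sketch of the encoding (illustrative Python):

```python
import struct

def encode_lbw_nontransitive(asn, bandwidth_bytes_per_sec):
    """Non-transitive link-bandwidth extended community:
    type 0x40 (two-octet-AS specific, non-transitive bit set),
    sub-type 0x04, 2-octet AS, 4-octet IEEE-754 bandwidth (bytes/s)."""
    return struct.pack("!BBHf", 0x40, 0x04, asn, bandwidth_bytes_per_sec)

ext_comm = encode_lbw_nontransitive(65501, 12.5e9)  # 100 Gb/s = 12.5e9 B/s
print(len(ext_comm))             # 8
print(bool(ext_comm[0] & 0x40))  # True: non-transitive bit is set
```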
paths:
# configuration
/network-instances/network-instance/protocols/protocol/bgp/peer-groups/peer-group/auto-link-bandwidth/import/config/enabled:
/network-instances/network-instance/protocols/protocol/bgp/peer-groups/peer-group/auto-link-bandwidth/import/config/hold-down-time:
/network-instances/network-instance/protocols/protocol/bgp/peer-groups/peer-group/auto-link-bandwidth/import/config/transitive:
/network-instances/network-instance/protocols/protocol/bgp/neighbors/neighbor/auto-link-bandwidth/import/config/enabled:
/network-instances/network-instance/protocols/protocol/bgp/neighbors/neighbor/auto-link-bandwidth/import/config/hold-down-time:
/network-instances/network-instance/protocols/protocol/bgp/neighbors/neighbor/auto-link-bandwidth/import/config/transitive:
# telemetry
/interfaces/interface/subinterfaces/subinterface/state/counters/in-octets:
/network-instances/network-instance/protocols/protocol/bgp/rib/afi-safis/afi-safi/ipv4-unicast/neighbors/neighbor/adj-rib-in-post/routes/route/state/ext-community-index:
/network-instances/network-instance/protocols/protocol/bgp/rib/afi-safis/afi-safi/ipv6-unicast/neighbors/neighbor/adj-rib-in-post/routes/route/state/ext-community-index:
/network-instances/network-instance/afts/next-hop-groups/next-hop-group/next-hops/next-hop/state/weight:
/interfaces/interface/state/oper-status:
/interfaces/interface/aggregation/state/member:
/network-instances/network-instance/protocols/protocol/bgp/neighbors/neighbor/state/session-state:
rpcs:
gnmi:
gNMI.Set:
gNMI.Subscribe: