TE-18.3: MPLS in UDP Encapsulation Scale Test (mpls_in_udp_scale)
Building on TE-18.1 and TE-18.2, this test focuses on scaling gRIBI-programmed MPLS-over-UDP tunnels and associated forwarding entries, parameterized by key scaling dimensions.
Physical Topology:
- 4 physical ports total (2 DUT ports + 2 ATE ports)
- 2 ports as ingress interfaces (port1-port2)
- 2 ports as egress/uplink interfaces (port3-port4)
Logical Interface Scale Design:
- 32 logical ingress interfaces achieved through:
- 16 VLAN subinterfaces per physical ingress port (2 ports × 16 VLANs = 32 logical interfaces)
- VLAN IDs: 100-115 on port1, 200-215 on port2
- Multiple VRFs mapped to logical interfaces as required by scale profiles
- Each logical interface assigned to appropriate VRF based on test profile requirements
ATE port-1 <------> port-1 DUT (VLANs 100-115)
ATE port-2 <------> port-2 DUT (VLANs 200-215)
DUT port-3 <------> port-3 ATE (Egress)
DUT port-4 <------> port-4 ATE (Egress)
- 32 logical interfaces as the 'input port set' (Ingress)
- 2 ports as "uplink facing" (Egress)
- Network Instances (VRFs) will be mapped from ingress ports/subinterfaces as needed by scale profiles.
Physical Interface Configuration:
- Configure ports 1-4 with IPv6 addressing from the base 2001:f:d:e::/126 scheme
- Enable all physical interfaces with PMD100GBASEFR-specific settings
- Apply ethernet configuration: AutoNegotiate=false, DuplexMode=FULL, PortSpeed=100GB
- Set MAC addresses using systematic scheme: 02:01:00:00:00:XX for DUT ports
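On the DUT side, this per-port configuration maps onto ONDATRA's generated gNMI API. A minimal sketch, assuming the standard featureprofiles imports; the helper name configureDUTPort and the example MAC are illustrative, not taken from the plan:

```go
package mplsinudpscale

import (
	"testing"

	"github.com/openconfig/ondatra"
	"github.com/openconfig/ondatra/gnmi"
	"github.com/openconfig/ondatra/gnmi/oc"
	"github.com/openconfig/ygot/ygot"
)

// configureDUTPort applies the IPv6 address, MAC, and PMD100GBASEFR
// ethernet settings described above to one DUT port.
func configureDUTPort(t *testing.T, dut *ondatra.DUTDevice, portID, ipv6, mac string) {
	p := dut.Port(t, portID)
	i := &oc.Interface{Name: ygot.String(p.Name())}
	i.Type = oc.IETFInterfaces_InterfaceType_ethernetCsmacd
	i.Enabled = ygot.Bool(true)

	// PMD100GBASEFR-specific ethernet settings from the plan above.
	e := i.GetOrCreateEthernet()
	e.AutoNegotiate = ygot.Bool(false)
	e.DuplexMode = oc.Ethernet_DuplexMode_FULL
	e.PortSpeed = oc.IfEthernet_ETHERNET_SPEED_SPEED_100GB
	e.MacAddress = ygot.String(mac) // e.g. "02:01:00:00:00:01"

	// Untagged /126 address on subinterface 0.
	s := i.GetOrCreateSubinterface(0)
	s.GetOrCreateIpv6().GetOrCreateAddress(ipv6).PrefixLength = ygot.Uint8(126)

	gnmi.Replace(t, dut, gnmi.OC().Interface(p.Name()).Config(), i)
}
```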
VLAN Subinterface Configuration:
- Create 16 VLAN subinterfaces per ingress port (32 total logical interfaces)
- Assign IPv6 addresses using 2001:f:d:e::/126 base with systematic increments
- Configure subinterface-to-VRF mappings based on test profile requirements
- Explicitly enable IPv4 on subinterfaces when the deviations.InterfaceEnabled(dut) deviation requires it (see the sketch after this list)
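The 16-VLAN fan-out per ingress port might look like the following sketch. The per-VLAN address scheme here is an assumption (the plan only specifies "systematic increments"):

```go
package mplsinudpscale

import (
	"fmt"
	"testing"

	"github.com/openconfig/featureprofiles/internal/deviations"
	"github.com/openconfig/ondatra"
	"github.com/openconfig/ondatra/gnmi"
	"github.com/openconfig/ondatra/gnmi/oc"
	"github.com/openconfig/ygot/ygot"
)

// configureVLANSubinterfaces creates subinterfaces .1-.16 on one ingress
// port, single-tagged with VLANs vlanBase..vlanBase+15 (100-115 on
// port1, 200-215 on port2).
func configureVLANSubinterfaces(t *testing.T, dut *ondatra.DUTDevice, portID string, vlanBase uint16) {
	p := dut.Port(t, portID)
	i := &oc.Interface{Name: ygot.String(p.Name())}
	for idx := uint32(0); idx < 16; idx++ {
		s := i.GetOrCreateSubinterface(idx + 1)
		if deviations.InterfaceEnabled(dut) {
			s.Enabled = ygot.Bool(true) // explicit enablement where required
		}
		s.GetOrCreateVlan().GetOrCreateMatch().GetOrCreateSingleTagged().VlanId =
			ygot.Uint16(vlanBase + uint16(idx))
		// Illustrative systematic increment: one /126 per logical interface.
		addr := fmt.Sprintf("2001:f:d:e:%d::1", vlanBase+uint16(idx))
		s.GetOrCreateIpv6().GetOrCreateAddress(addr).PrefixLength = ygot.Uint8(126)
	}
	gnmi.Update(t, dut, gnmi.OC().Interface(p.Name()).Config(), i)
}
```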
VRF Configuration:
- Create required VRFs based on test profile:
- Profile 1: DEFAULT network instance plus 1 non-default VRF
- Profiles 2-3: 1024 VRFs (VRF_001 through VRF_1024) plus DEFAULT
- Profile 4: DEFAULT network instance plus 1 non-default VRF
- Profile 5: DEFAULT network instance plus 1 non-default VRF
- Use device-specific default network instance naming conventions
- Apply policy-based forwarding rules for VRF selection using DSCP/source IP criteria
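Bulk VRF creation can batch all instances into one ygot root and push them with a single gnmi.Update. A sketch assuming the Profiles 2-3 naming; the policy-based forwarding rules and interface-to-VRF bindings are omitted:

```go
package mplsinudpscale

import (
	"fmt"
	"testing"

	"github.com/openconfig/ondatra"
	"github.com/openconfig/ondatra/gnmi"
	"github.com/openconfig/ondatra/gnmi/oc"
)

// configureVRFs creates the non-default L3VRF network instances used by
// Profiles 2-3 (VRF_001 .. VRF_1024); count is 1 for Profiles 1, 4, 5.
func configureVRFs(t *testing.T, dut *ondatra.DUTDevice, count int) {
	d := &oc.Root{}
	for n := 1; n <= count; n++ {
		ni := d.GetOrCreateNetworkInstance(fmt.Sprintf("VRF_%03d", n))
		ni.Type = oc.NetworkInstanceTypes_NETWORK_INSTANCE_TYPE_L3VRF
	}
	gnmi.Update(t, dut, gnmi.OC().Config(), d)
}
```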
Static Routes and Forwarding:
- Configure static routes using device-specific static protocol naming
- Set up IPv6 static routes with next-hop pointing to ATE port IPv6 addresses
- Use standard static route protocol type configuration
- Configure routes in appropriate network instances based on test profile requirements
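A sketch of one such route: an IPv6 static route for the outer tunnel destination, next-hop the ATE egress port address, using the featureprofiles deviation helpers for instance and protocol naming. The specific prefix/next-hop pairing is illustrative:

```go
package mplsinudpscale

import (
	"testing"

	"github.com/openconfig/featureprofiles/internal/deviations"
	"github.com/openconfig/ondatra"
	"github.com/openconfig/ondatra/gnmi"
	"github.com/openconfig/ondatra/gnmi/oc"
)

// configureTunnelRoute installs an IPv6 static route toward the outer
// tunnel destination via the ATE egress port (values from Test Parameters).
func configureTunnelRoute(t *testing.T, dut *ondatra.DUTDevice) {
	defNI := deviations.DefaultNetworkInstance(dut)
	sp := (&oc.Root{}).GetOrCreateNetworkInstance(defNI).
		GetOrCreateProtocol(oc.PolicyTypes_INSTALL_PROTOCOL_TYPE_STATIC,
			deviations.StaticProtocolName(dut))
	sr := sp.GetOrCreateStatic("2001:f:c:e::1/128")                       // outer_ipv6_dst_A
	sr.GetOrCreateNextHop("0").NextHop = oc.UnionString("2001:f:d:e::10") // ate_port3_ipv6
	gnmi.Update(t, dut, gnmi.OC().NetworkInstance(defNI).
		Protocol(oc.PolicyTypes_INSTALL_PROTOCOL_TYPE_STATIC,
			deviations.StaticProtocolName(dut)).Config(), sp)
}
```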
ATE Physical Port Setup:
- Configure 4 physical ports with IPv6 addresses matching DUT interface scheme
- Use MAC addresses: 02:00:XX:01:01:01 pattern for ATE ports
- Set up VLAN tagging on ingress ports (port1-2) to match DUT subinterface VLANs
- Configure egress ports (port3-4) for traffic reception and MPLS-in-UDP validation
- Apply PMD100GBASEFR-specific settings: disable FEC, set speed to 100Gbps, enable auto-negotiate
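In OTG terms, one tagged logical interface per VLAN might be built as below, called 16 times per ingress port to mirror the DUT subinterfaces. This assumes a recent gosnappi API; names and values are illustrative:

```go
package mplsinudpscale

import (
	"fmt"

	"github.com/open-traffic-generator/snappi/gosnappi"
)

// addATEVLANInterface adds one single-tagged logical interface on an ATE
// ingress port, matching the DUT's VLAN subinterface on the same link.
func addATEVLANInterface(cfg gosnappi.Config, portName string, vlanID uint32, mac, v6, gw string) {
	dev := cfg.Devices().Add().SetName(fmt.Sprintf("%s.dev%d", portName, vlanID))
	eth := dev.Ethernets().Add().
		SetName(fmt.Sprintf("%s.eth%d", portName, vlanID)).
		SetMac(mac) // e.g. "02:00:01:01:01:01"
	eth.Connection().SetPortName(portName)
	eth.Vlans().Add().SetName(fmt.Sprintf("%s.vlan%d", portName, vlanID)).SetId(vlanID)
	eth.Ipv6Addresses().Add().
		SetName(fmt.Sprintf("%s.v6.%d", portName, vlanID)).
		SetAddress(v6).SetGateway(gw).SetPrefix(126)
}
```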
Traffic Generation:
- Create traffic flows targeting the 20,000 unique destination prefixes
- Use IPv6 flow destination base: 2015:aa8:: as defined in Test Parameters
- Distribute traffic across 32 logical ingress interfaces using VLAN tags
- Configure flows with appropriate DSCP markings for VRF selection
- Set traffic duration: 15 seconds as defined in Test Parameters
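A sketch of one such flow: fixed 15 s duration, DSCP-marked IPv6, destination incrementing across the 20,000 programmed prefixes. The flow name, rate, and source address are illustrative:

```go
package mplsinudpscale

import "github.com/open-traffic-generator/snappi/gosnappi"

// addScaleFlow builds one IPv6 flow whose destinations sweep the 20,000
// programmed prefixes for the fixed 15 s duration.
func addScaleFlow(cfg gosnappi.Config, txName, rxName string) {
	f := cfg.Flows().Add().SetName("scale_v6")
	f.TxRx().Device().SetTxNames([]string{txName}).SetRxNames([]string{rxName})
	f.Duration().FixedSeconds().SetSeconds(15) // traffic_duration
	f.Rate().SetPps(10000)                     // illustrative rate
	v6 := f.Packet().Add().Ipv6()
	v6.Src().SetValue("2001:f:d:e:100::2")                       // illustrative
	v6.Dst().Increment().SetStart("2015:aa8::1").SetCount(20000) // ipv6_flow_base
	v6.TrafficClass().SetValue(26 << 2)                          // DSCP marking for VRF selection (illustrative)
}
```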
Packet Capture and Validation:
- Enable packet capture on egress ports for MPLS-in-UDP encapsulation validation
- Configure capture filters for MPLS label stack and UDP encapsulation verification
- Validate outer IPv6 headers: source 2001:f:a:1::0, destination 2001:f:c:e::1 as defined in Test Parameters
- Verify UDP destination port 6635 as defined in Test Parameters
- Check outer DSCP marking: 26 and TTL: 64 as defined in Test Parameters
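One way to perform these checks offline is to decode captured egress packets with gopacket. A sketch against the Test Parameters values; the expected label value is profile-dependent and left unchecked here:

```go
package mplsinudpscale

import (
	"testing"

	"github.com/google/gopacket"
	"github.com/google/gopacket/layers"
)

// validateEncap checks one captured egress packet for the expected
// MPLS-in-UDP encapsulation per Test Parameters.
func validateEncap(t *testing.T, pkt gopacket.Packet) {
	ip6, ok := pkt.Layer(layers.LayerTypeIPv6).(*layers.IPv6)
	if !ok || ip6.DstIP.String() != "2001:f:c:e::1" || ip6.HopLimit != 64 {
		t.Errorf("unexpected outer IPv6 header: %+v", ip6)
	} else if dscp := ip6.TrafficClass >> 2; dscp != 26 {
		t.Errorf("outer DSCP = %d, want 26", dscp)
	}
	udp, ok := pkt.Layer(layers.LayerTypeUDP).(*layers.UDP)
	if !ok || udp.DstPort != 6635 { // IANA MPLS-in-UDP port
		t.Fatalf("unexpected UDP layer: %+v", udp)
	}
	// The MPLS label stack rides in the UDP payload; decode it explicitly.
	mplsPkt := gopacket.NewPacket(udp.Payload, layers.LayerTypeMPLS, gopacket.Default)
	mpls, ok := mplsPkt.Layer(layers.LayerTypeMPLS).(*layers.MPLS)
	if !ok {
		t.Fatal("no MPLS header found in UDP payload")
	}
	t.Logf("MPLS label %d, TTL %d", mpls.Label, mpls.TTL)
}
```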
gRIBI Client Configuration:
- Establish gRIBI client connection with RIB_AND_FIB_ACK: true and Persistence: true
- Use standard gRIBI client configuration pattern
- Ensure the client becomes the elected leader (primary) before programming entries (see the sketch after this list)
- Set appropriate batch sizes and operation rates per profile requirements
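With gribigo's fluent API, the session setup above looks roughly like this sketch; the initial election ID value is an assumption:

```go
package mplsinudpscale

import (
	"context"
	"testing"

	"github.com/openconfig/gribigo/fluent"
	"github.com/openconfig/ondatra"
)

// startGRIBI opens the gRIBI session with persistence, RIB+FIB ACK, and
// the elected-primary (leader) role described above.
func startGRIBI(ctx context.Context, t *testing.T, dut *ondatra.DUTDevice) *fluent.GRIBIClient {
	c := fluent.NewClient()
	c.Connection().
		WithStub(dut.RawAPIs().GRIBI(t)).
		WithPersistence().                               // Persistence: true
		WithFIBACK().                                    // RIB_AND_FIB_ACK: true
		WithRedundancyMode(fluent.ElectedPrimaryClient). // leader election
		WithInitialElectionID(1, 0)
	c.Start(ctx, t)
	c.StartSending(ctx, t)
	return c
}
```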
Entry Programming Sequence:
- Program Next Hop (NH) entries with MPLS-in-UDP encapsulation headers
- Use NH ID starting from 201 as defined in Test Parameters
- Create Next Hop Groups (NHGs) starting from ID 10 as defined in Test Parameters
- Install IPv6 prefix entries using 2015:aa8::/128 base prefix pattern as defined in Test Parameters
- Validate FIB_PROGRAMMED status for all programmed entries
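A sketch of one NH + NHG + prefix triple. The AddEncapHeader/MPLSEncapHeader/UDPV6EncapHeader builders follow gribigo's fluent encap-header support but the exact names may vary by gribigo version; IDs and values come from Test Parameters, while the prefix spread is illustrative:

```go
package mplsinudpscale

import (
	"fmt"
	"testing"

	"github.com/openconfig/gribigo/fluent"
)

// programEntry installs one NH + NHG + IPv6 prefix triple. NH and NHG go
// in the DEFAULT instance; the prefix goes in the non-default VRF and
// points back at the NHG in DEFAULT, per the AFT-placement note below.
func programEntry(t *testing.T, c *fluent.GRIBIClient, defNI, vrf string, i, label uint64) {
	nhID, nhgID := 201+i, 10+i // nh_id_start, nhg_id_start
	prefix := fmt.Sprintf("2015:aa8::%x/128", i+1)

	c.Modify().AddEntry(t,
		fluent.NextHopEntry().WithNetworkInstance(defNI).WithIndex(nhID).
			AddEncapHeader( // builder names assume a recent gribigo
				fluent.MPLSEncapHeader().WithLabels(label),
				fluent.UDPV6EncapHeader().
					WithSrcIP("2001:f:a:1::0").WithDstIP("2001:f:c:e::1").
					WithDstUDPPort(6635).WithIPTTL(64).WithDSCP(26),
			),
		fluent.NextHopGroupEntry().WithNetworkInstance(defNI).WithID(nhgID).
			AddNextHop(nhID, 1),
		fluent.IPv6Entry().WithNetworkInstance(vrf).WithPrefix(prefix).
			WithNextHopGroup(nhgID).WithNextHopGroupNetworkInstance(defNI),
	)
}
```

After c.Await(...), per-entry FIB_PROGRAMMED status can then be asserted with gribigo's chk.HasResult and fluent.OperationResult().WithProgrammingResult(fluent.InstalledInFIB).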
Scale-Specific Configurations:
- Profile 1: DEFAULT network instance with 20,000 NHGs, 1 NH per NHG, 1 MPLS label
- Profiles 2-3: 1024 VRFs with distributed NHGs/prefixes, unique MPLS labels per VRF
- Profile 4: DEFAULT network instance with 2,500 NHGs, 8 NHs per NHG (ECMP), 1 MPLS label
- Profile 5: High-rate programming (1,000 ops/sec) with 50% ADD/50% DELETE operations
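The matrix above can be captured as a table that one test body iterates over; field and variable names here are illustrative:

```go
package mplsinudpscale

// scaleProfile captures the profile matrix above.
type scaleProfile struct {
	name       string
	vrfs       int // non-default VRFs
	nhgs       int
	nhsPerNHG  int
	mplsLabels int
	prefixes   int
}

var profiles = []scaleProfile{
	{"profile1", 1, 20000, 1, 1, 20000},
	{"profile2", 1024, 20000, 1, 1024, 20000},
	{"profile3", 1024, 20000, 1, 1024, 20000}, // skewed per-VRF distribution
	{"profile4", 1, 2500, 8, 1, 20000},        // ECMP: 2,500 × 8 = 20,000 NHs
	{"profile5", 1, 20000, 1, 1, 20000},       // plus 1,000 ops/sec churn
}
```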
Device-Specific Considerations:
- Handle vendor-specific gRIBI encapsulation header support limitations
- Use CLI configuration for tunnel encapsulation when gRIBI encap headers unsupported
- Apply device-specific interface enablement requirements for IPv4 protocols
- Configure tunnel type: "mpls-over-udp udp destination port 6635" as defined in Test Parameters
Test Parameters:
DUT Interface IPv6 Addressing:
- dut_port_base_ipv6 = "2001:f:d:e::/126"
- dut_port1_ipv6 = "2001:f:d:e::1/126"
- dut_port2_ipv6 = "2001:f:d:e::5/126"
- dut_port3_ipv6 = "2001:f:d:e::9/126"
- dut_port4_ipv6 = "2001:f:d:e::13/126"
ATE Interface IPv6 Addressing:
- ate_port1_ipv6 = "2001:f:d:e::2/126"
- ate_port2_ipv6 = "2001:f:d:e::6/126"
- ate_port3_ipv6 = "2001:f:d:e::10/126"
- ate_port4_ipv6 = "2001:f:d:e::14/126"
MAC Address Schemes:
- dut_mac_pattern = "02:01:00:00:00:XX"
- ate_mac_pattern = "02:00:XX:01:01:01"
Inner IPv6 Destinations:
- inner_ipv6_dst_A = "2001:aa:bb::1/128"
- inner_ipv6_dst_B = "2001:aa:bb::2/128"
Inner IPv4 Destinations:
- ipv4_inner_dst_A = "10.5.1.1/32"
- ipv4_inner_dst_B = "10.5.1.2/32"
Outer IPv6 Encapsulation:
- outer_ipv6_src = "2001:f:a:1::0"
- outer_ipv6_dst_A = "2001:f:c:e::1"
- outer_ipv6_dst_B = "2001:f:c:e::2"
- outer_ipv6_dst_def = "2001:1:1:1::0"
- outer_dst_udp_port = "6635"
- outer_dscp = "26"
- outer_ip_ttl = "64"
Traffic Flow Parameters:
- ipv6_flow_base = "2015:aa8::"
- ipv6_prefix_base = "2015:aa8::/128"
Traffic Parameters:
- traffic_duration = "15 seconds"
- target_packet_loss = "≤ 1%"
gRIBI Parameters:
- nh_id_start = "201"
- nhg_id_start = "10"
Scale Dimensions:
This test evaluates scaling across the following dimensions using gRIBI. The test profiles below represent different parameter combinations of these dimensions.
- Network Instances (VRFs): Number of separate routing instances.
- Next Hop Groups (NHGs): Total number of NHGs programmed. Target: Up to 20,000 (profile-dependent).
- Next Hops (NHs): Total number of NHs programmed. Constraint: Maximum 20,000 total NHs. When there are more NHs per NHG, there will be fewer total NHGs (e.g., 2,500 NHGs if each NHG has 8 NHs).
- NHs per NHG: Number of NH entries within each NHG (e.g., 1 or 8).
- Prefixes: Total number of unique IPv4/IPv6 exact-match forwarding entries (routes) across all VRFs. Target: 20,000 total.
- (Unique Destination IP + MPLS) Tuples: The combination of the inner destination IP and the MPLS label used in the NH encapsulation. Target: Up to 20,000 unique tuples.
- MPLS Labels: Number and uniqueness of MPLS labels used in NH encapsulation. Constraint: The number of unique MPLS labels must equal the number of VRFs (#MPLS Labels == #VRFs).
- gRIBI Operations Rate (QPS): Rate of gRIBI Modify requests or operations per second.
- gRIBI Batch Size: Number of AFT entries (or operations) per ModifyRequest.
- Convergence: DUT packet forwarding updated within 1 second after receiving FIB_PROGRAMMED acknowledgement for added entries (baseline).
- IP Address Reuse: Inner IP destination prefixes should be reused across different Network Instances where applicable.
- Multi-VRF Distribution: In multi-VRF profiles, both NHGs and prefixes are distributed across the different VRFs as specified in each profile.
Important Note on AFT Entry Placement
A key requirement for all test profiles is the separation of gRIBI-programmed AFT entries. All Next Hop (NH) and Next Hop Group (NHG) entries must be programmed in the DEFAULT network instance. The corresponding IPv4/IPv6 prefix entries must be programmed in their respective, non-default network instances. This implies that even for profiles specified as testing a "Single VRF" scale (Profiles 1, 4, 5), the prefixes will reside in one or more non-default VRFs.
Profile 1:
- Goal: Baseline single VRF scale (Exact Label Match scenario).
- Network Instances (VRFs): 1 DEFAULT VRF and 1 non-default VRF.
- Total NHGs: 20,000.
- NHs per NHG: 1.
- MPLS Labels: 1 (consistent with #VRFs = 1). Same label used for all NHs.
- Total Prefixes: 20,000 (e.g., 10k IPv4, 10k IPv6).
- Unique (Dest IP + MPLS) Tuples: 20,000 (different destination IPs, same MPLS label).
- Prefix Mapping: 1 unique prefix -> 1 unique NHG (1:1).
- Total NHs: 20,000 (20,000 NHGs × 1 NH/NHG = 20,000 total NHs).
- gRIBI Rate/Batch: Baseline (e.g., 1 ModifyRequest/sec, 200 entries/request) - QPS not the primary focus here.
Profile 2:
- Goal: Scale across multiple VRFs with unique labels per VRF.
- Network Instances (VRFs): 1024.
- Total NHGs: 20,000 (distributed across VRFs, ~19-20 NHGs/VRF).
- NHs per NHG: 1.
- Total NHs: 20,000 (20,000 NHGs × 1 NH/NHG = 20,000 total NHs).
- MPLS Labels: 1024 unique labels (1 label assigned per VRF, consistent with #VRFs = 1024).
- Total Prefixes: 20,000 (distributed across VRFs, ~19-20 prefixes/VRF).
- Unique (Dest IP + MPLS) Tuples: 20,000 (e.g., 20 unique destination IPs reused per MPLS label/VRF).
- Prefix Mapping: Prefixes within a VRF map to NHGs using that VRF’s unique MPLS label.
- Inner IP Reuse: Required.
- gRIBI Rate/Batch: Baseline - QPS not the primary focus here.
Profile 3:
- Goal: Similar to Profile 2, but test a potentially skewed distribution of prefixes/routes per VRF/label.
- Network Instances (VRFs): 1024.
- Total NHGs: 20,000.
- NHs per NHG: 1.
- Total NHs: 20,000 (20,000 NHGs × 1 NH/NHG = 20,000 total NHs).
- MPLS Labels: 1024 unique labels (1 per VRF).
- Total Prefixes: 20,000.
- Unique (Dest IP + MPLS) Tuples: 20,000.
- Prefix Mapping: Similar to Profile 2, but the distribution of the 20k prefixes across the 1024 VRFs/labels might be intentionally uneven (e.g., some VRFs have many more prefixes than others). Exact skew pattern TBD.
- Inner IP Reuse: Required.
- gRIBI Rate/Batch: Baseline - QPS not the primary focus here.
Profile 4:
- Goal: Test ECMP scale within a single VRF.
- Network Instances (VRFs): 1 DEFAULT VRF and 1 non-default VRF.
- Total NHGs: 2,500.
- NHs per NHG: 8 (each NH having a different destination IP).
- Total NHs: 20,000 (2,500 NHGs × 8 NHs/NHG = 20,000 total NHs, respecting the 20k NH constraint).
- MPLS Labels: 1 (consistent with #VRFs = 1). Same label used for all NHs.
- Total Prefixes: 20,000 (e.g., 10k IPv4, 10k IPv6).
- Unique (Dest IP + MPLS) Tuples: 20,000 (different destination IPs across all NHs, same MPLS label).
- Prefix Mapping: 8 unique prefixes -> 1 unique NHG (8:1 mapping, repeated 2500 times).
- gRIBI Rate/Batch: Baseline - QPS not the primary focus here.
Profile 5:
- Goal: Test gRIBI control plane QPS scaling and impact on dataplane. Uses Profile 1 as the base state.
- Network Instances (VRFs): 1 DEFAULT VRF and 1 non-default VRF.
- Total NHGs: 20,000.
- NHs per NHG: 1.
- MPLS Labels: 1.
- Total Prefixes: 20,000.
- Unique (Dest IP + MPLS) Tuples: 20,000.
- Prefix Mapping: 1:1.
- Total NHs: 20,000 (20,000 NHGs × 1 NH/NHG = 20,000 total NHs).
- gRIBI Operations: Program/Modify the full 20k entries (1 Prefix + 1 NHG + 1 NH = 3 operations per entry = 60k operations total).
  - Target Rate: 1,000 operations/second (aiming to update the full table in at most 60 seconds).
  - Operation Mix: 50% ADD, 50% DELETE operations during the high-rate phase.
- Dataplane Validation: Ensure live traffic forwarding remains stable and correct during high-rate gRIBI operations. The primary success criterion is zero packet loss during the update phase. This validates that the DUT correctly implements a "make-before-break" update sequence, where traffic for a modified prefix is seamlessly forwarded using either the old or the new state, without being dropped.
Single VRF Validation:
- Program all gRIBI entries (NHs, NHGs, Prefixes) according to the profile using baseline rate/batch.
- Validate FIB_PROGRAMMED status is received from DUT for all entries.
- Verify AFT state on DUT for a sample of entries (NH, NHG, Prefix -> NHG mapping).
- Send traffic matching programmed prefixes from appropriate ingress ports.
- Verify traffic is received on egress ports with correct MPLS-over-UDP encapsulation (correct outer IPs, UDP port, MPLS label).
- Measure packet loss (target: <= 1% steady state).
- Delete all gRIBI entries.
- Verify AFT state shows entries removed.
- Verify traffic loss is 100%.
Multi-VRF Validation:
- Program all gRIBI entries across all specified VRFs according to the profile using baseline rate/batch.
- Validate FIB_PROGRAMMED status for all entries.
- Verify AFT state on DUT for a sample of entries within different VRFs.
- Send traffic matching programmed prefixes, ensuring traffic is directed to the correct VRF (e.g., via appropriate ingress interface mapping).
- Verify traffic is received with correct MPLS-over-UDP encapsulation, including the VRF-specific MPLS label.
- Measure packet loss (target: <= 1% steady state).
- Delete all gRIBI entries.
- Verify AFT state shows entries removed across VRFs.
- Verify traffic loss is 100%.
ECMP Validation (Profile 4):
- Perform the Single VRF Validation steps.
- Additionally, verify that traffic sent towards prefixes mapped to the ECMP NHG is distributed across the multiple NHs within that NHG (requires ATE support for flow analysis or DUT counter validation for NH packet/octet counters).
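For the counter-based variant, the per-NH forwarded-packet counters (see the AFT state paths under coverage below) can drive the check. A sketch assuming ONDATRA's generated AFT state paths; the ±15% tolerance is an assumption:

```go
package mplsinudpscale

import (
	"math"
	"testing"

	"github.com/openconfig/ondatra"
	"github.com/openconfig/ondatra/gnmi"
)

// checkECMPSpread reads each NH's packets-forwarded counter and flags
// next hops falling outside a ±15% band around the mean.
func checkECMPSpread(t *testing.T, dut *ondatra.DUTDevice, defNI string, nhIDs []uint64) {
	counts := make(map[uint64]uint64, len(nhIDs))
	var total uint64
	for _, id := range nhIDs {
		c := gnmi.Get(t, dut, gnmi.OC().NetworkInstance(defNI).
			Afts().NextHop(id).Counters().PacketsForwarded().State())
		counts[id] = c
		total += c
	}
	mean := float64(total) / float64(len(nhIDs))
	for id, c := range counts {
		if mean == 0 || math.Abs(float64(c)-mean) > 0.15*mean {
			t.Errorf("NH %d forwarded %d packets, want within 15%% of mean %.0f", id, c, mean)
		}
	}
}
```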
High-Rate Programming Validation (Profile 5):
- Establish the baseline state (e.g., program 20k entries as per Profile 1).
- Start traffic flows matching the programmed entries. Verify baseline forwarding and low loss.
- Initiate high-rate gRIBI Modify operations (e.g., 100 ModifyRequests/sec, 10 ops/request, 50% ADD/50% DELETE mix targeting existing/new entries).
- Monitor gRIBI operation results (ACKs) for success/failure and latency.
- Continuously monitor traffic forwarding during the high-rate gRIBI phase.
  - Verify traffic uses correct encapsulation based on the programmed state.
  - Measure packet loss (target: minimal loss, allowing for brief transient loss during updates, but stable low loss overall).
- Validate FIB_PROGRAMMED status is received promptly for updates.
- Verify AFT state on DUT reflects the changes made during the high-rate phase.
- Stop high-rate programming and measure steady-state loss again.
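The pacing loop for the churn phase might look like this sketch: a ticker drives batched Modify operations at roughly 1,000 ops/sec with a 50/50 ADD/DELETE mix; batch size and pacing are illustrative:

```go
package mplsinudpscale

import (
	"fmt"
	"testing"
	"time"

	"github.com/openconfig/gribigo/fluent"
)

// runHighRatePhase paces batched Modify operations at ~1,000 ops/sec for
// ~60 s with a 50/50 ADD/DELETE mix over the installed prefix space.
func runHighRatePhase(t *testing.T, c *fluent.GRIBIClient, defNI, vrf string) {
	const opsPerSec, batch = 1000, 10
	tick := time.NewTicker(time.Second / (opsPerSec / batch)) // 100 batches/sec
	defer tick.Stop()
	for i := 0; i < 60*opsPerSec/batch; i++ {
		<-tick.C
		for j := 0; j < batch; j++ {
			idx := uint64(i*batch+j) % 20000
			prefix := fmt.Sprintf("2015:aa8::%x/128", idx+1)
			if j%2 == 0 { // 50% ADD (re-installs of existing/new prefixes)
				c.Modify().AddEntry(t, fluent.IPv6Entry().
					WithNetworkInstance(vrf).WithPrefix(prefix).
					WithNextHopGroup(10+idx).
					WithNextHopGroupNetworkInstance(defNI))
			} else { // 50% DELETE
				c.Modify().DeleteEntry(t, fluent.IPv6Entry().
					WithNetworkInstance(vrf).WithPrefix(prefix))
			}
		}
	}
}
```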
OpenConfig Path and RPC Coverage:
```yaml
paths:
  # AFTs Next-Hop state (Verification)
  /network-instances/network-instance/afts/next-hops/next-hop/encap-headers/encap-header/state/index:
  /network-instances/network-instance/afts/next-hops/next-hop/encap-headers/encap-header/state/type:
  /network-instances/network-instance/afts/next-hops/next-hop/encap-headers/encap-header/mpls/state/mpls-label-stack:
  /network-instances/network-instance/afts/next-hops/next-hop/encap-headers/encap-header/udp-v6/state/src-ip:
  /network-instances/network-instance/afts/next-hops/next-hop/encap-headers/encap-header/udp-v6/state/dst-ip:
  /network-instances/network-instance/afts/next-hops/next-hop/encap-headers/encap-header/udp-v6/state/src-udp-port:
  /network-instances/network-instance/afts/next-hops/next-hop/encap-headers/encap-header/udp-v6/state/dst-udp-port:
  /network-instances/network-instance/afts/next-hops/next-hop/encap-headers/encap-header/udp-v6/state/ip-ttl:
  /network-instances/network-instance/afts/next-hops/next-hop/encap-headers/encap-header/udp-v6/state/dscp:
  /network-instances/network-instance/afts/next-hops/next-hop/state/counters/packets-forwarded:
  /network-instances/network-instance/afts/next-hops/next-hop/state/counters/octets-forwarded:
  /network-instances/network-instance/afts/next-hops/next-hop/state/ip-address:  # NH IP
  /network-instances/network-instance/afts/next-hop-groups/next-hop-group/state/id:
  /network-instances/network-instance/afts/next-hop-groups/next-hop-group/next-hops/next-hop/state/index:
  /interfaces/interface/subinterfaces/subinterface/ipv4/neighbors/neighbor/state/link-layer-address:
  # AFTs Prefix Entry state (Verification)
  /network-instances/network-instance/afts/ipv4-unicast/ipv4-entry/state/next-hop-group:
  /network-instances/network-instance/afts/ipv6-unicast/ipv6-entry/state/next-hop-group:

rpcs:
  gnmi:
    gNMI.Set:
      union_replace: true
      replace: true
    # Primarily used for verification (Subscribe/Get)
    gNMI.Subscribe:
      on_change: true
    gNMI.Get:
  gribi:
    # Used for programming all AFT entries
    gRIBI.Modify:
    gRIBI.Flush:
```

Required DUT platform:
- FFF (fixed form factor)