[#10693][chore] AutoDeploy: Add L1 tests from coverage dashboard#11530

Open
marinayanov wants to merge 14 commits into NVIDIA:main from nv-auto-deploy:myanov/migrate_tests_to_CI
Conversation


@marinayanov marinayanov commented Feb 15, 2026

Summary by CodeRabbit

  • Tests
    • Expanded test coverage for H100 and DGX H100 GPU configurations in the test matrix.
    • Added new accuracy validation tests for AutoDeploy model registry functionality.
    • Shifted AutoDeploy testing to post-merge validation stages with updated test suites for multi-GPU configurations.
    • Extended model mappings for improved test data resolution.

Description

Adds AutoDeploy model-registry tests to L1 (post-merge) on H100 so we get stable coverage and catch regressions. Related to #10693.

  • New test class TestModelRegistryAccuracy and test test_autodeploy_from_registry parametrized over 7 models (1-GPU: gemma-3-1b-it; 2-GPU: Llama-3.1-8B, Ministral-8B, Nemotron-Nano-8B; 4-GPU: Codestral-22B, QwQ-32B, Llama-3.3-70B). Tests run model build + inference path; accuracy evaluation is not enabled (accuracy_check is not used in CI).
  • L1 post-merge stages added in jenkins/L0_Test.groovy: H100_PCIe-AutoDeploy-Post-Merge-1 (1-GPU), DGX_H100-2_GPUs-AutoDeploy-Post-Merge-1, DGX_H100-4_GPUs-AutoDeploy-Post-Merge-1.
  • Test-db: stage: post_merge + backend: autodeploy blocks in l0_h100.yml and l0_dgx_h100.yml with the corresponding test IDs and GPU-count conditions.
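
The GPU-count gating described above can be sketched as follows. This is a hypothetical sketch, not the actual harness code: the parameter list mirrors the models named in this description, but the exact registry IDs, and the real parametrization (pytest.mark.parametrize plus pytest.skip guards in the test class), may differ.

```python
# Hypothetical (model_id, required_gpus) pairs mirroring the PR description;
# the real registry IDs in models.yaml may be spelled differently.
MODEL_REGISTRY_ACCURACY_PARAMS = [
    ("google/gemma-3-1b-it", 1),
    ("meta-llama/Llama-3.1-8B-Instruct", 2),
    ("mistralai/Ministral-8B-Instruct-2410", 2),
    ("nvidia/Nemotron-Nano-8B", 2),
    ("mistralai/Codestral-22B-v0.1", 4),
    ("Qwen/QwQ-32B", 4),
    ("meta-llama/Llama-3.3-70B-Instruct", 4),
]

def runnable_params(params, available_gpus):
    """Keep only the entries whose GPU requirement fits the current stage."""
    return [(model, gpus) for model, gpus in params if gpus <= available_gpus]
```

With this split, the 1-GPU stage picks up one model, the 2-GPU stage three more, and the 4-GPU stage the remaining three, matching the three post-merge stages listed above.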

Test Coverage

  • New tests: tests/integration/defs/accuracy/test_llm_api_autodeploy.py::TestModelRegistryAccuracy::test_autodeploy_from_registry[...] for each model param (see test-db entries).
  • CI: Run via stages H100_PCIe-AutoDeploy-Post-Merge-1, DGX_H100-2_GPUs-AutoDeploy-Post-Merge-1, DGX_H100-4_GPUs-AutoDeploy-Post-Merge-1 (post-merge only).

PR Checklist

Please review the following before submitting your PR:

  • PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.

  • PR Follows TRT-LLM CODING GUIDELINES to the best of your knowledge.

  • Test cases are provided for new code paths (see test instructions)

  • Any new dependencies have been scanned for license and vulnerabilities

  • CODEOWNERS updated if ownership changes

  • Documentation updated as needed

  • Update tava architecture diagram if there is a significant design change in the PR.

  • The reviewers assigned automatically/manually are appropriate for the PR.

  • [x] Please check this after reviewing the above items as appropriate for this PR.

GitHub Bot Help

/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...

Provide a user-friendly way for developers to interact with a Jenkins server.

Run /bot [-h|--help] to print this help message.

See details below for each supported subcommand.

Details

run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug(experimental)]

Launch build/test pipelines. All previously running jobs will be killed.

--reuse-test (optional)pipeline-id (OPTIONAL) : Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline, or from the last pipeline if no pipeline-id is indicated. If the Git commit ID has changed, this option will always be ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.

--disable-reuse-test (OPTIONAL) : Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensure that all builds and tests are run regardless of previous successes.

--disable-fail-fast (OPTIONAL) : Disable fail fast on build/tests/infra failures.

--skip-test (OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.

--stage-list "A10-PyTorch-1, xxx" (OPTIONAL) : Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.

--gpu-type "A30, H100_PCIe" (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.

--test-backend "pytorch, cpp" (OPTIONAL) : Skip test stages which don't match the specified backends. Only support [pytorch, cpp, tensorrt, triton]. Examples: "pytorch, cpp" (does not run test stages with tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.

--only-multi-gpu-test (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.

--disable-multi-gpu-test (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.

--add-multi-gpu-test (OPTIONAL) : Force run the multi-GPU tests in addition to running L0 pre-merge pipeline.

--post-merge (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.

--extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".

--detailed-log (OPTIONAL) : Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.

--debug (OPTIONAL) : Experimental feature. Enable access to the CI container for debugging purpose. Note: Specify exactly one stage in the stage-list parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.

For guidance on mapping tests to stage names, see docs/source/reference/ci-overview.md
and the scripts/test_to_stage_mapping.py helper.

kill

kill

Kill all running builds associated with pull request.

skip

skip --comment COMMENT

Skip testing for latest commit on pull request. --comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.

reuse-pipeline

reuse-pipeline

Reuse a previous pipeline to validate current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.

Expanded the HF_ID_TO_LLM_MODELS_SUBDIR mapping to include models relevant to the new tests.

Signed-off-by: Marina Yanovskiy <256585945+marinayanov@users.noreply.github.com>
Signed-off-by: marinayanov <256585945+marinayanov@users.noreply.github.com>
…racy check parameter

Signed-off-by: marinayanov <256585945+marinayanov@users.noreply.github.com>
…method

Signed-off-by: marinayanov <256585945+marinayanov@users.noreply.github.com>
…tModelRegistryAccuracy class

Signed-off-by: marinayanov <256585945+marinayanov@users.noreply.github.com>
Signed-off-by: marinayanov <256585945+marinayanov@users.noreply.github.com>
Signed-off-by: marinayanov <256585945+marinayanov@users.noreply.github.com>
@marinayanov marinayanov requested review from a team as code owners February 15, 2026 14:01
@marinayanov marinayanov changed the title [#10693][infra] AutoDeploy: Add L1 tests from coverage dashboard [#10693] @coderabbitai title Feb 15, 2026
@marinayanov marinayanov changed the title [#10693] @coderabbitai title [#10693][infra] AutoDeploy: Add L1 tests from coverage dashboard Feb 15, 2026
@marinayanov marinayanov changed the title [#10693][infra] AutoDeploy: Add L1 tests from coverage dashboard [#10693] [infra] AutoDeploy: Add L1 tests from coverage dashboard Feb 15, 2026

coderabbitai bot commented Feb 15, 2026

📝 Walkthrough

Walkthrough

This PR extends H100 AutoDeploy testing by adding model registry-based accuracy validation. It introduces three new test configurations to the L0 test matrix, creates a new test class for registry-backed accuracy checks, updates H100 test list YAML files to shift from pre-merge to post-merge AutoDeploy validation with registry tests, and extends HuggingFace Hub model mappings.

Changes

Changes by cohort:

  • Jenkins Test Configuration (jenkins/L0_Test.groovy): Added three new test matrix entries: H100_PCIe-AutoDeploy for non-SLURM x86, and DGX_H100-2_GPUs and DGX_H100-4_GPUs for SLURM x86, each with its respective GPU/orchestration configuration.
  • AutoDeploy Registry Test Suite (tests/integration/defs/accuracy/test_llm_api_autodeploy.py): Introduced the TestModelRegistryAccuracy test class extending LlmapiAccuracyTestHarness. Adds registry YAML resolution via _get_registry_yaml_extra(), parameter overrides through deep merging, and a test_autodeploy_from_registry() method supporting multiple registry models with optional GPU/memory guards and accuracy validation tasks.
  • H100 DGX Test Lists (tests/integration/test_lists/test-db/l0_dgx_h100.yml): Replaced pre_merge AutoDeploy blocks with post_merge blocks for the 2-GPU and 4-GPU configurations; updated test sets to use autodeploy-from-registry checks and added mirrored post_merge blocks with registry-based test entries.
  • H100 Single-GPU Test Lists (tests/integration/test_lists/test-db/l0_h100.yml): Added AutoDeploy L1 post_merge stage blocks (appearing twice) with the google_gemma-3-1b-it registry test for the H100 GPU on Ubuntu with MPI orchestration.
  • Model Mappings (tests/test_common/llm_data.py): Extended HF_ID_TO_LLM_MODELS_SUBDIR with new HuggingFace Hub to local model directory mappings for fallback resolution.
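
The "parameter overrides through deep merging" mentioned above typically amount to a recursive dict merge. A minimal sketch follows; the real deep_merge_dicts in tensorrt_llm._torch.auto_deploy.utils._config may behave differently (for example around list handling), and the config keys shown are illustrative:

```python
def deep_merge_dicts(base: dict, override: dict) -> dict:
    """Return a new dict: base updated with override, recursing into nested dicts."""
    merged = dict(base)
    for key, value in override.items():
        if isinstance(merged.get(key), dict) and isinstance(value, dict):
            merged[key] = deep_merge_dicts(merged[key], value)
        else:
            merged[key] = value
    return merged

# A per-model override extends the base config without clobbering
# unrelated nested keys.
base = {"max_seq_len": 8192, "kv_cache_config": {"free_gpu_memory_fraction": 0.9}}
override = {"kv_cache_config": {"enable_block_reuse": False}}
merged = deep_merge_dicts(base, override)
```

This is why a registry yaml_extra that only sets one nested key can be layered on top of the base accuracy config without losing the rest.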

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~22 minutes

🚥 Pre-merge checks | ✅ 2 | ❌ 2
❌ Failed checks (2 warnings)
Check name Status Explanation Resolution
Docstring Coverage ⚠️ Warning Docstring coverage is 25.00% which is insufficient. The required threshold is 80.00%. Write docstrings for the functions missing them to satisfy the coverage threshold.
Merge Conflict Detection ⚠️ Warning ❌ Merge conflicts detected (95 files):

⚔️ cpp/kernels/xqa/mha_sm90.cu (content)
⚔️ cpp/tensorrt_llm/batch_manager/kvCacheManagerV2Utils.cu (content)
⚔️ cpp/tensorrt_llm/batch_manager/kvCacheManagerV2Utils.h (content)
⚔️ cpp/tensorrt_llm/common/attentionOp.cpp (content)
⚔️ cpp/tensorrt_llm/executor/cache_transmission/agent_utils/connection.cpp (content)
⚔️ cpp/tensorrt_llm/nanobind/batch_manager/kvCacheManagerV2Utils.cpp (content)
⚔️ cpp/tests/unit_tests/multi_gpu/cacheTransceiverTest.cpp (content)
⚔️ docs/source/commands/trtllm-serve/trtllm-serve.rst (content)
⚔️ docs/source/index.rst (content)
⚔️ docs/source/overview.md (content)
⚔️ jenkins/L0_Test.groovy (content)
⚔️ requirements-dev.txt (content)
⚔️ security_scanning/examples/apps/poetry.lock (content)
⚔️ security_scanning/examples/auto_deploy/poetry.lock (content)
⚔️ security_scanning/examples/draft_target_model/poetry.lock (content)
⚔️ security_scanning/examples/eagle/poetry.lock (content)
⚔️ security_scanning/examples/llm-eval/lm-eval-harness/poetry.lock (content)
⚔️ security_scanning/examples/lookahead/poetry.lock (content)
⚔️ security_scanning/examples/medusa/poetry.lock (content)
⚔️ security_scanning/examples/models/contrib/baichuan/poetry.lock (content)
⚔️ security_scanning/examples/models/contrib/bloom/poetry.lock (content)
⚔️ security_scanning/examples/models/contrib/chatglm-6b/poetry.lock (content)
⚔️ security_scanning/examples/models/contrib/chatglm2-6b/poetry.lock (content)
⚔️ security_scanning/examples/models/contrib/chatglm3-6b-32k/poetry.lock (content)
⚔️ security_scanning/examples/models/contrib/dbrx/poetry.lock (content)
⚔️ security_scanning/examples/models/contrib/deepseek_v1/poetry.lock (content)
⚔️ security_scanning/examples/models/contrib/deepseek_v2/poetry.lock (content)
⚔️ security_scanning/examples/models/contrib/falcon/poetry.lock (content)
⚔️ security_scanning/examples/models/contrib/gptj/poetry.lock (content)
⚔️ security_scanning/examples/models/contrib/gptneox/poetry.lock (content)
⚔️ security_scanning/examples/models/contrib/grok/poetry.lock (content)
⚔️ security_scanning/examples/models/contrib/hyperclovax/poetry.lock (content)
⚔️ security_scanning/examples/models/contrib/internlm/poetry.lock (content)
⚔️ security_scanning/examples/models/contrib/jais/poetry.lock (content)
⚔️ security_scanning/examples/models/contrib/mmdit/poetry.lock (content)
⚔️ security_scanning/examples/models/contrib/mpt/poetry.lock (content)
⚔️ security_scanning/examples/models/contrib/opt/poetry.lock (content)
⚔️ security_scanning/examples/models/contrib/skywork/poetry.lock (content)
⚔️ security_scanning/examples/models/contrib/smaug/poetry.lock (content)
⚔️ security_scanning/examples/models/contrib/stdit/poetry.lock (content)
⚔️ security_scanning/examples/models/core/commandr/poetry.lock (content)
⚔️ security_scanning/examples/models/core/gemma/poetry.lock (content)
⚔️ security_scanning/examples/models/core/glm-4-9b/poetry.lock (content)
⚔️ security_scanning/examples/models/core/gpt/poetry.lock (content)
⚔️ security_scanning/examples/models/core/llama/poetry.lock (content)
⚔️ security_scanning/examples/models/core/mamba/poetry.lock (content)
⚔️ security_scanning/examples/models/core/mixtral/poetry.lock (content)
⚔️ security_scanning/examples/models/core/mllama/poetry.lock (content)
⚔️ security_scanning/examples/models/core/nemotron/poetry.lock (content)
⚔️ security_scanning/examples/models/core/phi/poetry.lock (content)
⚔️ security_scanning/examples/models/core/qwen/poetry.lock (content)
⚔️ security_scanning/examples/models/core/qwen/pyproject.toml (content)
⚔️ security_scanning/examples/models/core/qwen2audio/poetry.lock (content)
⚔️ security_scanning/examples/models/core/qwenvl/poetry.lock (content)
⚔️ security_scanning/examples/models/core/recurrentgemma/poetry.lock (content)
⚔️ security_scanning/examples/models/core/whisper/poetry.lock (content)
⚔️ security_scanning/examples/ngram/poetry.lock (content)
⚔️ security_scanning/examples/quantization/poetry.lock (content)
⚔️ security_scanning/examples/ray_orchestrator/poetry.lock (content)
⚔️ security_scanning/examples/redrafter/poetry.lock (content)
⚔️ security_scanning/examples/serve/poetry.lock (content)
⚔️ security_scanning/examples/serve/pyproject.toml (content)
⚔️ security_scanning/examples/trtllm-eval/poetry.lock (content)
⚔️ security_scanning/metadata.json (content)
⚔️ security_scanning/poetry.lock (content)
⚔️ security_scanning/pyproject.toml (content)
⚔️ security_scanning/tests/integration/defs/perf/poetry.lock (content)
⚔️ security_scanning/triton_backend/poetry.lock (content)
⚔️ tensorrt_llm/_torch/models/modeling_qwen3vl.py (content)
⚔️ tensorrt_llm/_torch/models/modeling_qwen3vl_moe.py (content)
⚔️ tensorrt_llm/_torch/pyexecutor/model_engine.py (content)
⚔️ tensorrt_llm/_torch/pyexecutor/py_executor.py (content)
⚔️ tensorrt_llm/_torch/pyexecutor/resource_manager.py (content)
⚔️ tensorrt_llm/_torch/speculative/interface.py (content)
⚔️ tensorrt_llm/llmapi/llm_args.py (content)
⚔️ tensorrt_llm/runtime/kv_cache_manager_v2/__init__.pyi (content)
⚔️ tensorrt_llm/runtime/kv_cache_manager_v2/_core/_kv_cache_manager.py (content)
⚔️ tensorrt_llm/runtime/kv_cache_manager_v2/_cuda_virt_mem.py (content)
⚔️ tests/integration/defs/accuracy/test_llm_api_autodeploy.py (content)
⚔️ tests/integration/defs/accuracy/test_llm_api_pytorch.py (content)
⚔️ tests/integration/defs/disaggregated/test_auto_scaling.py (content)
⚔️ tests/integration/defs/disaggregated/test_disaggregated.py (content)
⚔️ tests/integration/defs/disaggregated/test_disaggregated_single_gpu.py (content)
⚔️ tests/integration/defs/perf/disagg/execution/executor.py (content)
⚔️ tests/integration/defs/perf/disagg/utils/common.py (content)
⚔️ tests/integration/defs/stress_test/stress_test.py (content)
⚔️ tests/integration/defs/test_e2e.py (content)
⚔️ tests/integration/test_lists/qa/llm_function_stress.txt (content)
⚔️ tests/integration/test_lists/test-db/l0_b200.yml (content)
⚔️ tests/integration/test_lists/test-db/l0_dgx_h100.yml (content)
⚔️ tests/integration/test_lists/test-db/l0_h100.yml (content)
⚔️ tests/integration/test_lists/waives.txt (content)
⚔️ tests/test_common/llm_data.py (content)
⚔️ tests/unittest/_torch/attention/test_attention_mla.py (content)
⚔️ tests/unittest/llmapi/apps/_test_openai_chat_harmony.py (content)

These conflicts must be resolved before merging into main.
Resolve conflicts locally and push changes to this branch.
✅ Passed checks (2 passed)
Check name Status Explanation
Description check ✅ Passed The PR description provides clear explanations of what (new AutoDeploy tests), why (stable coverage and catch regressions), test coverage details, and includes the PR checklist acknowledgment, meeting template requirements.
Title check ✅ Passed The PR title clearly identifies the main objective: adding L1 tests from a coverage dashboard for AutoDeploy functionality on H100 hardware.


@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 2

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
tests/integration/defs/accuracy/test_llm_api_autodeploy.py (1)

1-2: ⚠️ Potential issue | 🟠 Major

Update the NVIDIA copyright year to 2026.

Line 1 still shows 2025 even though this file is modified in 2026. Please bump the year to reflect the latest meaningful modification.

🧩 Suggested update
-# SPDX-FileCopyrightText: Copyright (c) 2025 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+# SPDX-FileCopyrightText: Copyright (c) 2026 NVIDIA CORPORATION & AFFILIATES. All rights reserved.

As per coding guidelines, "All source files must contain an NVIDIA copyright header with the year of latest meaningful modification."

tests/test_common/llm_data.py (1)

1-2: ⚠️ Potential issue | 🟠 Major

Update the NVIDIA copyright year to 2026.

This file was modified; the header still ends at 2025. Please update the range to include 2026.

🧩 Suggested update
-# SPDX-FileCopyrightText: Copyright (c) 2022-2025 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+# SPDX-FileCopyrightText: Copyright (c) 2022-2026 NVIDIA CORPORATION & AFFILIATES. All rights reserved.

As per coding guidelines, "All source files must contain an NVIDIA copyright header with the year of latest meaningful modification."

🤖 Fix all issues with AI agents
In `@tests/integration/defs/accuracy/test_llm_api_autodeploy.py`:
- Around lines 16-26: the imports bring in symbols directly, violating the module-namespace rule. Import modules instead and qualify usages: import pathlib and reference pathlib.Path; import defs.conftest as conftest and use conftest.skip_pre_blackwell; import test_common.llm_data as llm_data and reference llm_data.hf_id_to_local_model_dir and llm_data.llm_models_root; import tensorrt_llm._torch.auto_deploy as auto_deploy and reference auto_deploy.LLM; import tensorrt_llm._torch.auto_deploy.utils._config as _config and use _config.deep_merge_dicts; import tensorrt_llm.quantization as quantization and use quantization.QuantAlgo. Update all occurrences in the file to use these qualified names.
- Around lines 595-601: the accuracy_check block catches a broad Exception around task.evaluate. Catch only the exception types task.evaluate can actually raise (e.g., AssertionError, ValueError, RuntimeError; adjust to the real expected exceptions), re-raising each with the formatted message f"[{task_cls.__name__}] {e}" via raise type(e)(...) from None, so only intended errors are caught and all other exceptions propagate.
🧹 Nitpick comments (1)
tests/integration/defs/accuracy/test_llm_api_autodeploy.py (1)

499-546: Annotate mutable class attributes with ClassVar.

RUF012 flags mutable class attributes; adding ClassVar helps type-checkers and communicates intent for shared class-level data.

♻️ Suggested update
+import typing
@@
-    BASE_ACCURACY = {"max_seq_len": MAX_SEQ_LEN}
+    BASE_ACCURACY: typing.ClassVar[dict[str, int]] = {"max_seq_len": MAX_SEQ_LEN}
@@
-    MODEL_REGISTRY_ACCURACY_PARAMS = [
+    MODEL_REGISTRY_ACCURACY_PARAMS: typing.ClassVar[list] = [

Comment on lines 16 to 26
from pathlib import Path

import pytest
import torch
import yaml
from defs.conftest import skip_pre_blackwell
from test_common.llm_data import hf_id_to_local_model_dir, llm_models_root

from tensorrt_llm._torch.auto_deploy import LLM as AutoDeployLLM
from tensorrt_llm._torch.auto_deploy.utils._config import deep_merge_dicts
from tensorrt_llm.quantization import QuantAlgo

⚠️ Potential issue | 🟡 Minor

Align new imports with the module-namespace rule.

Line 16 and Line 25 import classes/functions directly. Please import modules and qualify usage (e.g., pathlib.Path, _config.deep_merge_dicts) to keep namespaces per the repo import guideline.

🧩 Suggested update
-from pathlib import Path
+import pathlib

-from tensorrt_llm._torch.auto_deploy.utils._config import deep_merge_dicts
+from tensorrt_llm._torch.auto_deploy.utils import _config
@@
-        registry_path = (Path(__file__).resolve().parents[4] /
+        registry_path = (pathlib.Path(__file__).resolve().parents[4] /
                          "examples/auto_deploy/model_registry")
@@
-        merged = deep_merge_dicts(self.BASE_ACCURACY, config_overrides)
+        merged = _config.deep_merge_dicts(self.BASE_ACCURACY, config_overrides)

As per coding guidelines, "Python imports must use from package.subpackage import module style; never use from module import Class."

Also applies to: 551-585


Comment on lines 595 to 601
if accuracy_check:
for task_cls in tasks:
task = task_cls(model_name)
try:
task.evaluate(llm, sampling_params=sampling_params)
except Exception as e:
raise type(e)(f"[{task_cls.__name__}] {e}") from None

⚠️ Potential issue | 🟡 Minor

Narrow the exception type in the accuracy loop.

Catching Exception is too broad; please catch only the specific exceptions task.evaluate can raise.

🧩 Suggested narrowing (adjust to actual expected exceptions)
-                    except Exception as e:
+                    except (AssertionError, RuntimeError, ValueError) as e:

As per coding guidelines, "Avoid broad exception handling—catch specific exceptions, not bare except: clauses."

🧰 Tools
🪛 Ruff (0.15.0)

[warning] 600-600: Do not catch blind exception: Exception

(BLE001)


[warning] 601-601: Avoid specifying long messages outside the exception class

(TRY003)


@marinayanov marinayanov changed the title [#10693] [infra] AutoDeploy: Add L1 tests from coverage dashboard [#10693][chore] AutoDeploy: Add L1 tests from coverage dashboard Feb 15, 2026
Comment on lines 500 to 502
MAX_SEQ_LEN = max(MMLU.MAX_INPUT_LEN + MMLU.MAX_OUTPUT_LEN,
GSM8K.MAX_INPUT_LEN + GSM8K.MAX_OUTPUT_LEN)
BASE_ACCURACY = {"max_seq_len": MAX_SEQ_LEN}
Collaborator

Might be best to set it according to the accuracy tasks we enable.

Collaborator Author

I'll remove it in the meantime
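
The suggestion above, deriving max_seq_len from only the enabled accuracy tasks rather than hardcoding both, could look like the following sketch. The task classes and their length constants here are hypothetical stand-ins for the harness's MMLU/GSM8K definitions:

```python
# Hypothetical stand-ins; the real MMLU/GSM8K classes in the accuracy harness
# define their own MAX_INPUT_LEN / MAX_OUTPUT_LEN values.
class MMLU:
    MAX_INPUT_LEN = 4094
    MAX_OUTPUT_LEN = 2

class GSM8K:
    MAX_INPUT_LEN = 4096
    MAX_OUTPUT_LEN = 256

def max_seq_len_for(tasks):
    """Size max_seq_len from only the tasks actually enabled for a model."""
    return max(t.MAX_INPUT_LEN + t.MAX_OUTPUT_LEN for t in tasks)
```

Passing the per-model task list into this helper would avoid reserving sequence length for tasks a model never runs.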

@galagam galagam left a comment

Overall looks good, added some comments.
We'll need infra devs to chime in on the infra changes.

Comment on lines 508 to 511
{
"max_batch_size": 128,
"attn_backend": "flashinfer",
},
Collaborator

Why do we need hardcoding the config here? Can this be defined in the model_registry yaml for each model instead?
Same for the other models.

Collaborator Author

I'll remove those specific configs. We load all config from the yaml_extra files in models.yaml; some of those yamls don't define everything, and we can add or extend configs (in the registry or as overrides) later when we enable accuracy checks or if we hit issues.

@galagam galagam changed the title [#10693][chore] AutoDeploy: Add L1 tests from coverage dashboard [#10693][chore] AutoDeploy: Add L1 tests from coverage dashboard Feb 15, 2026
Signed-off-by: marinayanov <256585945+marinayanov@users.noreply.github.com>
Signed-off-by: marinayanov <256585945+marinayanov@users.noreply.github.com>
@marinayanov
Collaborator Author

/bot run --extra-stage "DGX_H100-2_GPUs-AutoDeploy-Post-Merge-1, DGX_H100-4_GPUs-AutoDeploy-Post-Merge-1, H100_PCIe-AutoDeploy-Post-Merge-1"

@tensorrt-cicd
Collaborator

PR_Github #36039 [ run ] triggered by Bot. Commit: fd5169a

@tensorrt-cicd
Collaborator

PR_Github #36039 [ run ] completed with state FAILURE. Commit: fd5169a
/LLM/main/L0_MergeRequest_PR pipeline #27846 completed with status: 'FAILURE'

⚠️ Action Required:

  • Please check the failed tests and fix your PR
  • If you cannot view the failures, ask the CI triggerer to share details
  • Once fixed, request an NVIDIA team member to trigger CI again

@marinayanov
Collaborator Author

/bot run --extra-stage "DGX_H100-2_GPUs-AutoDeploy-Post-Merge-1, DGX_H100-4_GPUs-AutoDeploy-Post-Merge-1, H100_PCIe-AutoDeploy-Post-Merge-1"

@tensorrt-cicd
Collaborator

PR_Github #36044 [ run ] triggered by Bot. Commit: fd5169a

@tensorrt-cicd
Collaborator

PR_Github #36044 [ run ] completed with state SUCCESS. Commit: fd5169a
/LLM/main/L0_MergeRequest_PR pipeline #27852 completed with status: 'FAILURE'

⚠️ Action Required:

  • Please check the failed tests and fix your PR
  • If you cannot view the failures, ask the CI triggerer to share details
  • Once fixed, request an NVIDIA team member to trigger CI again

… model registry accuracy tests

Signed-off-by: marinayanov <256585945+marinayanov@users.noreply.github.com>
Signed-off-by: marinayanov <256585945+marinayanov@users.noreply.github.com>
@marinayanov
Collaborator Author

/bot run --extra-stage "DGX_H100-2_GPUs-AutoDeploy-Post-Merge-1, DGX_H100-4_GPUs-AutoDeploy-Post-Merge-1, H100_PCIe-AutoDeploy-Post-Merge-1"

@tensorrt-cicd
Collaborator

PR_Github #36050 [ run ] triggered by Bot. Commit: fd31309

@tensorrt-cicd
Collaborator

PR_Github #36050 [ run ] completed with state SUCCESS. Commit: fd31309
/LLM/main/L0_MergeRequest_PR pipeline #27856 completed with status: 'FAILURE'

⚠️ Action Required:

  • Please check the failed tests and fix your PR
  • If you cannot view the failures, ask the CI triggerer to share details
  • Once fixed, request an NVIDIA team member to trigger CI again

@marinayanov
Collaborator Author

/bot run --extra-stage "DGX_H100-2_GPUs-AutoDeploy-Post-Merge-1, DGX_H100-4_GPUs-AutoDeploy-Post-Merge-1, H100_PCIe-AutoDeploy-Post-Merge-1"

@tensorrt-cicd
Collaborator

PR_Github #36072 [ run ] triggered by Bot. Commit: d05fcac

// "H100_PCIe-TensorRT-Post-Merge-4": ["h100-cr", "l0_h100", 4, 5],
// "H100_PCIe-TensorRT-Post-Merge-5": ["h100-cr", "l0_h100", 5, 5],
"H100_PCIe-FMHA-Post-Merge-1": ["h100-cr", "l0_h100", 1, 1],
"H100_PCIe-AutoDeploy-Post-Merge-1": ["h100-cr", "l0_h100", 1, 1],
Collaborator

Move to the new config style, e.g. "DGX_H100-PyTorch-Post-Merge-2": ["auto:dgx-h100-x1", "l0_h100", 2, 2].

auto_trigger: others
orchestrator: mpi
tests:
- accuracy/test_llm_api_autodeploy.py::TestModelRegistryAccuracy::test_autodeploy_from_registry[meta-llama_Llama-3.1-8B-Instruct-False]
Collaborator

What are the execution times for these tests, respectively? It's inefficient to add more test stages if the related tests only need a few minutes or so.

Every test stage has non-negligible setup and tear-down costs.

@tensorrt-cicd
Collaborator

PR_Github #36072 [ run ] completed with state SUCCESS. Commit: d05fcac
/LLM/main/L0_MergeRequest_PR pipeline #27870 completed with status: 'SUCCESS'
Pipeline passed with automatic retried tests. Check the rerun report for details.
