[#10693][chore] AutoDeploy: Add L1 tests from coverage dashboard #11530
marinayanov wants to merge 14 commits into NVIDIA:main from
Conversation
Expanded the mapping to include models relevant to the new tests. Signed-off-by: Marina Yanovskiy <256585945+marinayanov@users.noreply.github.com> Signed-off-by: marinayanov <256585945+marinayanov@users.noreply.github.com>
…racy check parameter Signed-off-by: marinayanov <256585945+marinayanov@users.noreply.github.com>
…method Signed-off-by: marinayanov <256585945+marinayanov@users.noreply.github.com>
…tModelRegistryAccuracy class Signed-off-by: marinayanov <256585945+marinayanov@users.noreply.github.com>
Signed-off-by: marinayanov <256585945+marinayanov@users.noreply.github.com>
Signed-off-by: marinayanov <256585945+marinayanov@users.noreply.github.com>
📝 Walkthrough
This PR extends H100 AutoDeploy testing by adding model registry-based accuracy validation. It introduces three new test configurations to the L0 test matrix, creates a new test class for registry-backed accuracy checks, updates the H100 test list YAML files to shift AutoDeploy validation from pre-merge to post-merge with registry tests, and extends the HuggingFace Hub model mappings.

Changes
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~22 minutes
🚥 Pre-merge checks: ✅ 2 | ❌ 2
❌ Failed checks (2 warnings)
✅ Passed checks (2 passed)
✏️ Tip: You can configure your own custom pre-merge checks in the settings.
✨ Finishing touches
🧪 Generate unit tests (beta)
⚔️ Resolve merge conflicts (beta)
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
Actionable comments posted: 2
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
tests/integration/defs/accuracy/test_llm_api_autodeploy.py (1)
1-2: ⚠️ Potential issue | 🟠 Major
Update the NVIDIA copyright year to 2026.
Line 1 still shows 2025 even though this file is modified in 2026. Please bump the year to reflect the latest meaningful modification.
🧩 Suggested update
-# SPDX-FileCopyrightText: Copyright (c) 2025 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+# SPDX-FileCopyrightText: Copyright (c) 2026 NVIDIA CORPORATION & AFFILIATES. All rights reserved.

As per coding guidelines, "All source files must contain an NVIDIA copyright header with the year of latest meaningful modification."
tests/test_common/llm_data.py (1)
1-2: ⚠️ Potential issue | 🟠 Major
Update the NVIDIA copyright year to 2026.
This file was modified; the header still ends at 2025. Please update the range to include 2026.
🧩 Suggested update
-# SPDX-FileCopyrightText: Copyright (c) 2022-2025 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+# SPDX-FileCopyrightText: Copyright (c) 2022-2026 NVIDIA CORPORATION & AFFILIATES. All rights reserved.

As per coding guidelines, "All source files must contain an NVIDIA copyright header with the year of latest meaningful modification."
🤖 Fix all issues with AI agents
In `@tests/integration/defs/accuracy/test_llm_api_autodeploy.py`:
- Around line 16-26: The imports violate the module-namespace rule by importing
symbols directly; change them to import the modules instead and update usages
accordingly: import pathlib and reference pathlib.Path, import
defs.conftest as conftest and use conftest.skip_pre_blackwell, import
test_common.llm_data as llm_data and reference llm_data.hf_id_to_local_model_dir
and llm_data.llm_models_root, import tensorrt_llm._torch.auto_deploy as
auto_deploy and reference auto_deploy.LLM (AutoDeployLLM), import
tensorrt_llm._torch.auto_deploy.utils._config as _config and use
_config.deep_merge_dicts, and import tensorrt_llm.quantization as quantization
and use quantization.QuantAlgo; update all occurrences in the file to use these
qualified names.
- Around line 595-601: The loop currently catches a broad Exception when calling
task.evaluate in the accuracy_check block; replace the broad except Exception
with specific exception types that task.evaluate can actually raise (e.g.,
AssertionError, ValueError, RuntimeError — adjust to the real expected
exceptions) by adding one or more explicit except clauses (e.g., except
AssertionError as e:, except ValueError as e:) that re-raise the same exception
type with the formatted message f"[{task_cls.__name__}] {e}" (using raise
type(e)(...) from None as currently done) so only intended errors are caught and
all other exceptions propagate.
🧹 Nitpick comments (1)
tests/integration/defs/accuracy/test_llm_api_autodeploy.py (1)
499-546: Annotate mutable class attributes with ClassVar.
RUF012 flags mutable class attributes; adding ClassVar helps type-checkers and communicates intent for shared class-level data.
♻️ Suggested update
+import typing
@@
-    BASE_ACCURACY = {"max_seq_len": MAX_SEQ_LEN}
+    BASE_ACCURACY: typing.ClassVar[dict[str, int]] = {"max_seq_len": MAX_SEQ_LEN}
@@
-    MODEL_REGISTRY_ACCURACY_PARAMS = [
+    MODEL_REGISTRY_ACCURACY_PARAMS: typing.ClassVar[list] = [
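For readers unfamiliar with the pattern, here is a minimal, self-contained sketch of the ClassVar annotation being suggested; the class name and attribute values below are illustrative placeholders, not the actual contents of the test file:

```python
# Minimal sketch of the ClassVar pattern (illustrative values only).
import typing


class RegistryAccuracyExample:
    MAX_SEQ_LEN = 8192  # placeholder value, not the real computed limit
    # Annotating mutable class-level containers as ClassVar tells type
    # checkers (and Ruff's RUF012 rule) that these are shared class data,
    # not per-instance attributes.
    BASE_ACCURACY: typing.ClassVar[dict[str, int]] = {"max_seq_len": MAX_SEQ_LEN}
    MODEL_REGISTRY_ACCURACY_PARAMS: typing.ClassVar[list[tuple[str, bool]]] = [
        ("meta-llama/Llama-3.1-8B-Instruct", False),
    ]


print(RegistryAccuracyExample.BASE_ACCURACY)  # {'max_seq_len': 8192}
```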
from pathlib import Path

import pytest
import torch
import yaml
from defs.conftest import skip_pre_blackwell
from test_common.llm_data import hf_id_to_local_model_dir, llm_models_root

from tensorrt_llm._torch.auto_deploy import LLM as AutoDeployLLM
from tensorrt_llm._torch.auto_deploy.utils._config import deep_merge_dicts
from tensorrt_llm.quantization import QuantAlgo
Align new imports with the module-namespace rule.
Line 16 and Line 25 import classes/functions directly. Please import modules and qualify usage (e.g., pathlib.Path, _config.deep_merge_dicts) to keep namespaces per the repo import guideline.
🧩 Suggested update
-from pathlib import Path
+import pathlib
-from tensorrt_llm._torch.auto_deploy.utils._config import deep_merge_dicts
+from tensorrt_llm._torch.auto_deploy.utils import _config
@@
- registry_path = (Path(__file__).resolve().parents[4] /
-                  "examples/auto_deploy/model_registry")
+ registry_path = (pathlib.Path(__file__).resolve().parents[4] /
+                  "examples/auto_deploy/model_registry")
@@
- merged = deep_merge_dicts(self.BASE_ACCURACY, config_overrides)
+ merged = _config.deep_merge_dicts(self.BASE_ACCURACY, config_overrides)

As per coding guidelines, "Python imports must use from package.subpackage import module style; never use from module import Class."
Also applies to: 551-585
🤖 Prompt for AI Agents
In `@tests/integration/defs/accuracy/test_llm_api_autodeploy.py` around lines 16 -
26, The imports violate the module-namespace rule by importing symbols directly;
change them to import the modules instead and update usages accordingly: import
pathlib as pathlib (or from pathlib import Path as module Path? — actually
import pathlib and reference pathlib.Path), import defs.conftest as conftest and
use conftest.skip_pre_blackwell, import test_common.llm_data as llm_data and
reference llm_data.hf_id_to_local_model_dir and llm_data.llm_models_root, import
tensorrt_llm._torch.auto_deploy as auto_deploy and reference auto_deploy.LLM
(AutoDeployLLM), import tensorrt_llm._torch.auto_deploy.utils._config as _config
and use _config.deep_merge_dicts, and import tensorrt_llm.quantization as
quantization and use quantization.QuantAlgo; update all occurrences in the file
to use these qualified names.
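As a standalone illustration of the import style being requested (standard-library modules only, so it runs anywhere; the path arithmetic below is illustrative, not the file's real layout):

```python
# Module-namespace import style: import the module, qualify each use.
# Avoid:   from pathlib import Path
# Prefer:  import pathlib  ->  pathlib.Path(...)
import json
import pathlib

# Qualified use keeps the origin of each name obvious at the call site.
registry_path = pathlib.Path(__file__).resolve().parent / "model_registry"
overrides = json.loads('{"max_batch_size": 128}')
print(registry_path, overrides)
```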
        if accuracy_check:
            for task_cls in tasks:
                task = task_cls(model_name)
                try:
                    task.evaluate(llm, sampling_params=sampling_params)
                except Exception as e:
                    raise type(e)(f"[{task_cls.__name__}] {e}") from None
Narrow the exception type in the accuracy loop.
Catching Exception is too broad; please catch only the specific exceptions task.evaluate can raise.
🧩 Suggested narrowing (adjust to actual expected exceptions)
- except Exception as e:
+ except (AssertionError, RuntimeError, ValueError) as e:

As per coding guidelines, "Avoid broad exception handling—catch specific exceptions, not bare except: clauses."
🧰 Tools
🪛 Ruff (0.15.0)
[warning] 600-600: Do not catch blind exception: Exception
(BLE001)
[warning] 601-601: Avoid specifying long messages outside the exception class
(TRY003)
🤖 Prompt for AI Agents
In `@tests/integration/defs/accuracy/test_llm_api_autodeploy.py` around lines 595
- 601, The loop currently catches a broad Exception when calling task.evaluate
in the accuracy_check block; replace the broad except Exception with specific
exception types that task.evaluate can actually raise (e.g., AssertionError,
ValueError, RuntimeError — adjust to the real expected exceptions) by adding one
or more explicit except clauses (e.g., except AssertionError as e:, except
ValueError as e:) that re-raise the same exception type with the formatted
message f"[{task_cls.__name__}] {e}" (using raise type(e)(...) from None as
currently done) so only intended errors are caught and all other exceptions
propagate.
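A small runnable sketch of the narrowed re-raise pattern described above; evaluate() is a stand-in for task.evaluate, and the exact exception tuple should match what the real evaluator can raise:

```python
# Sketch: catch only expected exception types and re-raise the same type
# with a task-name prefix; anything else propagates unchanged.
def evaluate() -> None:
    # Stand-in for task.evaluate(); assume it raises AssertionError when
    # an accuracy threshold is not met.
    raise AssertionError("accuracy 0.41 below threshold 0.45")


def run_task(task_name: str) -> None:
    try:
        evaluate()
    except (AssertionError, ValueError, RuntimeError) as e:
        raise type(e)(f"[{task_name}] {e}") from None


try:
    run_task("GSM8K")
except AssertionError as e:
    print(e)  # [GSM8K] accuracy 0.41 below threshold 0.45
```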
    MAX_SEQ_LEN = max(MMLU.MAX_INPUT_LEN + MMLU.MAX_OUTPUT_LEN,
                      GSM8K.MAX_INPUT_LEN + GSM8K.MAX_OUTPUT_LEN)
    BASE_ACCURACY = {"max_seq_len": MAX_SEQ_LEN}
Might be best to set it according to the accuracy tasks we enable.
I'll remove it in the meantime
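For illustration only, deriving the limit from whichever tasks are enabled could look roughly like the sketch below; the task classes and length values are placeholders, not the repo's MMLU/GSM8K definitions:

```python
# Hypothetical sketch: compute max_seq_len from the enabled accuracy
# tasks instead of hardcoding a fixed pair of tasks.
import dataclasses


@dataclasses.dataclass(frozen=True)
class TaskLimits:
    max_input_len: int   # stand-in for Task.MAX_INPUT_LEN
    max_output_len: int  # stand-in for Task.MAX_OUTPUT_LEN


ENABLED_TASKS = [TaskLimits(4096, 512), TaskLimits(2048, 256)]

MAX_SEQ_LEN = max(t.max_input_len + t.max_output_len for t in ENABLED_TASKS)
BASE_ACCURACY = {"max_seq_len": MAX_SEQ_LEN}
print(BASE_ACCURACY)  # {'max_seq_len': 4608}
```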
galagam left a comment
Overall looks good, added some comments.
We'll need infra devs to chime in on the infra changes.
    {
        "max_batch_size": 128,
        "attn_backend": "flashinfer",
    },
Why do we need to hardcode the config here? Can this be defined in the model_registry yaml for each model instead?
Same for the other models.
I'll remove those specific configs. We load all config from the yaml_extra files in models.yaml; some of those yamls don't define everything, and we can add or extend configs (in the registry or as overrides) later when we enable accuracy checks or if we hit issues.
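To make that override flow concrete, here is a rough sketch of layering a test-side override on top of a config loaded from a registry YAML; merge() is a generic recursive dict merge written for illustration, not the repo's deep_merge_dicts, and the keys and values are made up:

```python
# Illustrative recursive merge of a registry config with test overrides.
def merge(base: dict, override: dict) -> dict:
    out = dict(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(out.get(key), dict):
            out[key] = merge(out[key], value)  # merge nested sections
        else:
            out[key] = value  # override wins for scalars and lists
    return out


base_from_registry = {"max_seq_len": 8192, "kv_cache_config": {"dtype": "fp8"}}
test_override = {"max_batch_size": 128, "kv_cache_config": {"free_gpu_memory_fraction": 0.8}}

print(merge(base_from_registry, test_override))
# {'max_seq_len': 8192, 'kv_cache_config': {'dtype': 'fp8',
#  'free_gpu_memory_fraction': 0.8}, 'max_batch_size': 128}
```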
Signed-off-by: marinayanov <256585945+marinayanov@users.noreply.github.com>
/bot run --extra-stage "DGX_H100-2_GPUs-AutoDeploy-Post-Merge-1, DGX_H100-4_GPUs-AutoDeploy-Post-Merge-1, H100_PCIe-AutoDeploy-Post-Merge-1"

PR_Github #36039 [ run ] triggered by Bot. Commit:

PR_Github #36039 [ run ] completed with state

/bot run --extra-stage "DGX_H100-2_GPUs-AutoDeploy-Post-Merge-1, DGX_H100-4_GPUs-AutoDeploy-Post-Merge-1, H100_PCIe-AutoDeploy-Post-Merge-1"

PR_Github #36044 [ run ] triggered by Bot. Commit:

PR_Github #36044 [ run ] completed with state

… model registry accuracy tests Signed-off-by: marinayanov <256585945+marinayanov@users.noreply.github.com>
Signed-off-by: marinayanov <256585945+marinayanov@users.noreply.github.com>

/bot run --extra-stage "DGX_H100-2_GPUs-AutoDeploy-Post-Merge-1, DGX_H100-4_GPUs-AutoDeploy-Post-Merge-1, H100_PCIe-AutoDeploy-Post-Merge-1"

PR_Github #36050 [ run ] triggered by Bot. Commit:

PR_Github #36050 [ run ] completed with state

/bot run --extra-stage "DGX_H100-2_GPUs-AutoDeploy-Post-Merge-1, DGX_H100-4_GPUs-AutoDeploy-Post-Merge-1, H100_PCIe-AutoDeploy-Post-Merge-1"

PR_Github #36072 [ run ] triggered by Bot. Commit:
| // "H100_PCIe-TensorRT-Post-Merge-4": ["h100-cr", "l0_h100", 4, 5], | ||
| // "H100_PCIe-TensorRT-Post-Merge-5": ["h100-cr", "l0_h100", 5, 5], | ||
| "H100_PCIe-FMHA-Post-Merge-1": ["h100-cr", "l0_h100", 1, 1], | ||
| "H100_PCIe-AutoDeploy-Post-Merge-1": ["h100-cr", "l0_h100", 1, 1], |
Move to the new config style, like "DGX_H100-PyTorch-Post-Merge-2": ["auto:dgx-h100-x1", "l0_h100", 2, 2].
  auto_trigger: others
  orchestrator: mpi
  tests:
  - accuracy/test_llm_api_autodeploy.py::TestModelRegistryAccuracy::test_autodeploy_from_registry[meta-llama_Llama-3.1-8B-Instruct-False]
What are the execution times for these tests, respectively? It's inefficient to add more test stages if the related tests only need a few minutes or so.
Every test stage has non-negligible setup and tear-down costs.
PR_Github #36072 [ run ] completed with state
Summary by CodeRabbit
Description
Adds AutoDeploy model-registry tests to L1 (post-merge) on H100 so we get stable coverage and catch regressions. Related to #10693.
- New test class TestModelRegistryAccuracy and test test_autodeploy_from_registry, parametrized over 7 models (1-GPU: gemma-3-1b-it; 2-GPU: Llama-3.1-8B, Ministral-8B, Nemotron-Nano-8B; 4-GPU: Codestral-22B, QwQ-32B, Llama-3.3-70B). Tests run the model build + inference path; accuracy evaluation is not enabled (accuracy_check is not used in CI).
- jenkins/L0_Test.groovy: new stages H100_PCIe-AutoDeploy-Post-Merge-1 (1-GPU), DGX_H100-2_GPUs-AutoDeploy-Post-Merge-1, DGX_H100-4_GPUs-AutoDeploy-Post-Merge-1.
- stage: post_merge + backend: autodeploy blocks in l0_h100.yml and l0_dgx_h100.yml with the corresponding test IDs and GPU-count conditions.
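For orientation, the parametrization described above can be pictured roughly like the sketch below; this is not the PR's actual test code, and the model IDs and placeholder logic are illustrative only:

```python
# Illustrative-only sketch of a registry-parametrized AutoDeploy test.
import pytest

# (model_id, accuracy_check) pairs; accuracy_check stays False in CI,
# matching the description above.
MODEL_PARAMS = [
    ("meta-llama/Llama-3.1-8B-Instruct", False),
    ("Qwen/QwQ-32B", False),
]


class TestModelRegistryAccuracyExample:
    @pytest.mark.parametrize("model_id,accuracy_check", MODEL_PARAMS)
    def test_autodeploy_from_registry(self, model_id: str, accuracy_check: bool):
        # Placeholder for: load per-model config from the model registry,
        # build the AutoDeploy LLM, run the inference path, and only run
        # accuracy tasks when accuracy_check is True.
        assert isinstance(model_id, str)
        assert accuracy_check is False
```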
Test Coverage
- tests/integration/defs/accuracy/test_llm_api_autodeploy.py::TestModelRegistryAccuracy::test_autodeploy_from_registry[...] for each model param (see test-db entries).
- Stages: H100_PCIe-AutoDeploy-Post-Merge-1, DGX_H100-2_GPUs-AutoDeploy-Post-Merge-1, DGX_H100-4_GPUs-AutoDeploy-Post-Merge-1 (post-merge only).

PR Checklist
Please review the following before submitting your PR:
PR description clearly explains what and why. If using CodeRabbit's summary, please make sure it makes sense.
PR Follows TRT-LLM CODING GUIDELINES to the best of your knowledge.
Test cases are provided for new code paths (see test instructions)
Any new dependencies have been scanned for license and vulnerabilities
CODEOWNERS updated if ownership changes
Documentation updated as needed
Update tava architecture diagram if there is a significant design change in PR.
The reviewers assigned automatically/manually are appropriate for the PR.
[ v] Please check this after reviewing the above items as appropriate for this PR.
GitHub Bot Help
/bot [-h] ['run', 'kill', 'skip', 'reuse-pipeline'] ...
Provide a user-friendly way for developers to interact with a Jenkins server.
Run /bot [-h|--help] to print this help message. See details below for each supported subcommand.
Details
run [--reuse-test (optional)pipeline-id --disable-fail-fast --skip-test --stage-list "A10-PyTorch-1, xxx" --gpu-type "A30, H100_PCIe" --test-backend "pytorch, cpp" --add-multi-gpu-test --only-multi-gpu-test --disable-multi-gpu-test --post-merge --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" --detailed-log --debug (experimental)]

Launch build/test pipelines. All previously running jobs will be killed.
- --reuse-test (optional)pipeline-id (OPTIONAL) : Allow the new pipeline to reuse build artifacts and skip successful test stages from a specified pipeline or the last pipeline if no pipeline-id is indicated. If the Git commit ID has changed, this option will be always ignored. The DEFAULT behavior of the bot is to reuse build artifacts and successful test results from the last pipeline.
- --disable-reuse-test (OPTIONAL) : Explicitly prevent the pipeline from reusing build artifacts and skipping successful test stages from a previous pipeline. Ensure that all builds and tests are run regardless of previous successes.
- --disable-fail-fast (OPTIONAL) : Disable fail fast on build/tests/infra failures.
- --skip-test (OPTIONAL) : Skip all test stages, but still run build stages, package stages and sanity check stages. Note: Does NOT update GitHub check status.
- --stage-list "A10-PyTorch-1, xxx" (OPTIONAL) : Only run the specified test stages. Examples: "A10-PyTorch-1, xxx". Note: Does NOT update GitHub check status.
- --gpu-type "A30, H100_PCIe" (OPTIONAL) : Only run the test stages on the specified GPU types. Examples: "A30, H100_PCIe". Note: Does NOT update GitHub check status.
- --test-backend "pytorch, cpp" (OPTIONAL) : Skip test stages which don't match the specified backends. Only support [pytorch, cpp, tensorrt, triton]. Examples: "pytorch, cpp" (does not run test stages with tensorrt or triton backend). Note: Does NOT update GitHub pipeline status.
- --only-multi-gpu-test (OPTIONAL) : Only run the multi-GPU tests. Note: Does NOT update GitHub check status.
- --disable-multi-gpu-test (OPTIONAL) : Disable the multi-GPU tests. Note: Does NOT update GitHub check status.
- --add-multi-gpu-test (OPTIONAL) : Force run the multi-GPU tests in addition to running the L0 pre-merge pipeline.
- --post-merge (OPTIONAL) : Run the L0 post-merge pipeline instead of the ordinary L0 pre-merge pipeline.
- --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx" (OPTIONAL) : Run the ordinary L0 pre-merge pipeline and specified test stages. Examples: --extra-stage "H100_PCIe-TensorRT-Post-Merge-1, xxx".
- --detailed-log (OPTIONAL) : Enable flushing out all logs to the Jenkins console. This will significantly increase the log volume and may slow down the job.
- --debug (OPTIONAL) : Experimental feature. Enable access to the CI container for debugging purpose. Note: Specify exactly one stage in the stage-list parameter to access the appropriate container environment. Note: Does NOT update GitHub check status.

For guidance on mapping tests to stage names, see docs/source/reference/ci-overview.md and the scripts/test_to_stage_mapping.py helper.

kill

kill

Kill all running builds associated with pull request.

skip

skip --comment COMMENT

Skip testing for latest commit on pull request. --comment "Reason for skipping build/test" is required. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.

reuse-pipeline

reuse-pipeline

Reuse a previous pipeline to validate current commit. This action will also kill all currently running builds associated with the pull request. IMPORTANT NOTE: This is dangerous since lack of user care and validation can cause top of tree to break.