
[DOC] Unify CUDA naming #18797

Merged

tlopex merged 1 commit into apache:main from ruslo:pr.doc_cuda on Feb 19, 2026

Conversation

@ruslo ruslo (Contributor) commented Feb 19, 2026

Fix CUDA naming in documentation and comments

  • Cuda -> CUDA
  • cuda -> CUDA
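
The diff is a purely mechanical rename, so a simple scan is enough to check that no prose spelling was missed. Below is a minimal sketch of such a scan in Python; the glob, the regex, and the note about string literals are illustrative assumptions, not the tooling actually used for this PR.

```python
# Sketch only (not part of this PR): list remaining "cuda"/"Cuda" spellings.
import re
from pathlib import Path

# Match "cuda" or "Cuda" as a whole word; the already-correct "CUDA" is excluded.
pattern = re.compile(r"\b(?:cuda|Cuda)\b")

for path in Path(".").rglob("*.py"):
    text = path.read_text(encoding="utf-8", errors="ignore")
    for lineno, line in enumerate(text.splitlines(), start=1):
        if pattern.search(line):
            # Hits inside string literals and identifiers (e.g. tvm.device("cuda"))
            # are expected to stay lowercase; only comments and docstrings need edits.
            print(f"{path}:{lineno}: {line.strip()}")
```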

@gemini-code-assist gemini-code-assist bot (Contributor) commented

Summary of Changes

Hello @ruslo, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request focuses on enhancing code and documentation consistency by unifying the capitalization of 'CUDA' throughout the codebase. The changes ensure that all references to NVIDIA's parallel computing platform adhere to a single, standardized format, improving readability and reducing potential confusion.

Highlights

  • Consistent CUDA Naming: Standardized the capitalization of 'CUDA' across documentation, comments, and code, changing instances of 'cuda' and 'Cuda' to 'CUDA' for improved consistency.

Changelog

  • cmake/config.cmake
    • Updated comment to use 'CUDA toolkit'.
  • cmake/modules/CUDA.cmake
    • Updated comment to use 'CUDA' when available.
  • cmake/utils/FindCUDA.cmake
    • Updated comment to use 'CUDA path'.
  • docs/install/docker.rst
    • Updated documentation to refer to 'CUDA'.
  • include/tvm/s_tir/meta_schedule/postproc.h
    • Updated comment to refer to 'CUDA device'.
  • include/tvm/tir/function.h
    • Updated comment to refer to 'CUDA' attribute.
  • python/tvm/contrib/cutlass/gemm_profiler.py
    • Updated error message to 'Got bad CUDA status'.
  • python/tvm/contrib/msc/framework/tensorrt/codegen/sources.py
    • Updated error message to 'CUDA failure'.
  • python/tvm/contrib/nvcc.py
    • Updated docstrings to refer to 'CUDA code', 'CUDA architecture', and 'CUDA path'.
    • Updated error message to 'CUDA compiler'.
    • Updated utility function docstring to 'find CUDA path'.
    • Updated utility function docstring to 'get CUDA version'.
    • Updated error message to 'Cannot read CUDA version file'.
  • python/tvm/contrib/xcode.py
    • Updated docstring to refer to 'CUDA code'.
  • python/tvm/runtime/device.py
    • Updated docstring to refer to 'CUDA/Vulkan' drivers.
    • Updated docstring to refer to 'CUDA, Metal, ROCm, OpenCL, and Vulkan' devices.
    • Updated docstring to refer to 'CUDA, ROCm, and Vulkan' devices.
    • Updated docstring to refer to 'CUDA, ROCm, OpenCL, and Vulkan'.
    • Updated docstring to refer to 'CUDA/OpenCL/Vulkan' and 'CUDA, ROCm, OpenCL, and Vulkan'.
    • Updated docstring to refer to 'CUDA, ROCm, OpenCL, and Vulkan'.
    • Updated docstring to refer to 'CUDA, ROCm, and OpenCL'.
    • Updated docstring to refer to 'CUDA, ROCm, and OpenCL'.
    • Updated docstring to refer to 'CUDA, ROCm, OpenCL, and Vulkan'.
    • Updated docstring to refer to 'CUDA_VERSION for CUDA' and 'CUDA, ROCm, OpenCL, and Vulkan'.
  • python/tvm/runtime/executable.py
    • Updated docstring to refer to 'CUDA source files'.
  • python/tvm/s_tir/meta_schedule/schedule/cuda/layout_transform.py
    • Updated module docstring to 'layout_transform scheduling rule for CUDA'.
  • python/tvm/testing/plugin.py
    • Updated marker description to 'mark a test as requiring CUDA'.
  • python/tvm/testing/utils.py
    • Updated docstring to refer to 'CUDA support'.
    • Updated docstring to refer to 'CUDA support'.
  • python/tvm/topi/gpu/sort.py
    • Updated comment to 'Define and launch the CUDA kernel'.
  • src/runtime/contrib/cublas/cublas_utils.h
    • Updated error message to 'Unsupported CUDA type'.
  • src/runtime/contrib/nvshmem/memory_allocator.cc
    • Updated error message to 'allocate CUDA device memory space'.
  • src/runtime/contrib/papi/papi.cc
    • Updated comment to refer to 'CPU, CUDA and ROCm'.
  • src/runtime/cuda/cuda_module.h
    • Updated comments to 'create a CUDA module' and 'CUDA source file'.
  • src/s_tir/meta_schedule/postproc/rewrite_unbound_block.cc
    • Updated comment to 'max number of threadblocks in the CUDA device'.
  • src/s_tir/meta_schedule/schedule_rule/auto_bind.cc
    • Updated comment to 'max number of threadblocks in the CUDA device'.
  • src/target/opt/build_cuda_off.cc
    • Updated comment to 'build CUDA is switched to off'.
  • src/target/opt/build_cuda_on.cc
    • Updated comments to 'Build CUDA modules' and 'requires CUDA to be available'.
  • src/target/source/codegen_cuda.cc
    • Updated error message to 'Unsupported calling convention for CUDA codegen'.
  • src/target/source/codegen_cuda.h
    • Updated comment to 'Utility to generate CUDA code'.
  • src/target/source/literal/cuda_half_t.h
    • Updated comment to 'half_t (fp16) definition for CUDA codegen'.
  • src/target/source/literal/cuda_int8_t.h
    • Updated comment to 'Extra int8 intrisic for CUDA codegen'.
  • tests/python/contrib/test_msc/test_runner.py
    • Updated docstring to 'Test runner for tvm on CUDA'.
    • Updated docstring to 'Test runner for torch on CUDA'.
  • tests/python/relax/test_relax_operators.py
    • Updated comment to 'running on CUDA device'.
  • tests/python/tir-transform/test_tir_transform_device_kernel_launch.py
    • Updated comments to 'number of CUDA threads to use'.
  • tests/scripts/task_show_node_info.sh
    • Updated output message to 'CUDA not found'.
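
Since the PR touches only documentation, comments, and messages, the lowercase "cuda" strings used as runtime device identifiers are unchanged. The snippet below illustrates that distinction (it assumes a TVM build with the CUDA runtime enabled and is not taken from this PR):

```python
import tvm

# The prose now reads "CUDA", but the device identifier string passed to the
# API stays lowercase "cuda"; this PR changes no code behaviour.
dev = tvm.device("cuda", 0)   # same device as tvm.cuda(0)
print(dev)                    # prints something like: cuda(0)
print(dev.exist)              # True only when a CUDA device is visible
```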


@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request unifies the naming of CUDA in documentation and comments across the codebase. The changes are mostly correct and improve consistency. I found one comment whose casing is fixed here but whose content is also incorrect; since the PR missed the opportunity to fix that content, I've left a suggestion for it.

@mshr-h mshr-h (Contributor) commented Feb 19, 2026

Please consider consolidating documentation enhancements into a single PR instead of submitting multiple PRs.

@tlopex tlopex merged commit 52e4547 into apache:main Feb 19, 2026
7 checks passed
@ruslo ruslo deleted the pr.doc_cuda branch February 19, 2026 13:50