# Overview

The meta-nvidia layer provides support for NVIDIA graphics drivers and related components in Yocto Project-based distributions. It includes recipes for building the NVIDIA binary graphics driver and the GL Vendor-Neutral Dispatch library (libglvnd), along with modifications to the mesa package to ensure compatibility with NVIDIA's proprietary drivers.

# Contents

- Configuration: The layer.conf file contains the layer's configuration, including BBPATH, BBFILES, and other essential settings.
- Custom Licenses: The custom-licenses directory contains custom license files that may be required by recipes in this layer.
- Recipes:
  - libglvnd: Provides the GL Vendor-Neutral Dispatch library.
  - mesa: Contains modifications to the mesa package to ensure compatibility with NVIDIA's proprietary drivers.
  - nvidia: Contains recipes for building NVIDIA's binary graphics driver and related components.

# Key Features

- libglvnd: The GL Vendor-Neutral Dispatch library allows multiple OpenGL implementations to coexist on the same system.
- NVIDIA Binary Graphics Driver: Provides support for NVIDIA GPUs, enabling hardware-accelerated graphics, CUDA support, and other NVIDIA-specific features.
- Mesa Modifications: Ensure that the open-source Mesa graphics library can coexist with NVIDIA's proprietary drivers.

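At runtime, glvnd dispatches each GL entry point to a vendor library, and the vendor can be selected per process through glvnd environment variables. As an illustration (a hypothetical runtime check, assuming a booted image with the glxinfo utility installed alongside both Mesa and the NVIDIA driver):

```bash
# Ask glvnd's GLX dispatch to use the NVIDIA vendor library for this process
__GLX_VENDOR_LIBRARY_NAME=nvidia glxinfo | grep "OpenGL vendor"

# Or force the Mesa vendor library instead
__GLX_VENDOR_LIBRARY_NAME=mesa glxinfo | grep "OpenGL vendor"
```
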
# Usage

To use the meta-nvidia layer in your Yocto Project build:

- Clone the meta-nvidia repository to your local machine.
- Add the path to the meta-nvidia layer to your bblayers.conf file.
- Include the desired recipes in your image, or build them individually with bitbake.
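
The first two steps can be sketched as a bblayers.conf fragment; the path is illustrative and depends on where you cloned the layer:

```bash
# conf/bblayers.conf (illustrative path)
BBLAYERS += " /path/to/sources/meta-nvidia "
```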

Then add the following to your BSP, distro, or local configuration:

```bash
DISTRO_FEATURES:append = " x11 opengl"
DISTRO_FEATURES:remove = " wayland"
IMAGE_INSTALL:append = " libxshmfence cmake"
IMAGE_INSTALL:append = " packagegroup-core-buildessential"
IMAGE_INSTALL:append = " acpid"
IMAGE_INSTALL:append = " nvidia"
PREFERRED_PROVIDER_virtual/libgl = "libglvnd"
PREFERRED_PROVIDER_virtual/libgles1 = "libglvnd"
PREFERRED_PROVIDER_virtual/libgles3 = "libglvnd"
PREFERRED_PROVIDER_virtual/egl = "libglvnd"
PREFERRED_PROVIDER_virtual/libgl-native = "mesa-native"
PREFERRED_PROVIDER_virtual/nativesdk-libgl = "nativesdk-mesa-gl"
PREFERRED_PROVIDER_virtual/mesa = "libglvnd"
KERNEL_MODULE_AUTOLOAD:append = " nvidia nvidia-drm nvidia-modeset nvidia-uvm"
XSERVER = " \
    ${XSERVER_X86_BASE} \
    ${XSERVER_X86_EXT} \
    ${XSERVER_X86_MODESETTING} \
    nvidia"
```

# Testing nvidia-container-toolkit and GPU Workloads

- To test the nvidia-container-toolkit inside a container, run the following commands:

```bash
sudo ctr images pull docker.io/nvidia/cuda:12.0.0-base-ubuntu20.04

# ctr run takes the image, then a container ID, then the command to run
sudo ctr run --rm --gpus 0 --runtime io.containerd.runc.v1 --privileged docker.io/nvidia/cuda:12.0.0-base-ubuntu20.04 nvidia-smi nvidia-smi
```

- To test the nvidia-container-toolkit with k3s:

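The pod manifests below set runtimeClassName: nvidia, which assumes a RuntimeClass of that name exists in the cluster. If yours does not define one, a minimal sketch to create it (the handler name must match the NVIDIA runtime registered with containerd):

```bash
cat <<EOF | kubectl create -f -
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: nvidia
handler: nvidia
EOF
```
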
```bash
cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: gpu
spec:
  restartPolicy: Never
  runtimeClassName: nvidia
  containers:
  - name: gpu
    image: "nvidia/cuda:12.0.0-base-ubuntu20.04"
    command: [ "/bin/bash", "-c", "--" ]
    args: [ "while true; do sleep 30; done;" ]
    resources:
      limits:
        nvidia.com/gpu: 0
EOF

kubectl exec -it gpu -- nvidia-smi
```

- To run a sample GPU workload in k3s:

```bash
cat << EOF | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: cuda-vectoradd
spec:
  restartPolicy: OnFailure
  runtimeClassName: nvidia
  containers:
  - name: cuda-vectoradd
    image: "nvidia/samples:vectoradd-cuda11.2.1"
    resources:
      limits:
        nvidia.com/gpu: 0
EOF
```
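
Once the pod has run, its log should show the result of the vector addition; the vectoradd sample is expected to report a passing test when the GPU is reachable:

```bash
kubectl logs pod/cuda-vectoradd
```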

# Dependencies

This layer depends on the following layers:

    URI: git://git.yoctoproject.org/poky
    layers: meta
    branch: mickledore

    URI: git://git.yoctoproject.org/meta-openembedded
    layers: meta-oe
    branch: mickledore

    URI: git://git.yoctoproject.org/meta-virtualization
    layers: meta-virtualization
    branch: mickledore

# Compatibility

The meta-nvidia layer is only compatible with the "mickledore" release series of the Yocto Project.

# License

The recipes in this layer are licensed under various licenses, including MIT, BSD, and NVIDIA's proprietary license. Refer to the individual recipe files and the custom-licenses directory for detailed licensing information.

# Contributing

Contributions to the meta-nvidia layer are welcome. Please ensure that any changes are tested on the target hardware and do not introduce regressions.

# Disclaimer

This layer is not officially supported by NVIDIA Corporation. It is based on the deprecated meta-nvidia layer from [OakLabsInc](https://github.com/OakLabsInc/meta-nvidia). Always refer to NVIDIA's official documentation and support channels for information about NVIDIA products.