How to Install NVIDIA CUDA Toolkit on Ubuntu (The Universal Guide)
A step-by-step guide to setting up your environment for AI and Deep Learning on Linux, applicable to most modern Ubuntu versions.
Introduction
To run local LLMs or train neural networks effectively, you need the NVIDIA CUDA Toolkit. While specific version numbers change regularly (11.x, 12.x, 13.x), the installation process remains largely the same.
This guide focuses on a version-agnostic approach. We will use meta-packages and symbolic links so that your configuration remains valid even after you upgrade the toolkit in the future.
Prerequisites: Check your hardware
First, verify that your system detects an NVIDIA GPU on the PCI bus.
List PCI devices associated with NVIDIA:
xinit@localhost:~$ lspci | grep -i nvidia
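If a GPU is present, the output will contain at least one line naming an NVIDIA controller. The exact text depends on your hardware; a typical line looks something like this (your model name will differ):
01:00.0 VGA compatible controller: NVIDIA Corporation GA102 [GeForce RTX 3090] (rev a1)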
Step 1: Get the official repository installer
Since NVIDIA updates their repositories frequently, it is best to get the specific "keyring" package for your OS version directly from their site.
- Go to the official NVIDIA CUDA Downloads page.
- Select Linux -> x86_64 -> Ubuntu -> [Your Version] -> deb (network).
- Copy the first wget command provided there.
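If you are unsure which Ubuntu version you are running, check it before making your selection:
xinit@localhost:~$ lsb_release -rs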
Example command (for Ubuntu 24.04):
xinit@localhost:~$ wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2404/x86_64/cuda-keyring_1.1-1_all.deb
Step 2: Install the repository keyring
This package installs NVIDIA's GPG signing key and registers their repository with your system's package manager (apt), so that packages downloaded from NVIDIA's servers are trusted.
Install the downloaded keyring file:
xinit@localhost:~$ sudo dpkg -i cuda-keyring_*.deb
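You can confirm that a repository entry was added by listing apt's source files (the exact filename varies with your Ubuntu version):
xinit@localhost:~$ ls /etc/apt/sources.list.d/ | grep -i cuda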
Step 3: Update package lists
Refresh your package manager to see the newly available NVIDIA software.
Update the apt cache:
xinit@localhost:~$ sudo apt-get update
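To confirm that the CUDA packages are now visible, you can query the candidate version of the cuda meta-package (the version shown depends on the current state of NVIDIA's repository):
xinit@localhost:~$ apt-cache policy cuda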
Step 4: Install CUDA (using the meta-package)
Instead of installing a specific version (like cuda-toolkit-13-0), we will install the generic cuda package. This is a "meta-package" that always points to the latest stable version available in the repository.
Install the latest CUDA Toolkit and Drivers:
xinit@localhost:~$ sudo apt-get -y install cuda
Note: This command installs both the Toolkit and the proprietary drivers automatically. If you only need the toolkit (e.g., inside a Docker container, where the host supplies the driver), use cuda-toolkit instead. After a fresh driver installation, reboot so the new kernel modules are loaded.
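The toolkit-only installation looks like this:
xinit@localhost:~$ sudo apt-get -y install cuda-toolkit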
Step 5: Configure environment variables (the "evergreen" way)
By default, CUDA installs into a versioned directory (e.g., /usr/local/cuda-13.0). However, it also creates a symbolic link at /usr/local/cuda that points to the active version. Using this link in your configuration ensures you don't need to update your paths when you upgrade CUDA later.
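You can check where the link currently points; the version in the output will match whatever was just installed:
xinit@localhost:~$ ls -l /usr/local/cuda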
Add the generic CUDA bin path to your shell configuration:
xinit@localhost:~$ echo 'export PATH=/usr/local/cuda/bin${PATH:+:${PATH}}' >> ~/.bashrc
Add the generic CUDA lib64 path to your library path:
xinit@localhost:~$ echo 'export LD_LIBRARY_PATH=/usr/local/cuda/lib64${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}' >> ~/.bashrc
Apply the changes immediately:
xinit@localhost:~$ source ~/.bashrc
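To confirm that your shell now resolves CUDA binaries from the generic path, locate nvcc; it should report /usr/local/cuda/bin/nvcc:
xinit@localhost:~$ which nvcc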
Step 6: Verify the installation
Check that the driver is communicating with the GPU (if this fails right after installation, reboot first so the kernel module loads):
xinit@localhost:~$ nvidia-smi
Check that the CUDA compiler (nvcc) is reachable on your PATH and which toolkit version was installed:
xinit@localhost:~$ nvcc --version
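Note that nvcc reports the toolkit version you installed, while nvidia-smi reports the highest CUDA version the driver supports; the two numbers do not have to match exactly. As a final end-to-end check, you can compile and run a minimal CUDA program. The sketch below (a hypothetical hello.cu) launches a kernel that prints from a few GPU threads:

// hello.cu - minimal sanity check for the CUDA toolchain
#include <cstdio>

// Kernel: each GPU thread prints its own index.
__global__ void hello() {
    printf("Hello from GPU thread %d\n", threadIdx.x);
}

int main() {
    hello<<<1, 4>>>();  // launch 1 block of 4 threads
    // Wait for the kernel to finish and surface any launch errors.
    cudaError_t err = cudaDeviceSynchronize();
    if (err != cudaSuccess) {
        fprintf(stderr, "CUDA error: %s\n", cudaGetErrorString(err));
        return 1;
    }
    printf("GPU kernel ran successfully.\n");
    return 0;
}

Compile and run it:
xinit@localhost:~$ nvcc hello.cu -o hello
xinit@localhost:~$ ./hello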
Conclusion
You now have a fully functional CUDA environment. By using the cuda meta-package and the /usr/local/cuda symbolic link, your setup is cleaner and easier to maintain across upgrades. From here, you can install libraries like PyTorch or TensorFlow, which will use the GPU for acceleration.