Install TensorFlow and PyTorch with CUDA, cuDNN, and GPU Support in 3 Easy Steps
Getting Started
Latest update: 3/6/2023 - Added support for PyTorch, updated the TensorFlow version, and moved to a more recent Ubuntu version
Setting up a deep learning environment with GPU support can be a major pain. In this post, we'll walk through setting up the latest versions of Ubuntu, PyTorch, TensorFlow, and Docker with GPU support to make getting started easier than ever. Prefer video? Check out the live walk-through of this post on Gretel’s YouTube channel.
Hardware
Tested with NVIDIA Tesla T4 and RTX 3090 GPUs on GCP, AWS, and Azure. Any CUDA-compatible NVIDIA GPU should work.
Software
- Ubuntu 22.04 LTS
- Python 3.9
- Anaconda package manager
Step 1 — Install NVIDIA CUDA Drivers
These are the baseline drivers that your operating system needs to drive the GPU. NVIDIA recommends using Ubuntu’s package manager to install them, but you can also install drivers from .run files.
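A minimal sketch of the package-manager route on Ubuntu 22.04. The driver series number below is an assumption; run `ubuntu-drivers devices` to see the release recommended for your GPU.

```bash
sudo apt update
# The 525 series is an assumption -- check `ubuntu-drivers devices`
# for the recommended driver for your specific GPU.
sudo apt install -y nvidia-driver-525
```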
Now that you have installed the drivers, reboot your system.
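On Ubuntu, that’s just:

```bash
sudo reboot
```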
Log back in and validate that the drivers installed correctly by running NVIDIA’s command line utility.
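The utility is `nvidia-smi`. If the drivers loaded correctly, it prints a table with the driver version, the CUDA version, and each detected GPU.

```bash
# Should list your GPU(s) along with driver and CUDA versions.
nvidia-smi
```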
Step 2 — Set up TensorFlow and PyTorch with GPU support
Install the Anaconda package manager. Navigate to the Anaconda Distribution download page and download the x86_64 installer for Linux, or use the commands below.
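A sketch of the command-line route. The installer version in the URL is an assumption; grab the latest filename from https://www.anaconda.com/download.

```bash
# Version string is an assumption -- substitute the current installer.
wget https://repo.anaconda.com/archive/Anaconda3-2023.03-Linux-x86_64.sh
# Follow the prompts; answer "yes" when asked to initialize conda.
bash Anaconda3-2023.03-Linux-x86_64.sh
```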
Sign out and sign back in via SSH or close and re-open your terminal window. Now we’ll set up virtual environments for TensorFlow and PyTorch.
PyTorch
We’ll start with PyTorch because it’s way less complicated ;-), following PyTorch’s official instructions.
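A sketch of the conda route from PyTorch’s “get started” page. The CUDA pin is an assumption; the selector on that page generates the exact command for your setup.

```bash
# Dedicated environment, Python 3.9 per the software list above.
conda create -n pytorch python=3.9 -y
conda activate pytorch
# The pytorch-cuda=11.8 pin is an assumption -- use the selector at
# https://pytorch.org/get-started/locally/ for your configuration.
conda install pytorch torchvision torchaudio pytorch-cuda=11.8 -c pytorch -c nvidia -y
# Verify that PyTorch can see the GPU; should print True.
python -c "import torch; print(torch.cuda.is_available())"
```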
TensorFlow
Now for TensorFlow support. We’ll follow the official instructions and create a dedicated conda environment for TensorFlow to make sure the library requirements don’t conflict.
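A sketch following TensorFlow’s pip install guide at the time of writing. The CUDA toolkit, cuDNN, and TensorFlow version pins are assumptions; check https://www.tensorflow.org/install/pip for the current ones.

```bash
# Dedicated environment for TensorFlow.
conda create -n tf python=3.9 -y
conda activate tf
# CUDA toolkit via conda-forge, cuDNN via pip; pins are assumptions.
conda install -c conda-forge cudatoolkit=11.8.0 -y
pip install nvidia-cudnn-cu11==8.6.0.163
# Expose the conda-installed CUDA libraries whenever this env activates.
mkdir -p $CONDA_PREFIX/etc/conda/activate.d
echo 'CUDNN_PATH=$(dirname $(python -c "import nvidia.cudnn;print(nvidia.cudnn.__file__)"))' >> $CONDA_PREFIX/etc/conda/activate.d/env_vars.sh
echo 'export LD_LIBRARY_PATH=$CUDNN_PATH/lib:$CONDA_PREFIX/lib/:$LD_LIBRARY_PATH' >> $CONDA_PREFIX/etc/conda/activate.d/env_vars.sh
# TensorFlow itself comes from pip.
pip install tensorflow==2.12.*
```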
Sign out and sign back in via SSH or close and re-open your terminal window. Reactivate your conda session.
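Then verify that TensorFlow can see the GPU:

```bash
conda activate tf
# Should print at least one PhysicalDevice entry of type GPU.
python -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"
```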
You’ve done it!
Step 3 — Set up Docker with GPU support (optional)
Now that we have the NVIDIA drivers installed, we’ll install Docker, which will let you run GPU-accelerated containers in your environment. Both NVIDIA and TensorFlow state that containers are the easiest way to run GPU-accelerated machine learning applications.
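One way to install Docker on Ubuntu is the convenience script; see https://docs.docker.com/engine/install/ubuntu/ for the other supported routes.

```bash
# Install Docker Engine via the convenience script.
curl -fsSL https://get.docker.com | sudo sh
# Let your user run docker without sudo (takes effect on next login).
sudo usermod -aG docker $USER
```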
Sign out and sign back in via SSH or close and re-open your terminal window. Now confirm Docker is running.
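For example:

```bash
# The service should report "active (running)".
sudo systemctl status docker
# Or run a throwaway container end to end.
docker run --rm hello-world
```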
Now, we’ll enable Docker containers to run with GPU acceleration, following NVIDIA’s guide, or just run the commands below.
Set up package repository
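These commands follow NVIDIA’s container toolkit install guide as of this writing; check the guide if the repository layout has changed.

```bash
# Add NVIDIA's libnvidia-container repository and signing key.
distribution=$(. /etc/os-release; echo $ID$VERSION_ID) \
  && curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | \
       sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg \
  && curl -s -L https://nvidia.github.io/libnvidia-container/$distribution/libnvidia-container.list | \
       sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' | \
       sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list
```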
Update the package manager
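Then refresh the package index, install the toolkit, and point Docker at the NVIDIA runtime:

```bash
sudo apt-get update
sudo apt-get install -y nvidia-container-toolkit
# Register the NVIDIA runtime with Docker and restart the daemon.
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker
```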
Validate that you can run Docker containers with GPU acceleration
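Running `nvidia-smi` inside a CUDA base image is a quick end-to-end check. The image tag below is an assumption; any recent nvidia/cuda base image works.

```bash
# If this prints the same GPU table as on the host, containers have GPU access.
sudo docker run --rm --gpus all nvidia/cuda:11.8.0-base-ubuntu22.04 nvidia-smi
```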
Woot! You’re good to go running GPU-accelerated ML containers on your workstation.
Let us know if you have any questions in our Discord Community!