Install TensorFlow with CUDA, cuDNN, and GPU Support in 4 Easy Steps

Credit: shoushu via iStockPhoto

Set up a cutting-edge environment for deep learning with TensorFlow 2.4 and GPU support.

Getting Started

Latest update: 5/7/21 - Added automatic conda environment creation, no interaction required for conda installation.

Setting up a deep learning environment with TensorFlow and GPU support can be a time consuming and frustrating process. In this post, we will walk through a few simple steps to setting up TensorFlow 2.4 (the latest major version) with GPU acceleration enabled on a cloud VM running Debian or Ubuntu.


Tested with NVIDIA Tesla V100, P4, and K80 deep learning GPUs on AWS, GCP, and Azure. Any CUDA-compatible NVIDIA GPU should work.


You will need:

  • Debian-compatible OS (Ubuntu 18.04 recommended)
  • Python 3.8
  • Anaconda package manager

Step 1 — Install The Conda Package Manager
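A minimal sketch of a non-interactive install using Miniconda (a lightweight conda distribution). The installer URL and the `$HOME/miniconda` prefix are assumptions; check the conda documentation for the current installer before running this:

```shell
# Download the latest Miniconda installer for Linux x86_64
# (verify the URL against the official conda docs).
wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh -O ~/miniconda.sh

# -b runs the installer in batch mode (no interaction required),
# -p sets the install prefix.
bash ~/miniconda.sh -b -p "$HOME/miniconda"

# Make conda available in the current shell and in future logins.
eval "$("$HOME/miniconda/bin/conda" shell.bash hook)"
conda init bash
```

After this, open a new shell (or `source ~/.bashrc`) so the `conda` command is on your PATH.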

Step 2 — Create Your Conda Environment

In this step, we will set up our Conda virtual environment and let Conda handle the heavy lifting of installing Python 3.8.
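A sketch of the environment setup; the environment name `tf` is our choice here, and any name works:

```shell
# Create an isolated environment with Python 3.8;
# -y answers the confirmation prompt automatically.
conda create --name tf python=3.8 -y

# Activate the environment before installing anything else into it.
conda activate tf

# Sanity check: this should report Python 3.8.x.
python --version
```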

Step 3 — Install NVIDIA Developer Libraries

This is where many setups and installations get tricky. Each version of TensorFlow is compiled to use a specific version of the cuDNN and CUDA developer libraries.

For anyone wondering, CUDA is NVIDIA’s toolkit for GPU-accelerated code, and cuDNN is described by NVIDIA as “a GPU-accelerated library of primitives for deep neural networks.” There is no Conda install script available for TensorFlow 2.4 yet, so we will install the libraries ourselves using the TensorFlow team’s instructions.
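TensorFlow 2.4 was built against CUDA 11.0 and cuDNN 8.0, so those are the versions to install. The sketch below follows the TensorFlow team’s published Ubuntu 18.04 instructions at the time of writing; the repository URLs, key, and exact package versions come from NVIDIA’s repo and may have changed since, so cross-check them against the current TensorFlow GPU install page:

```shell
# Add NVIDIA's CUDA repository for Ubuntu 18.04 x86_64.
wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64/cuda-ubuntu1804.pin
sudo mv cuda-ubuntu1804.pin /etc/apt/preferences.d/cuda-repository-pin-600
sudo apt-key adv --fetch-keys https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64/7fa2af80.pub
sudo add-apt-repository "deb https://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64/ /"
sudo apt-get update

# Install CUDA 11.0 (this also pulls in a compatible NVIDIA driver).
sudo apt-get install -y cuda-11-0

# Install cuDNN 8.0 built against CUDA 11.0
# (the pinned version shown is an example; list available
# versions with: apt list -a libcudnn8).
sudo apt-get install -y libcudnn8=8.0.4.30-1+cuda11.0
```

Reboot the VM after the driver install so the kernel module loads, then verify with `nvidia-smi`.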

Step 4 — Confirm Your GPU Setup

TensorFlow 2.4 provides a simple way to confirm whether your GPUs are available.

Let’s see it in action…
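A short check, run inside the `tf` environment after `pip install tensorflow==2.4.0`. An empty list here means TensorFlow cannot see a GPU:

```python
import tensorflow as tf

# Enumerate the GPUs visible to TensorFlow; on a correctly
# configured machine this prints one PhysicalDevice per GPU.
gpus = tf.config.list_physical_devices('GPU')
print("GPUs available:", gpus)

# Also confirm this TensorFlow build was compiled with CUDA support.
print("Built with CUDA:", tf.test.is_built_with_cuda())
```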

Good to go! Let us know if you have any questions in the comments, on Twitter, or in our Slack channel.