Setting up your Gretel environment (1/4) - Video
Video description
In part 1 of this video series on setting up your Gretel environment, Alex walks through how to set up a VM and get started with deep learning and Gretel.
Transcription
(00:05): Hey, this is Alex from Gretel. Today, we're going to walk through a complete end-to-end example of setting up a deep learning box for training synthetic models inside your own environment. The best place to start is the Gretel docs site: just go to docs.gretel.ai and follow the instructions in the environment setup guide.
(00:23): The first thing you'll want to do is set up a machine with an NVIDIA, deep-learning-compatible GPU attached. Options include GCP, Azure, or AWS; it's your choice. Today, we'll be setting up a box I configured running a base Ubuntu 18.04 instance inside GCP. We'll go ahead and SSH in; I'll see if I can make the font here a little bigger.
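For reference, connecting to a GCP instance looks something like this; the instance name and zone below are placeholders for whatever you chose when creating the VM:

```bash
# SSH into the GCP VM (instance name and zone are examples)
gcloud compute ssh gretel-gpu --zone us-central1-a
```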
(00:50): Great, we have our base instance here. The first thing we want to do is set up Docker, which we'll use to run our deep learning containers in the background. This script will install Docker on our instance.
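The setup script in the Gretel guide handles this step; a minimal equivalent using Docker's official convenience installer would look like this:

```bash
# Download and run Docker's convenience install script
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh
```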
(01:32): Great, Docker's up and running. Next, we'll make sure our user is added to the docker group so it has the correct privileges to run containers. To confirm Docker is running, we'll run docker ps. On Ubuntu you need to run it with sudo first, so: sudo docker ps. Here, we can see that the Docker service is running correctly, even though we don't have any containers running at the moment.
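The group setup and the check are roughly the following; note that the group change only takes effect in a new login session:

```bash
# Add the current user to the docker group so docker can run without sudo
sudo usermod -aG docker $USER

# Verify the Docker daemon is responding (an empty list means no containers yet)
sudo docker ps
```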
(02:06): Ready for the next step. Since this is a little different, as we're configuring our deep learning box, we're going to skip past the API setup instructions for now and go down to the GPU plus Docker configuration. Here, we add the extra drivers necessary for Docker to be able to use the machine's GPU. We've made a simple setup script that will install TensorFlow along with the matching NVIDIA driver and CUDA versions. This process takes a while, so go ahead and have a cup of coffee and we'll come on back.
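Gretel's setup script automates this, but a sketch of the equivalent manual steps on Ubuntu 18.04 would look roughly like this (the nvidia-docker2 packaging reflects the NVIDIA container runtime of that era):

```bash
# Install the recommended NVIDIA driver for the attached GPU
sudo apt-get update
sudo apt-get install -y ubuntu-drivers-common
sudo ubuntu-drivers autoinstall

# Add NVIDIA's container runtime repository and install the Docker GPU support
distribution=$(. /etc/os-release; echo $ID$VERSION_ID)
curl -fsSL https://nvidia.github.io/nvidia-docker/gpgkey | sudo apt-key add -
curl -fsSL https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list \
  | sudo tee /etc/apt/sources.list.d/nvidia-docker.list
sudo apt-get update
sudo apt-get install -y nvidia-docker2
sudo systemctl restart docker
```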
(02:41): Looks like the driver installation completed. What we see it doing now is downloading a Docker image that verifies the ability to interact with the GPU. So the driver's been set up correctly: running the nvidia-smi command inside the container shows that it detects and can talk to the Tesla T4 GPU we have. You can also run the same command from your local instance, just nvidia-smi, and the output is the same. Now we can be confident that the drivers are set up and running correctly. Jump into the next video for end-to-end training on a local box using the Gretel SDK.
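The verification steps look something like this; the CUDA image tag is just an example, and any recent nvidia/cuda base image will work:

```bash
# Run nvidia-smi inside a container to confirm Docker can reach the GPU
sudo docker run --rm --gpus all nvidia/cuda:11.0-base nvidia-smi

# Run the same check directly on the host
nvidia-smi
```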