Multiple Images in a Single Container: Docker Setup for Nvidia CUDA

Abhijit Jadhav
May 8, 2022

Many people have the misconception that Docker supports only one image at a time, but that's not the case: we can run multiple images in a single container as well.

When would you need a multi-stage Docker container setup?

Let's consider a scenario where you have developed a deep learning Django application which extensively uses Nvidia CUDA, and now you want to Dockerize the application so you can deploy it in the cloud with CUDA enabled. One such application is the Deepfake detection Django app.

There are basically two approaches to Dockerizing this kind of application.

The first approach is to pull an Ubuntu image from Docker Hub and install Python and Nvidia CUDA into it. This approach sounds fine, but it has several problems. The main one is that you have to run many commands, and installing CUDA inside a container is never simple: you have to work through multiple dependency issues, and even then there is no guarantee the application will run as expected.
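To make the pain concrete, a rough sketch of what the first approach looks like follows. The base tag, package names, and CUDA version are illustrative assumptions; a real install usually needs NVIDIA's apt repository, key setup, and careful version matching against the host driver.

```dockerfile
# Approach 1 (illustrative sketch): plain Ubuntu, install everything by hand.
FROM ubuntu:20.04

# Python first...
RUN apt-get update && apt-get install -y python3 python3-pip

# ...then the CUDA toolkit, which typically requires adding NVIDIA's apt
# repository and matching the toolkit version to your GPU driver.
# The line below is a placeholder and often fails without that extra setup:
# RUN apt-get install -y cuda-toolkit-11-4
```

Even when this builds, the toolkit version inside the image must stay compatible with the driver on the host, which is exactly the dependency headache described above.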

The second approach is to pull the latest nvidia/cuda image from Docker Hub, connect to the container, install Python, Django, and the other dependencies inside it, and then build an image from the result. This is better than the first approach, but it still involves manual work.
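The manual work in the second approach looks roughly like this. The image tag, container name, and package names are illustrative assumptions, not a tested recipe:

```shell
# Approach 2 (illustrative sketch): start a CUDA container interactively
docker pull nvidia/cuda:latest
docker run -it --name cuda-dev nvidia/cuda:latest bash

# inside the container, install Python and Django by hand:
apt-get update && apt-get install -y python3 python3-pip
pip3 install django

# then snapshot your manual work as a new image
docker commit cuda-dev my-cuda-django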

Now let's take a look at the best approach: we add the latest nvidia/cuda image and the required Python image to the same Dockerfile, copy the Django app in, and run the Django application from there.

Most developers might think that using two images would spin up two containers for us. The answer is no; that's the magic of the Docker multi-stage setup, which spins up a single container while adding two images to the build. Looks good, doesn't it?

The problem is that this kind of setup is difficult to run via docker-compose, but you can always build the image from the Dockerfile and spin up a container yourself.

Now let's talk a bit about CUDA Dockerization. Another quirk of CUDA images is that you have to explicitly tell the container to use your GPU; otherwise it won't, and the CUDA-based application will not run. We need to pass --gpus all to give the container access to all your GPUs, and you can also set the number of GPUs to use as per your need. Check the official documentation for more.
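The --gpus flag accepts several documented forms. The commands below are illustrative (they require the NVIDIA Container Toolkit on the host, and nvidia-smi is just a convenient test command):

```shell
# give the container access to all GPUs
docker run --rm --gpus all nvidia/cuda nvidia-smi

# give the container access to two GPUs
docker run --rm --gpus 2 nvidia/cuda nvidia-smi

# give the container access to specific devices by index
docker run --rm --gpus '"device=0,1"' nvidia/cuda nvidia-smi
```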

Because of the docker-compose limitation mentioned above, I recommend the Dockerfile approach.

So let's take a look at a sample Dockerfile.

# Pull the nvidia/cuda GPU image
FROM nvidia/cuda

# Pull the python 3.6.8 image
FROM python:3.6.8

WORKDIR /app
COPY ./requirements.txt /app/requirements.txt
RUN pip install -r requirements.txt
COPY . /app

In the above Dockerfile we pull the nvidia/cuda and python:3.6.8 images, copy our Django app into the container, and install the dependencies from requirements.txt. Now we will build the image.
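Assuming the Dockerfile sits in the project root, the build might look like this. The tag djangoapplication_web matches the image name used in the run commands that follow:

```shell
# build the image from the Dockerfile in the current directory
docker build -t djangoapplication_web .
```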

After building the image, many developers will try to spin up a container using the command below.

docker run -p 8000:8000 djangoapplication_web python3 manage.py runserver 0.0.0.0:8000

The container spins up correctly, but then we hit a CUDA error saying it is unable to find the CUDA drivers. This happens because the container was never told to access the GPU of the host machine. Once we instruct it to do so, the command below works fine.

docker run --rm --gpus all -p 8000:8000 djangoapplication_web python3 manage.py runserver 0.0.0.0:8000
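To confirm the running container actually sees the GPU, you can run nvidia-smi inside it. The container ID below is a placeholder; use docker ps to find yours:

```shell
# list running containers to find the container ID
docker ps

# run nvidia-smi inside the running container; it should list your GPU
docker exec <container-id> nvidia-smi
```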

Now the Container will be able to access the GPU 😊.

You can check my step-by-step blog to dockerize a Django application.

I hope this makes life easier for many developers. Don't forget to follow me.


Abhijit Jadhav

Full Stack Java Developer and AI enthusiast who loves to build scalable applications with the latest tech stack.