How did you execute the container?
Note that you need the NVIDIA Docker runtime (the NVIDIA Container Toolkit) to be able to use GPUs inside the container. Docker 19.03 and newer support the built-in `--gpus` flag, while older Docker versions used the `--runtime=nvidia` flag or the `nvidia-docker` wrapper.
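For reference, the invocation styles look roughly like this (the image name `my-pytorch-image` is a placeholder, not from the original question):

```shell
# Docker 19.03+ with the NVIDIA Container Toolkit installed:
docker run --gpus all my-pytorch-image

# Older Docker with nvidia-docker2: select the NVIDIA runtime explicitly
docker run --runtime=nvidia my-pytorch-image

# Even older setups used the nvidia-docker wrapper command
nvidia-docker run my-pytorch-image
```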
I solved my problem and forgot to come back to this question. The issue was that it is not possible to check the availability of the GPUs while building an image, which is what I was doing in the image above.
Once I started the container and ran `torch.cuda.is_available()`, it returned `True`, so the problem was not the installed packages but the way I was checking whether they were working.
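To illustrate the distinction, here is a minimal Dockerfile sketch (the base image and the commented-out build-time check are illustrative assumptions, not taken from the original question). During `docker build` no GPU is attached to the build containers, so a CUDA check at that stage cannot succeed; the same check works at container runtime when the NVIDIA runtime exposes the host GPUs:

```dockerfile
FROM pytorch/pytorch:latest

# Build-time check: runs with no GPU attached, so it would report False
# even on a host with working drivers. This was the original mistake.
# RUN python -c "import torch; print(torch.cuda.is_available())"

# Runtime check instead: executed when the container starts with
# GPU access (e.g. `docker run --gpus all ...`).
CMD ["python", "-c", "import torch; print(torch.cuda.is_available())"]
```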