I have to deploy a model on Windows and I'm having trouble installing some dependencies. Is it possible to do this with a Docker image (no CUDA installed)?
Yes, you can deploy on Windows using Docker. This is what makes Docker so powerful. The method is the same as on any other platform: pull the image and create a container from it to run.
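As a sketch of the CPU-only workflow (the image tag and `serve.py` script are placeholders — substitute whatever image ships your model's dependencies):

```shell
# Pull a prebuilt image that already contains the dependencies
docker pull pytorch/pytorch:latest

# Run it, mounting a local model directory into the container
# (on Windows, host paths like C:\models are passed to -v like this)
docker run -it --rm -v C:\models:/models pytorch/pytorch:latest \
    python /models/serve.py
```

This sidesteps installing CUDA or the Python dependencies on the Windows host itself; everything lives inside the image.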
I doubt that you can use GPU + CUDA + cuDNN in Docker on Windows. You need the "NVIDIA Container Runtime for Docker", which allows you to use the host's GPU in your container. But on Windows, Linux runs in a VM, which in turn has no access to the GPU. See also: https://github.com/NVIDIA/nvidia-docker/wiki/Frequently-Asked-Questions#is-microsoft-windows-supported.
Running deep learning tasks in a VM can also be very slow, because the Advanced Vector Extensions (AVX) instructions are not available in the virtual host (at least on my Mac), which slows down training considerably.
So even if you have the NVIDIA driver installed on Windows, you still won't be able to use the GPU in Docker? If I understand correctly, the instructions would differ because a Linux host would have to communicate with a Windows driver.
No, you are not able to access the GPU from inside the container on Windows. This is because the container runs inside a stripped-down Linux host, which itself runs inside a virtual machine, and the virtual machine doesn't have access to the GPU. You may experiment with VMware vSphere as a docker-machine driver and try using its passthrough mechanism.
The situation is different when you run Linux on bare metal. There you can access the GPU from inside the container when you start the instance with the nvidia-docker runtime.
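On such a bare-metal Linux host, the usual smoke test looks roughly like this (assuming the NVIDIA driver and the NVIDIA Container Runtime are already installed):

```shell
# Launch a CUDA base image with the NVIDIA runtime and
# run nvidia-smi inside it to confirm the GPU is visible
docker run --runtime=nvidia --rm nvidia/cuda nvidia-smi
```

If `nvidia-smi` prints the GPU table from inside the container, CUDA-enabled containers will work on that host.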
I didn't know things were so complicated on Windows. But thanks for the learning experience.