CUDA dependencies make the Docker image bloated

According to Docker Hub, the Docker image size increased from 2.76 GB (1.12.1) to 5.29 GB (1.13.0).
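For reference, one way to reproduce this comparison locally (the tags below are my assumption based on the default pytorch/pytorch runtime images; adjust to whichever tags you compared — also note Docker Hub reports compressed sizes, while docker images shows uncompressed on-disk sizes, so the absolute numbers differ but the jump should still be visible):

# Pull both runtime images (tags assumed)
docker pull pytorch/pytorch:1.12.1-cuda11.3-cudnn8-runtime
docker pull pytorch/pytorch:1.13.0-cuda11.6-cudnn8-runtime

# Compare the uncompressed on-disk sizes
docker images pytorch/pytorch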

I also checked the package sizes installed by pip; the torch wheel itself did not change much (776.3 MB to 890.1 MB). However, pip now installs these oversized extra packages:

nvidia-cublas-cu11==11.10.3.66 (317.1 MB)
nvidia-cuda-nvrtc-cu11==11.7.99
nvidia-cuda-runtime-cu11==11.7.99
nvidia-cudnn-cu11==8.5.0.96 (557.1 MB)
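To see how much disk space each of these actually occupies after installation, here is a quick check (the site-packages path assumes the Python 3.7 venv created below; the nvidia-* packages all install under a shared nvidia/ directory):

# Report the on-disk size of each installed NVIDIA component
du -sh venv/lib/python3.7/site-packages/nvidia/*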

So what do these do? And could we remove these dependencies?

These “oversized” packages were previously shipped directly inside the PyTorch wheel, so the overall size shouldn’t change. Which exact container are you using that increased in size?
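One way to verify this yourself (the library names in the grep are my assumption about what the wheels bundle):

# Download both wheels without their dependencies
pip download torch==1.12.1 --no-deps -d /tmp/wheels
pip download torch==1.13.0 --no-deps -d /tmp/wheels

# The 1.12.1 wheel should show CUDA libraries bundled under torch/lib,
# while the 1.13.0 PyPI wheel instead declares the nvidia-* packages
# as dependencies
unzip -l /tmp/wheels/torch-1.12.1*.whl | grep -E 'libcublas|libcudnn'
unzip -l /tmp/wheels/torch-1.13.0*.whl | grep -E 'libcublas|libcudnn'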

All of those packages were installed directly from PyPI.

These are the official PyTorch Docker images; the size comparison is shown in the attached image.

Thanks for the follow-up. Where did you find these packages? They are used in the CUDA 11.7 wheels, while your containers are using 11.3 and 11.6. pip list | grep nvidia also doesn’t return anything in the 11.6 container, so are you mixing up a few containers?
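If it helps narrow this down, inside each container you can check which CUDA build torch was compiled against and whether the nvidia-* packages are present at all:

# Print the CUDA version this torch build was compiled against
python -c "import torch; print(torch.version.cuda)"

# Show any nvidia-* packages pulled in from PyPI
pip list | grep nvidia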

Sorry for such a confusing message.

There are actually two separate problems:

  1. The Docker image size increased a lot from PyTorch 1.12 to 1.13 (as shown in the image above).
  2. The Python environment with torch 1.13 installed is much larger than the one with 1.12. I ran pip install in separate virtual environments to compare the two versions. To reproduce the package list above:
# Create a virtual environment
python3.7 -m venv venv
source venv/bin/activate

# This should install torch 1.13 (the latest release)
pip install torch torchvision

# List the installed NVIDIA packages
pip freeze | grep nvidia
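For a direct comparison, the same steps in a second venv pinned to the older release look roughly like this (the torchvision pin is my assumption for the matching 1.12 release):

# Repeat the steps in a second venv pinned to 1.12
python3.7 -m venv venv112
source venv112/bin/activate
pip install torch==1.12.1 torchvision==0.13.1

# Compare the total on-disk size of both environments
du -sh venv venv112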

I listed these packages because I think they are the root cause of PyTorch becoming so large.
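If the goal is simply to avoid pulling these packages from PyPI, one workaround I’m aware of is installing from PyTorch’s own wheel index, since those wheels bundle the CUDA libraries directly instead of declaring the nvidia-* packages as dependencies (the cu116 index URL below is the one documented for the 1.13 release; adjust it to your CUDA version):

# Install CUDA 11.6 wheels from PyTorch's index instead of PyPI
pip install torch torchvision --extra-index-url https://download.pytorch.org/whl/cu116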