Large CPU file size

The CPU-only version of PyTorch used to be < 200 MB (see e.g. heroku - Where do I get a CPU-only version of PyTorch? - Stack Overflow). Now it is > 1 GB.

This seems to be mainly due to the inclusion of dnnl.lib and mkldnn.lib (on Windows). Are these needed for inference? It seems I can just delete them.

Does anyone have good ideas for further reducing the file size of the torch library?


The CPU conda binaries still seem to be < 200 MB for the stable release as well as the nightlies. Could you post a log showing where the > 1 GB CPU binary is installed?

The wheel file is only 184.2 MB, but unpacked (i.e. the folder in site-packages) it is 936 MB. But perhaps this is to be expected?
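To see which files account for the unpacked size, a quick stdlib-only sketch like the following could help (the site-packages path below is a placeholder assumption, not from this thread — adjust it for your environment):

```python
import os
from pathlib import Path

def largest_files(root, top=10):
    """Return the `top` largest files under `root` as (size_bytes, path) pairs."""
    sizes = []
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            p = Path(dirpath) / name
            try:
                sizes.append((p.stat().st_size, str(p)))
            except OSError:
                pass  # skip broken symlinks, files removed mid-walk, etc.
    return sorted(sizes, reverse=True)[:top]

if __name__ == "__main__":
    # Placeholder path -- point this at your actual torch install directory.
    for size, path in largest_files("venv/Lib/site-packages/torch"):
        print(f"{size / 2**20:8.1f} MB  {path}")
```

On a Windows CPU wheel this should show whether the .lib/.dll files under torch/lib dominate the total.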

(venv) λ pip install torch==1.7.1+cpu -f
Looking in links:
Collecting torch==1.7.1+cpu
  Using cached (184.2 MB)
Collecting numpy
  Using cached numpy-1.20.1-cp39-cp39-win_amd64.whl (13.7 MB)
Collecting typing-extensions
  Using cached typing_extensions- (22 kB)
Installing collected packages: numpy, typing-extensions, torch
Successfully installed numpy-1.20.1 torch-1.7.1+cpu typing-extensions-

Ah OK, thanks for the explanation.
It seems you’ve already narrowed it down to the CPU acceleration libs. I’m not completely sure if dnnl and mkldnn are packed into the pip wheels, but you might give it a try.