The CPU-only version of PyTorch used to be < 200 MB (see e.g. "heroku - Where do I get a CPU-only version of PyTorch?" on Stack Overflow). Now it is > 1 GB.
This seems to be mainly due to the inclusion of dnnl.lib and mkldnn.lib (on Windows). Are these needed for inference? It seems I can just delete them.
Anyone have any good ideas for further reducing the file size of the torch library?
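In case it helps anyone investigating the same thing, here is a small sketch (not from the original post) for listing which files dominate an installed package's size; the site-packages path in the comment is just an example and will differ per environment:

```python
from pathlib import Path

def largest_files(root, n=10):
    """Return the n largest files under root as (size_in_bytes, relative_path) pairs."""
    root = Path(root)
    sized = [(p.stat().st_size, p.relative_to(root))
             for p in root.rglob("*") if p.is_file()]
    return sorted(sized, key=lambda t: t[0], reverse=True)[:n]

# Example (hypothetical path, adjust for your environment):
# for size, name in largest_files("venv/Lib/site-packages/torch"):
#     print(f"{size / 2**20:8.1f} MB  {name}")
```

Running this against the torch folder should show directly whether dnnl.lib and mkldnn.lib account for most of the space.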
Thanks!
ptrblck
February 24, 2021, 5:10am
2
The CPU conda binaries seem to still be <200 MB for the stable release as well as the nightlies. Could you show a log of where the >1 GB CPU binary is installed?
The wheel file is only 184.2 MB, but unpacked (i.e. the folder in site-packages) it is 936 MB. Perhaps this is to be expected?
(venv) λ pip install torch==1.7.1+cpu -f https://download.pytorch.org/whl/torch_stable.html
Looking in links: https://download.pytorch.org/whl/torch_stable.html
Collecting torch==1.7.1+cpu
Using cached https://download.pytorch.org/whl/cpu/torch-1.7.1%2Bcpu-cp39-cp39-win_amd64.whl (184.2 MB)
Collecting numpy
Using cached numpy-1.20.1-cp39-cp39-win_amd64.whl (13.7 MB)
Collecting typing-extensions
Using cached typing_extensions-3.7.4.3-py3-none-any.whl (22 kB)
Installing collected packages: numpy, typing-extensions, torch
Successfully installed numpy-1.20.1 torch-1.7.1+cpu typing-extensions-3.7.4.3
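To measure the unpacked size the way the 936 MB figure above was obtained, a helper along these lines could be used (a sketch, not from the thread; the path is an example):

```python
from pathlib import Path

def dir_size_mb(root):
    """Total size in MB of all regular files under root, recursively."""
    return sum(p.stat().st_size for p in Path(root).rglob("*") if p.is_file()) / 2**20

# Example (hypothetical path, adjust for your environment):
# print(f"{dir_size_mb('venv/Lib/site-packages/torch'):.0f} MB")
```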
ptrblck
February 24, 2021, 7:28am
4
Ah OK, thanks for the explanation.
It seems you’ve already narrowed it down to the CPU acceleration libs. I’m not completely sure if dnnl and mkldnn are packed into the pip wheels, but you might give deleting them a try.