cpp_extension fails to load

I’m trying to use the cpp_extension functionality, but I can’t seem to get it to work.

I have a JIT script from a friend, who can run it on Linux, but whenever I run it on Windows, it fails. The error itself is fairly clear: it can’t find the compiler. Unfortunately, I have no idea which compiler it is looking for or how to help it find it.

The script fails on the load statement below:

import torch
from torch.utils.cpp_extension import load
from torch.nn.modules.utils import _triple

# load the PyTorch extension
cudnn_convolution = load(
    name="cudnn_convolution",
    sources=["src/reversible_network/cudnn_convolution.cpp"],
    verbose=True,
)

It fails with the following message:

C:\Users\Tue\PycharmProjects\Epitopes_segmentation\venv\Scripts\python.exe "C:\Program Files\JetBrains\PyCharm Community Edition 2020.1\plugins\python-ce\helpers\pydev\pydevd.py" --multiproc --qt-support=auto --client --port 51413 --file C:/Users/Tue/PycharmProjects/DistogramPredictor/run.py
pydev debugger: process 10460 is connecting
Connected to pydev debugger (build 201.6668.115)
Using C:\Users\Tue\AppData\Local\Temp\torch_extensions as PyTorch extensions root...
C:\Users\Tue\PycharmProjects\Epitopes_segmentation\venv\lib\site-packages\torch\utils\cpp_extension.py:237: UserWarning: Error checking compiler version for cl: [WinError 2] The system cannot find the file specified
  warnings.warn('Error checking compiler version for {}: {}'.format(compiler, error))
Emitting ninja build file C:\Users\Tue\AppData\Local\Temp\torch_extensions\cudnn_convolution\build.ninja...
INFO: Could not find files for the given pattern(s).
Traceback (most recent call last):
  File "<frozen importlib._bootstrap>", line 991, in _find_and_load
  File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 671, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 783, in exec_module
  File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
  File "C:\Users\Tue\PycharmProjects\DistogramPredictor\src\main.py", line 12, in <module>
    from src.reversible_network.hypernet import HyperNet
  File "C:\Users\Tue\PycharmProjects\DistogramPredictor\src\reversible_network\hypernet.py", line 6, in <module>
    from src.reversible_network.doublesym import DoubleSymLayer3D
  File "C:\Users\Tue\PycharmProjects\DistogramPredictor\src\reversible_network\doublesym.py", line 5, in <module>
    from src.reversible_network.grad import conv3d_weight
  File "C:\Users\Tue\PycharmProjects\DistogramPredictor\src\reversible_network\grad.py", line 6, in <module>
    cudnn_convolution = load(name="cudnn_convolution", sources=["src/reversible_network/cudnn_convolution.cpp"], verbose=True)
  File "C:\Users\Tue\PycharmProjects\Epitopes_segmentation\venv\lib\site-packages\torch\utils\cpp_extension.py", line 888, in load
    return _jit_compile(
  File "C:\Users\Tue\PycharmProjects\Epitopes_segmentation\venv\lib\site-packages\torch\utils\cpp_extension.py", line 1077, in _jit_compile
  File "C:\Users\Tue\PycharmProjects\Epitopes_segmentation\venv\lib\site-packages\torch\utils\cpp_extension.py", line 1171, in _write_ninja_file_and_build_library
  File "C:\Users\Tue\PycharmProjects\Epitopes_segmentation\venv\lib\site-packages\torch\utils\cpp_extension.py", line 1509, in _write_ninja_file_to_build_library
  File "C:\Users\Tue\PycharmProjects\Epitopes_segmentation\venv\lib\site-packages\torch\utils\cpp_extension.py", line 1615, in _write_ninja_file
    cl_paths = subprocess.check_output(['where',
  File "C:\Users\Tue\AppData\Local\Programs\Python\Python38\lib\subprocess.py", line 411, in check_output
    return run(*popenargs, stdout=PIPE, timeout=timeout, check=True,
  File "C:\Users\Tue\AppData\Local\Programs\Python\Python38\lib\subprocess.py", line 512, in run
    raise CalledProcessError(retcode, process.args,
subprocess.CalledProcessError: Command '['where', 'cl']' returned non-zero exit status 1.
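For anyone hitting the same traceback: the failing command is literally `where cl`, which looks up `cl.exe` on PATH. A quick way to reproduce that check from Python, using `shutil.which` (a cross-platform equivalent of `where`):

```python
import shutil

def find_compiler(name="cl"):
    """Return the full path to `name` if it is on PATH (what `where` does), else None."""
    return shutil.which(name)

# Prints the path to cl.exe on a correctly configured Windows setup,
# or None when the compiler is not on PATH (the situation in this traceback).
print(find_compiler("cl"))
```

If this prints `None` outside a Visual Studio developer prompt, `torch.utils.cpp_extension` will fail in the same way.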

I’m running Python 3.8, and pip list gives the following:

(venv) C:\Users\Tue\PycharmProjects\DistogramPredictor>pip list
Package             Version
------------------- ------------
beautifulsoup4      4.9.1
biopython           1.77
blis                0.4.1
bs4                 0.0.1
catalogue           1.0.0
certifi             2020.6.20
chardet             3.0.4
click               7.1.2
cssselect           1.1.0
cycler              0.10.0
cymem               2.0.3
dill                0.3.2
en-core-web-sm      2.3.0
filelock            3.0.12
fire                0.3.1
fr-core-news-sm     2.3.0
future              0.18.2
h5py                2.10.0
hnswlib             0.3.4
idna                2.10
joblib              0.16.0
Keras               2.4.3
Keras-Applications  1.0.8
Keras-Preprocessing 1.1.2
kiwisolver          1.2.0
lmdb                0.98
lxml                4.5.2
matplotlib          3.2.1
msgpack             1.0.0
murmurhash          1.0.2
ninja               1.10.0.post1
nltk                3.5
numpy               1.18.5
packaging           20.4
pandas              1.0.4
parse               1.15.0
Pillow              7.1.2
pip                 20.2.2
plac                1.1.3
preshed             3.0.2
pyarrow             0.17.1
pybind11            2.5.0
pyparsing           2.4.7
pyquery             1.4.1
python-dateutil     2.8.1
pytz                2020.1
pywebcopy           6.3.0
PyYAML              5.3.1
regex               2020.6.8
requests            2.24.0
sacremoses          0.0.43
scipy               1.4.1
seaborn             0.10.1
sentencepiece       0.1.91
seqeval             0.0.12
setuptools          47.2.0
six                 1.15.0
soupsieve           2.0.1
spacy               2.3.0
srsly               1.0.2
termcolor           1.1.0
thinc               7.4.1
tokenizers          0.8.0rc4
torch               1.5.0
torchtext           0.6.0
torchvision         0.6.0
tqdm                4.47.0
urllib3             1.25.9
w3lib               1.22.0
wasabi              0.7.0
wget                3.2

Can anyone point me towards what I’m missing/doing wrong?


It looks for cl.exe from MSVC. I would take a look at CONTRIBUTING.md to see how compiling PyTorch itself is set up; you won’t need PyTorch’s dependencies, but the compiler setup is probably similar to what is discussed there.

I think your suggestion would be a good place to start if I were more familiar with building things in C++. Unfortunately, I’m on pretty unfamiliar ground here, so I really have no idea what I’m looking for in there.

Do you think I need to install MSVC myself in order to get cl.exe, or should it already be installed when I run pip install ninja? In the latter case it would be more of a linking problem.

Happily, I’m not a Windows user, but from my past experience I would expect you to need to install some version of MSVC yourself.

I have now installed MSVC and tried adding cl.exe to the PATH environment variable, but none of it seems to matter.
It is still not found when I run the code above. Does anyone know how exactly to give cpp_extension the path to cl.exe?
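For concreteness, this is roughly how I prepended the compiler directory in Python before the load call (the MSVC path below is specific to my install, so treat it as a placeholder):

```python
import os

# Placeholder path -- the Visual Studio edition/version/toolset numbers differ per install.
CL_DIR = r"C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.26.28801\bin\Hostx64\x64"

# Prepend so that `where cl` (and shutil.which) can find cl.exe in this process.
os.environ["PATH"] = CL_DIR + os.pathsep + os.environ["PATH"]
```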

@peterjc123, do you know?

Find the x64 developer command prompt in the Start Menu and do everything in that shell.


Thank you for the reply, but what you are suggesting doesn’t sound reasonable (if I’m understanding you correctly). You want me to abandon my PyCharm IDE and my Python virtual environment, set everything up again through a command prompt, and run my code through that.

What you suggest might work if all I needed was to run this code once, but I would have no decent way of debugging my code with this approach, and any future development would be a nightmare.

Isn’t there a way to just put cl.exe on the PATH environment variable and have cpp_extension find it? Or some other workaround?

Actually, we recently introduced some changes that activate the environment automatically. See https://github.com/pytorch/pytorch/pull/38862. Unluckily, it didn’t make it into v1.5.x. However, you can reuse the same logic to do that from Python.
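A minimal sketch of that logic, assuming a standard vcvars64.bat location (the path is a guess and will differ per Visual Studio edition/version): run the batch file, capture the environment it exports, and merge it into `os.environ` before calling `load`:

```python
import os
import subprocess

# Assumed location -- adjust for your Visual Studio edition/version.
VCVARS = r"C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Auxiliary\Build\vcvars64.bat"

def parse_env_block(text):
    """Parse `set`-style KEY=VALUE lines into a dict, skipping anything else."""
    env = {}
    for line in text.splitlines():
        key, sep, value = line.partition("=")
        if sep and key:
            env[key] = value
    return env

def activate_msvc(vcvars=VCVARS):
    """Run vcvars64.bat, dump the environment it sets with `set`, and merge it into os.environ."""
    output = subprocess.check_output(f'"{vcvars}" && set', shell=True, text=True)
    os.environ.update(parse_env_block(output))
```

Calling `activate_msvc()` once at the top of the script should then let cpp_extension find cl without leaving PyCharm.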


Thank you, that looks promising!
Any idea how long it will take before it makes it into torch? (Weeks? Months? Longer?)
I tried to just implement the changes in that file, but unfortunately there are quite a few more, branching into other files as well.
For now I will put it on the shelf and just work with the built-in functions (which will be about 10 times slower, but since my code isn’t ready for large-scale tests anyway, that should be fine for now).
But I’m really hoping this will fix it down the road.