How to get torch working in python when building from source

Apologies for the potentially obvious question here. I am familiar with C++ but not Python.

I am able to build and install the PyTorch 1.3.0 branch successfully using the standard procedure (cmake .., make -j, sudo make install). I can link against PyTorch from my C++ inference code and everything works as expected. However, I am having trouble getting PyTorch to work in Python.

import torch

shows the error ImportError: No module named torch
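One way to narrow down this kind of ImportError is to check which interpreter is actually running and whether torch is visible on its sys.path (a build installed for one Python installation will not be found by another). A minimal diagnostic sketch:

```python
import importlib.util
import sys

# Show which Python interpreter is running -- the interpreter that
# PyTorch was built and installed for may be a different one.
print(sys.executable)

# find_spec returns None if the module cannot be found on sys.path.
spec = importlib.util.find_spec("torch")
if spec is None:
    print("torch is not visible on sys.path:", sys.path)
else:
    print("torch would be imported from:", spec.origin)
```

If spec is None here, the build was likely installed into a different interpreter's site-packages than the one you are running.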

When I build PyTorch from source, the CMake option BUILD_PYTHON is set to ON, so the Python-related components should be built and installed.

I also looked into the pre-built binaries, but unfortunately they require CUDA 10.1, while my environment is currently on CUDA 10.0.

How can I get torch working in python when building from source?

If you want to build PyTorch from source, follow these instructions.

Note that you don’t need a local CUDA installation for the binaries, as they will install the cudatoolkit from conda directly. You would just need the NVIDIA driver on your machine.

Have you tried installing it via

Just do
pip install torch-1.2.0-cp36-cp36m-manylinux1_x86_64.whl
1.2 is the torch version, cp36 means Python 3.6, and manylinux1_x86_64 means it is built for the x86_64 architecture, so I would go to the first link, choose the right Python version and architecture, and then install it.
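For reference, you can compare the tags in a wheel filename against your own interpreter with nothing but the standard library. This is only a rough string check for illustration, not how pip actually resolves compatibility tags:

```python
import platform
import sys

# Construct the CPython tag a matching wheel filename should contain,
# e.g. "cp36" for Python 3.6.
cp_tag = "cp{}{}".format(sys.version_info.major, sys.version_info.minor)

# platform.machine() reports the architecture, e.g. "x86_64".
arch = platform.machine()


def wheel_matches(filename, tag=cp_tag, machine=arch):
    """Rough check that a wheel filename matches this interpreter."""
    return tag in filename and machine in filename


print(cp_tag, arch)
print(wheel_matches("torch-1.2.0-cp36-cp36m-manylinux1_x86_64.whl"))
```

On a CPython 3.6 / x86_64 machine the example wheel above would match; on anything else the check correctly fails.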

I was able to make some progress with python setup.py install as indicated in the link. It installs some torch-related Python files locally, and import torch now shows a different kind of error, but that seems to be caused by missing packages and I will try to resolve it.

My problem is that I need a deb equivalent for Python so that I can distribute the build to my other machines. Based on the verbose output of running python setup.py install, the first line shows

Building wheel torch-1.3.0a0+de394b6

After some googling, wheel files seem to be the Python equivalent of deb files, which is good news, but the problem is that no wheel file is generated. Am I missing something?

Thanks for the pointer. Do you know what the difference is between




what does the extra u mean?

I figured out how to get the pre-built binaries for CUDA 10.0.

sudo pip install torch==1.3.0+cu100 torchvision==0.4.1+cu100 -f

This will work for me. Thanks for all the help.

The difference is the way strings are represented: I believe the extra "u" signifies UCS4 encoding, as opposed to UCS2. The two builds are incompatible at the binary level (i.e., the "u" is part of the ABI tag).
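On Python 2 you can check which build you have via sys.maxunicode; a minimal sketch (note that from Python 3.3 on, PEP 393 made the internal string representation flexible, so this distinction, and the "mu" ABI tag, only matters for Python 2 wheels):

```python
import sys

# On a "narrow" (UCS2) build sys.maxunicode is 0xFFFF; on a "wide"
# (UCS4) build it is 0x10FFFF.  Python >= 3.3 always reports 0x10FFFF.
if sys.maxunicode == 0x10FFFF:
    print("wide / UCS4 build (matches the 'mu'-tagged wheels on Python 2)")
else:
    print("narrow / UCS2 build (matches the 'm'-tagged wheels on Python 2)")
```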
