PyTorch worked fine when I installed it with either Anaconda or pip.
However, when I tried to build PyTorch from source (python setup.py install), I got the following error messages. (I am using Python 2.7.13, gcc 4.8.5, and CUDA 8.0.)
gcc -pthread -fno-strict-aliasing -g -O2 -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -I/homes/3/kimjook/pytorch -I/homes/3/kimjook/pytorch/torch/csrc -I/homes/3/kimjook/pytorch/torch/lib/tmp_install/include -I/homes/3/kimjook/pytorch/torch/lib/tmp_install/include/TH -I/homes/3/kimjook/pytorch/torch/lib/tmp_install/include/THPP -I/homes/3/kimjook/pytorch/torch/lib/tmp_install/include/THNN -I/u/drspeech/opt/anaconda2/lib/python2.7/site-packages/numpy/core/include -I/usr/local/cuda/include -I/homes/3/kimjook/pytorch/torch/lib/tmp_install/include/THCUNN -I/u/drspeech/opt/anaconda2/include/python2.7 -c torch/csrc/cuda/Storage.cpp -o build/temp.linux-x86_64-2.7/torch/csrc/cuda/Storage.o -D_THP_CORE -std=c++11 -Wno-write-strings -DWITH_NUMPY -DWITH_CUDA -DCUDA_LIB_PATH=/usr/local/cuda/lib64
cc1plus: warning: command line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++ [enabled by default]
torch/csrc/nn/THNN_generic.cpp: In function ‘void torch::nn::Abs_updateOutput(thpp::Tensor*, thpp::Tensor*)’:
torch/csrc/nn/THNN_generic.cpp:45:14: error: ‘THCudaHalfTensor’ was not declared in this scope
(THCudaHalfTensor*)input->cdata(),
^
In file included from /homes/3/kimjook/pytorch/torch/lib/tmp_install/include/THPP/tensors/../storages/THCStorage.hpp:11:0,
from /homes/3/kimjook/pytorch/torch/lib/tmp_install/include/THPP/tensors/THCTensor.hpp:22,
from torch/csrc/DynamicTypes.cpp:11:
/homes/3/kimjook/pytorch/torch/lib/tmp_install/include/THPP/tensors/../storages/../TraitsCuda.hpp:9:20: error: ‘half’ was not declared in this scope
struct type_traits<half> {
^
It seems that the compiler cannot find the half-precision data structures and classes. How would I resolve this problem? (I don't need half-precision operations, so if possible I would be fine with disabling them.)
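In case it is relevant, here is a small standalone check I put together (my own sketch, not part of the PyTorch build scripts) to see whether the CUDA fp16 header is visible to the host compiler, since as far as I can tell the half type used in TraitsCuda.hpp is supposed to come from cuda_fp16.h:

// half_check.cpp -- my own quick sanity check, not from the PyTorch sources.
// The `half` type referenced by TraitsCuda.hpp comes from the CUDA toolkit's
// cuda_fp16.h, so this only verifies that the header is found and that `half`
// is declared when compiled as plain host C++.
#include <cuda_fp16.h>   // declares __half and (unless CUDA_NO_HALF is defined) the half typedef

int main() {
    half h;              // fails to compile if cuda_fp16.h is missing or half is not declared
    (void)h;
    return 0;
}

I would compile it with g++ -std=c++11 -I/usr/local/cuda/include half_check.cpp, i.e. the same include path that appears in the failing command above. If this also fails, that would point at the CUDA headers rather than the PyTorch build itself.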
Thanks!