Thanks for your reply.
While compiling CUDA-supported PyTorch from source, I ran into some problems.
First, following the latest guide in the GitHub README, I hit this error:
…/torch/csrc/jit/codegen/cuda/manager.cpp:75:58: required from here
/usr/include/c++/5/bits/stl_construct.h:75:7: error: use of deleted function ‘constexpr std::pair<_T1, _T2>::pair(const std::pair<_T1, _T2>&) [with _T1 = std::unique_ptr<torch::jit::fuser::cuda::KernelArgsReq>; _T2 = torch::jit::fuser::cuda::CudaKernel]’
{ ::new(static_cast<void*>(__p)) _T1(std::forward<_Args>(__args)...); }
I modified /torch/csrc/jit/codegen/cuda/manager.cpp:75
from
kernel_cache_.insert({kernel_id, CudaKernelCache()});
to
kernel_cache_.insert(std::make_pair(std::move(kernel_id), CudaKernelCache()));
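For reference, here is a minimal sketch of why the compiler reports the pair copy constructor as deleted (the type names below are hypothetical stand-ins, not PyTorch's real ones): a pair holding a std::unique_ptr is move-only, so any code path that ends up copying it fails to compile, while building the pair explicitly and moving the key only requires move constructors.

#include <memory>
#include <string>
#include <unordered_map>
#include <utility>
#include <vector>

struct ArgsReq {};   // stand-in for KernelArgsReq
struct Kernel {};    // stand-in for CudaKernel

struct KernelCache { // stand-in for CudaKernelCache
  // move-only: the unique_ptr element deletes the pair's copy constructor
  std::vector<std::pair<std::unique_ptr<ArgsReq>, Kernel>> entries;
};

int main() {
  std::unordered_map<std::string, KernelCache> cache;
  std::string id = "kernel_0";

  // The workaround above: construct the pair explicitly and move the key,
  // so only move constructors are needed.
  cache.insert(std::make_pair(std::move(id), KernelCache()));

  // emplace is another option that avoids the braced-init overload entirely:
  cache.emplace("kernel_1", KernelCache());
}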
After this modification the compilation proceeds, but a new error appears:
…/test/cpp/jit/test_gpu.cpp:1633:47: error: converting to ‘std::tuple<at::Tensor (*)(const at::Tensor&), torch::jit::fuser::UnaryOpType, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >’ from initializer list would use explicit constructor ‘constexpr std::tuple< >::tuple(_UElements&& ...) [with _UElements = {at::Tensor (&)(const at::Tensor&), torch::jit::fuser::UnaryOpType, const char (&)[4]}; = void; _Elements = {at::Tensor (*)(const at::Tensor&), torch::jit::fuser::UnaryOpType, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >}]’
{at::trunc, UnaryOpType::Trunc, "trunc"}};
I noticed std::__cxx11::basic_string in the error, so my guess is that CUDA-supported PyTorch should be compiled with C++11 instead of C++14. I am about to verify this guess.
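In case it helps, here is a standalone reproduction of that second error (again with hypothetical stand-ins for at::Tensor, at::trunc and the fuser's UnaryOpType) that should make the guess easy to test: compile it once with g++ -std=c++11 and once with -std=c++14 and see whether the diagnostic changes.

#include <string>
#include <tuple>
#include <vector>

struct Tensor {};
Tensor trunc(const Tensor&) { return {}; }   // stand-in for at::trunc
enum class UnaryOpType { Trunc };

int main() {
  // Same shape as the failing initializer in test_gpu.cpp: the string literal
  // has to convert to std::string inside a braced tuple, and the diagnostic
  // says the tuple constructor selected for that is explicit, which
  // copy-list-initialization is not allowed to call.
  std::vector<std::tuple<Tensor (*)(const Tensor&), UnaryOpType, std::string>> ops{
      {trunc, UnaryOpType::Trunc, "trunc"}};

  // Spelling the tuple out avoids relying on that constructor at all:
  ops.push_back(std::make_tuple(&trunc, UnaryOpType::Trunc, std::string("trunc")));
  return 0;
}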
Do you have any suggestions?