Compile JIT from source to support CUDA

Could you tell me what the bug is in the following code?

pytorch/torch/csrc/jit/codegen/cuda/manager.cpp

namespace {

// Builds the argument-shape requirement for a pointwise kernel:
// record the rank of each tensor input, or -1 for non-tensor inputs.
std::unique_ptr<KernelArgsReq> makePWKernelSupport(
    const at::ArrayRef<IValue>& inputs) {
  auto req_ptr = std::make_unique<KernelArgsReq>();
  for (const auto& input : inputs) {
    req_ptr->dims_.push_back(input.isTensor() ? input.toTensor().dim() : -1);
  }
  return req_ptr;
}

} // namespace

The C++ compiler told me there is an error resulting from copying a unique_ptr.
I am not a C++ expert, so I cannot tell where the code is wrong or how to fix it.

What kind of error did you see?
Could you post the complete error message, please?

Thanks for your reply.
While compiling CUDA-enabled PyTorch from source, I encountered some problems.

First, following the latest guide in the GitHub README, this error occurred:

…/torch/csrc/jit/codegen/cuda/manager.cpp:75:58: required from here
/usr/include/c++/5/bits/stl_construct.h:75:7: error: use of deleted function ‘constexpr std::pair<_T1, _T2>::pair(const std::pair<_T1, _T2>&) [with _T1 = std::unique_ptr<torch::jit::fuser::cuda::KernelArgsReq>; _T2 = torch::jit::fuser::cuda::CudaKernel]’
{ ::new(static_cast<void*>(__p)) _T1(std::forward<_Args>(__args)...); }

I modified /torch/csrc/jit/codegen/cuda/manager.cpp:75
from

kernel_cache_.insert({kernel_id, CudaKernelCache()});

to

kernel_cache_.insert(std::make_pair(std::move(kernel_id), CudaKernelCache()));
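The fix works because the map's value type holds a std::unique_ptr, so the pair is movable but not copyable: the braced-list form ended up requiring the deleted pair copy constructor on this toolchain, while std::make_pair with std::move produces an rvalue pair that is moved into the container. A minimal sketch of the pattern, using hypothetical stand-in types rather than the real PyTorch classes:

#include <map>
#include <memory>
#include <string>
#include <utility>

// Movable but non-copyable entry, standing in for a cache value that
// owns a std::unique_ptr (like the KernelArgsReq in the error above).
struct KernelEntry {
  std::unique_ptr<int> req;
};

int main() {
  std::map<std::string, KernelEntry> kernel_cache;
  std::string kernel_id = "kernel_0";

  // Any path that copies the pair fails to compile, because copying
  // KernelEntry would copy the unique_ptr:
  //   KernelEntry e;
  //   kernel_cache.insert({kernel_id, e});  // error: deleted copy ctor
  // Moving an rvalue pair in needs only moves:
  kernel_cache.insert(std::make_pair(std::move(kernel_id), KernelEntry{}));

  // std::map::emplace constructs the pair in place, avoiding the copy too:
  kernel_cache.emplace("kernel_1", KernelEntry{std::make_unique<int>(0)});
  return 0;
}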

After the modification, the compilation can continue, but a new error comes out:

…/test/cpp/jit/test_gpu.cpp:1633:47: error: converting to ‘std::tuple<at::Tensor (*)(const at::Tensor&), torch::jit::fuser::UnaryOpType, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >’ from initializer list would use explicit constructor ‘constexpr std::tuple< <template-parameter-1-1> >::tuple(_UElements&& ...) [with _UElements = {at::Tensor (&)(const at::Tensor&), torch::jit::fuser::UnaryOpType, const char (&)[4]}; <template-parameter-2-2> = void; _Elements = {at::Tensor (*)(const at::Tensor&), torch::jit::fuser::UnaryOpType, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >}]’
{at::trunc, UnaryOpType::Trunc, "trunc"}};

I noticed std::__cxx11::basic_string, so I guess CUDA-enabled PyTorch should be compiled with C++11 instead of C++14. I am about to verify my guess.

Do you have any suggestions?

You would need a C++14 compiler as given in the build instructions:

If you are installing from source, you will need a C++14 compiler.
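As a quick sanity check (a generic snippet, not from the PyTorch docs), you can print the standard your compiler actually targets:

#include <iostream>

int main() {
  // 201402L means C++14; 201103L means C++11.
  std::cout << __cplusplus << "\n";
  return 0;
}

Compile it with the same compiler and -std flag your build uses; if it prints 201103L or less, the toolchain is not in C++14 mode.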

I am already using C++14, but the error still comes out.
So what do you think about the second error?

…/test/cpp/jit/test_gpu.cpp:1633:47: error: converting to ‘std::tuple<at::Tensor (*)(const at::Tensor&), torch::jit::fuser::UnaryOpType, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > >’ from initializer list would use explicit constructor ‘constexpr std::tuple< <template-parameter-1-1> >::tuple(_UElements&& ...) [with _UElements = {at::Tensor (&)(const at::Tensor&), torch::jit::fuser::UnaryOpType, const char (&)[4]}; <template-parameter-2-2> = void; _Elements = {at::Tensor (*)(const at::Tensor&), torch::jit::fuser::UnaryOpType, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >}]’
{at::trunc, UnaryOpType::Trunc, "trunc"}};

Is there a solution?
Thank you!

I have now gotten through the compilation, and CUDA is detected correctly.
For the second error, I commented out all the related tuple initializations in /home/lim/Desktop/GithubRepo/pytorch/test/cpp/jit/test_gpu.cpp.
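For reference, that second error looks like the pre-C++17 behavior of the libstdc++ shipped with GCC 5: std::tuple's element-wise constructor was unconditionally explicit, so copy-list-initializing a tuple whose element needs an implicit conversion (here const char[] to std::string) from a braced list is rejected. If you want to keep the tests, building each entry with std::make_tuple instead of a braced list should also compile. A minimal sketch with hypothetical stand-ins (abs_op, trunc_op, a local UnaryOpType) for the real at:: functions:

#include <string>
#include <tuple>
#include <vector>

enum class UnaryOpType { Abs, Trunc };
int abs_op(int x) { return x < 0 ? -x : x; }   // stand-in for at::abs
int trunc_op(int x) { return x; }              // stand-in for at::trunc

int main() {
  using Entry = std::tuple<int (*)(int), UnaryOpType, std::string>;

  // Rejected by GCC 5's libstdc++, because the tuple constructor is
  // explicit and "trunc" needs a const char* -> std::string conversion:
  //   std::vector<Entry> ops = {{trunc_op, UnaryOpType::Trunc, "trunc"}};
  // Spelling out each tuple avoids the implicit list-conversion:
  std::vector<Entry> ops = {
      std::make_tuple(abs_op, UnaryOpType::Abs, std::string("abs")),
      std::make_tuple(trunc_op, UnaryOpType::Trunc, std::string("trunc"))};
  (void)ops;  // silence unused-variable warnings
  return 0;
}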