Compile CUDA kernels with a newer PyTorch version

Hi,

I am planning to use pierre-wilmot/NeuralTextureSynthesis on GitHub (code for "Stable and Controllable Neural Texture Synthesis and Style Transfer Using Histogram Losses").

However, the code appears to have been written for older PyTorch and CUDA versions. I am currently using PyTorch 1.6.0, Python 3.7.9, Windows 10, and CUDA 10.1/10.2, and I get the following errors:

Style Transfer
Traceback (most recent call last):
  File "D:\Anaconda3\envs\tf2\lib\site-packages\torch\utils\cpp_extension.py", line 1515, in _run_ninja_build
    env=env)
  File "D:\Anaconda3\envs\tf2\lib\subprocess.py", line 512, in run
    output=stdout, stderr=stderr)
subprocess.CalledProcessError: Command '['ninja', '-v']' returned non-zero exit status 1.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "main.py", line 17, in <module>
    cpp = torch.utils.cpp_extension.load(name="histogram_cpp", sources=["histogram.cpp", "histogram.cu"])
  File "D:\Anaconda3\envs\tf2\lib\site-packages\torch\utils\cpp_extension.py", line 974, in load
    keep_intermediates=keep_intermediates)
  File "D:\Anaconda3\envs\tf2\lib\site-packages\torch\utils\cpp_extension.py", line 1179, in _jit_compile
    with_cuda=with_cuda)
  File "D:\Anaconda3\envs\tf2\lib\site-packages\torch\utils\cpp_extension.py", line 1279, in _write_ninja_file_and_build_library
    error_prefix="Error building extension '{}'".format(name))
  File "D:\Anaconda3\envs\tf2\lib\site-packages\torch\utils\cpp_extension.py", line 1529, in _run_ninja_build
    raise RuntimeError(message)
RuntimeError: Error building extension 'histogram_cpp': [1/3] cl /showIncludes -DTORCH_EXTENSION_NAME=histogram_cpp -DTORCH_API_INCLUDE_EXTENSION_H -ID:\Anaconda3\envs\tf2\lib\site-packages\torch\include -ID:\Anaconda3\envs\tf2\lib\sit
e-packages\torch\include\torch\csrc\api\include -ID:\Anaconda3\envs\tf2\lib\site-packages\torch\include\TH -ID:\Anaconda3\envs\tf2\lib\site-packages\torch\include\THC "-IC:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.1\include"
 -ID:\Anaconda3\envs\tf2\Include -D_GLIBCXX_USE_CXX11_ABI=0 /MD /wd4819 /wd4251 /wd4244 /wd4267 /wd4275 /wd4018 /wd4190 /EHsc -c .../histogram.cpp /Fohistogram.o
Microsoft (R) C/C++ Optimizing Compiler Version 19.28.29913 for x64
Copyright (C) Microsoft Corporation.  All rights reserved.

[2/3] C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.1\bin\nvcc -Xcudafe --diag_suppress=dll_interface_conflict_dllexport_assumed -Xcudafe --diag_suppress=dll_interface_conflict_none_assumed -Xcudafe --diag_suppress=field_witho
ut_dll_interface -Xcudafe --diag_suppress=base_class_has_different_dll_interface -Xcompiler /EHsc -Xcompiler /wd4190 -Xcompiler /wd4018 -Xcompiler /wd4275 -Xcompiler /wd4267 -Xcompiler /wd4244 -Xcompiler /wd4251 -Xcompiler /wd4819 -Xco
mpiler /MD -DTORCH_EXTENSION_NAME=histogram_cpp -DTORCH_API_INCLUDE_EXTENSION_H -ID:\Anaconda3\envs\tf2\lib\site-packages\torch\include -ID:\Anaconda3\envs\tf2\lib\site-packages\torch\include\torch\csrc\api\include -ID:\Anaconda3\envs\
tf2\lib\site-packages\torch\include\TH -ID:\Anaconda3\envs\tf2\lib\site-packages\torch\include\THC "-IC:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.1\include" -ID:\Anaconda3\envs\tf2\Include -D_GLIBCXX_USE_CXX11_ABI=0 -D__CUDA
_NO_HALF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr -gencode=arch=compute_75,code=sm_75 -c .../histogram.cu -o histogram.cuda.o
FAILED: histogram.cuda.o
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.1\bin\nvcc -Xcudafe --diag_suppress=dll_interface_conflict_dllexport_assumed -Xcudafe --diag_suppress=dll_interface_conflict_none_assumed -Xcudafe --diag_suppress=field_without_dll
_interface -Xcudafe --diag_suppress=base_class_has_different_dll_interface -Xcompiler /EHsc -Xcompiler /wd4190 -Xcompiler /wd4018 -Xcompiler /wd4275 -Xcompiler /wd4267 -Xcompiler /wd4244 -Xcompiler /wd4251 -Xcompiler /wd4819 -Xcompiler
 /MD -DTORCH_EXTENSION_NAME=histogram_cpp -DTORCH_API_INCLUDE_EXTENSION_H -ID:\Anaconda3\envs\tf2\lib\site-packages\torch\include -ID:\Anaconda3\envs\tf2\lib\site-packages\torch\include\torch\csrc\api\include -ID:\Anaconda3\envs\tf2\li
b\site-packages\torch\include\TH -ID:\Anaconda3\envs\tf2\lib\site-packages\torch\include\THC "-IC:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.1\include" -ID:\Anaconda3\envs\tf2\Include -D_GLIBCXX_USE_CXX11_ABI=0 -D__CUDA_NO_HA
LF_OPERATORS__ -D__CUDA_NO_HALF_CONVERSIONS__ -D__CUDA_NO_HALF2_OPERATORS__ --expt-relaxed-constexpr -gencode=arch=compute_75,code=sm_75 -c .../histogram.cu -o histogram.cuda.o
D:/Anaconda3/envs/tf2/lib/site-packages/torch/include\c10/util/ThreadLocalDebugInfo.h(12): warning: modifier is ignored on an enum specifier

D:/Anaconda3/envs/tf2/lib/site-packages/torch/include\ATen/record_function.h(18): warning: modifier is ignored on an enum specifier

D:/Anaconda3/envs/tf2/lib/site-packages/torch/include\torch/csrc/jit/api/module.h(483): error: a member with an in-class initializer must be const

D:/Anaconda3/envs/tf2/lib/site-packages/torch/include\torch/csrc/jit/api/module.h(496): error: a member with an in-class initializer must be const

D:/Anaconda3/envs/tf2/lib/site-packages/torch/include\torch/csrc/jit/api/module.h(510): error: a member with an in-class initializer must be const

D:/Anaconda3/envs/tf2/lib/site-packages/torch/include\torch/csrc/jit/api/module.h(523): error: a member with an in-class initializer must be const

D:/Anaconda3/envs/tf2/lib/site-packages/torch/include\torch/csrc/autograd/profiler.h(97): warning: modifier is ignored on an enum specifier

D:/Anaconda3/envs/tf2/lib/site-packages/torch/include\torch/csrc/autograd/profiler.h(126): warning: modifier is ignored on an enum specifier

4 errors detected in the compilation of "C:/Users/.../AppData/Local/Temp/tmpxft_00004a38_00000000-10_histogram.cpp1.ii".
histogram.cu
ninja: build stopped: subcommand failed.

Does anyone have an idea how to fix this? I am also wondering whether there is a general way to adapt CUDA kernels written for older versions to newer ones. Thanks in advance.

The error seems to be related to this issue, so you might want to install a newer PyTorch release or, if 1.6.0 is specifically needed, cherry-pick the fix into your branch and rebuild PyTorch from source.
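
If you upgrade, a quick sanity check is to compare the installed version against the release that is believed to contain the fix before attempting the extension build again. The sketch below is a minimal, hypothetical helper (the `1.7.0` threshold is an assumption; check the linked issue for the exact release that fixed it). Note that a plain string comparison is unreliable for versions, so the helper parses the numeric components:

```python
def parse_version(v: str) -> tuple:
    """Parse a version string like '1.6.0' (or '1.6.0+cu101') into a tuple of ints."""
    core = v.split("+")[0]  # drop local build tags such as '+cu101'
    return tuple(int(part) for part in core.split(".")[:3])

def has_fix(installed: str, fixed_in: str = "1.7.0") -> bool:
    """True if the installed version is at or past the release assumed to contain the fix."""
    return parse_version(installed) >= parse_version(fixed_in)

# At runtime you would check the actual installed build, e.g.:
# import torch
# if not has_fix(torch.__version__):
#     print("This PyTorch build likely predates the fix; consider upgrading.")
print(has_fix("1.6.0"))        # predates the assumed fix
print(has_fix("1.7.1+cu110"))  # at or past it
```

Tuple comparison handles multi-digit components (e.g. `1.10.0` vs `1.9.0`) correctly, which a lexical string comparison would not.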