Porting simple 0.4 CUDA/C extensions to 1.0

I’ve tried to find a proper migration guide for porting PyTorch C extensions to PyTorch 1.0+, but the only things I find are confused forum posts and comprehensive C++ extension guides, hundreds of pages on writing advanced ATen integrations… :slight_smile:

I have a super-simple setup: just a few very simple CUDA kernels I need to call from PyTorch, with no interest in bells-and-whistles C++ integration with ATen, autograd, or anything else. They worked with ten lines of C and ten lines of Python in 0.4 :frowning: So I was hoping someone else has been through this migration and could suggest a good procedure.

Basically, what I’ve done so far is a build script in Python using torch’s `create_extension`:

```python
import os.path as osp  # abs_path is defined earlier in my script

from torch.utils.ffi import create_extension

ffi = create_extension(
    'nc',
    headers=['custom_c_stuff.h'],
    sources=['custom_c_stuff.c'],
    define_macros=[('WITH_CUDA', None)],
    relative_to=__file__,
    with_cuda=True,
    extra_objects=['custom_cuda_stuff.so'],
    include_dirs=[osp.join(abs_path, '.')]
)
ffi.build()
```
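From what I’ve pieced together so far, `torch.utils.ffi` is gone in 1.0 and the replacement is `torch.utils.cpp_extension`, so I guess the build script would become a `setup.py` along these lines (the file names are just my guesses, mirroring my files above):

```python
# Sketch of what I think the 1.0 build script looks like, using
# torch.utils.cpp_extension instead of the removed torch.utils.ffi.
from setuptools import setup
from torch.utils.cpp_extension import CUDAExtension, BuildExtension

setup(
    name='nc',
    ext_modules=[
        # The old .c/.h pair becomes one .cpp; the .cu file with the
        # kernels gets picked up and compiled by nvcc automatically.
        CUDAExtension('nc', ['custom_c_stuff.cpp', 'custom_cuda_stuff.cu']),
    ],
    # BuildExtension handles the mixed C++/CUDA compilation flags
    cmdclass={'build_ext': BuildExtension},
)
```

There also seems to be a `torch.utils.cpp_extension.load(name='nc', sources=[...])` variant that JIT-compiles at import time, which might be even simpler for a setup like mine. Built this way, is that roughly right?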

and in the custom_c_stuff.c/.h files I do something like this:

```c
#include <THC/THC.h>

int custom_c_function(THCudaTensor *in_tensor, THCudaTensor *out_tensor)
{
    call_custom_cuda_stuff_function();
    return 1;  /* old THC/FFI convention: nonzero signals success */
}
```
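And on the C side, my understanding is that the `.c` file would become a `.cpp` roughly like this: `torch::Tensor` instead of `THCudaTensor*`, `torch/extension.h` instead of `THC/THC.h`, and a pybind11 module instead of the FFI glue (the launcher signature here is just my guess, assuming float tensors):

```cpp
// Sketch of the 1.0-style binding, assuming the same external CUDA launcher.
#include <torch/extension.h>

// Declared here, defined in the .cu file; the float* signature is my guess.
void call_custom_cuda_stuff_function(float *in, float *out);

void custom_c_function(torch::Tensor in_tensor, torch::Tensor out_tensor) {
    // Raw device pointers for the kernel launcher, assuming float tensors.
    call_custom_cuda_stuff_function(in_tensor.data_ptr<float>(),
                                    out_tensor.data_ptr<float>());
}

// pybind11 exposes the function to Python; TORCH_EXTENSION_NAME is set
// by the build so the module name matches the one given in setup.py.
PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) {
    m.def("custom_c_function", &custom_c_function,
          "forwards tensors to the custom CUDA kernel");
}
```

Does that look like the right shape, or am I missing something?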

What do you think — is this easily portable to 1.0? :slight_smile: Very grateful for hints :slight_smile: