How to include native functions from ATen in a C++/CUDA extension?

I’ve been looking through the ATen library and want to be able to call functions like cudnn_convolution or cudnn_convolution_backward_weight, but they don’t appear to be included with

#include <ATen/ATen.h>

or

#include <torch/torch.h>

These functions (and others) are in the native directory. Another one I’m trying to use is from GridSampler.cu, which uses enumerators defined elsewhere in the native package.

What do I include in my *.cpp file to access these functions? Are these functions exposed at all? I am writing a custom layer as described in the mixed C++/CUDA extension tutorial.
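
For context, here is a stripped-down sketch of the kind of extension I’m writing (the function and module names are just placeholders of mine, not anything from the tutorial):

// Built as a C++ extension, following the mixed C++/CUDA tutorial.
// (The tutorial I followed uses <torch/torch.h>; newer releases use
// <torch/extension.h>, which also pulls in ATen and pybind11.)
#include <torch/extension.h>

// Placeholder forward pass. This is where I would like to call the native
// functions (cudnn_convolution, the grid sampler kernels, etc.), but the
// headers above don't seem to declare them.
at::Tensor my_layer_forward(at::Tensor input, at::Tensor weight) {
  return input.clone();  // does nothing useful yet
}

PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) {
  m.def("forward", &my_layer_forward, "placeholder forward for my custom layer");
}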

You shouldn’t use the native functions directly. Please use the at::xxx or tensor.xxx bindings. For example, at::grid_sampler(input, grid).
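
In an extension that would look roughly like the minimal sketch below. Note this assumes a recent signature where grid_sampler also takes interpolation and padding mode integers; the exact argument list differs across versions, and newer releases add an align_corners flag as well:

#include <ATen/ATen.h>

// Call the public at:: binding rather than the native kernel directly.
// interpolation_mode: 0 == bilinear, 1 == nearest
// padding_mode:       0 == zeros, 1 == border, 2 == reflection
at::Tensor sample(const at::Tensor& input, const at::Tensor& grid) {
  return at::grid_sampler(input, grid,
                          /*interpolation_mode=*/0,
                          /*padding_mode=*/0);
}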

Hmm, is that enum not exposed? If so, you should submit a feature request.

Thanks for the reply! Could you please link me to the file/doc that lays out the exposed functions? For example, the only thing I could find in the GitHub repo for grid sampling is what I’ve linked to. Where can I see that at::grid_sampler(input, grid) is part of the library? Or at::grid_sampler_backward(), for that matter?

Also, is there a conv_2d call I can make, along with something like conv_2d_grad_inputs(), conv_2d_grad_weights(), and conv_2d_grad_bias() for the backward pass?
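
To make concrete what I’m after, the sketch below is the kind of thing I’d hope to write for the forward pass (at::conv2d does seem to exist, if I’m reading the generated Functions.h correctly); it’s the separate gradient functions that I can’t find anywhere:

#include <ATen/ATen.h>

// Forward convolution through the public at:: binding (my current best guess);
// the per-input/weight/bias backward functions are what I'm still missing.
at::Tensor conv_forward(const at::Tensor& input,
                        const at::Tensor& weight,
                        const at::Tensor& bias) {
  return at::conv2d(input, weight, bias,
                    /*stride=*/{1, 1},
                    /*padding=*/{0, 0},
                    /*dilation=*/{1, 1},
                    /*groups=*/1);
}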

PyTorch is a great library, but it can be quite frustrating to search through the source code with all the code gen and abstraction. I’ve spent days just to get to the point I’m at now. Is it going to be cleaned up (or at least better documented) as part of the 1.0 release?