Migrating PyTorch 0.3 code that uses C++ API to PyTorch 1.0

Hi. I have some old PyTorch (0.3.1) code with these calls inside the forward or backward of a subclass of torch.autograd.Function:

info = torch._C._cudnn_convolution_full_forward(
    input, weight, bias, output, padding, stride, 
    dilation, groups, benchmark, deterministic)

torch._C._cudnn_convolution_backward_data(
    output, grad_input, weight, info, benchmark, deterministic)

torch._C._cudnn_convolution_backward_filter(
    output, input, grad_weight, info, benchmark, deterministic)

torch._C._cudnn_convolution_backward_bias(
    output, grad_bias, info)

These internal functions can be found, for example, in the historical torch/csrc/cudnn/Conv.cpp. In PyTorch 1.0, the closest functions I can find are in aten/src/ATen/native/cudnn/Conv.cpp (for example, the raw_* functions, although raw_cudnn_convolution_backward_bias_out is missing).

But first, these functions do not seem to be part of the _C module (i.e., torch/csrc), so I am not sure whether I can call them from Python at all. Secondly, the signatures have changed: in PyTorch 1.0 the info argument seems to have disappeared, and I am not sure whether it is safe to simply drop it. So how should I do the migration? Thank you.
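For what it is worth, the only replacement I have pieced together so far avoids the private cuDNN bindings entirely and goes through the public API. This is just a sketch under my own assumptions: it reuses the variables from the snippet above, it allocates a new output tensor instead of writing into the pre-allocated one, and it relies on the global cuDNN flags instead of the per-call benchmark/deterministic arguments.

import torch
import torch.nn.functional as F

# The global cuDNN switches stand in for the old per-call flags.
torch.backends.cudnn.benchmark = benchmark
torch.backends.cudnn.deterministic = deterministic

# Old: torch._C._cudnn_convolution_full_forward(...) wrote into a
# pre-allocated `output`; here a new tensor is returned instead.
output = F.conv2d(input, weight, bias,
                  stride=stride, padding=padding,
                  dilation=dilation, groups=groups)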

(The reason I am looking for the raw_* functions is that the old code does a kind of “in-place” convolution, writing results into pre-allocated tensors.) For example, in backward I have the following; a sketch of how I imagine this might look in 1.0 follows after the snippet.

# for PyTorch 0.3
def backward(ctx, grad_output):
    input, weight, bias = ctx.saved_tensors
    info = ctx.info
    grad_output = grad_output.data.contiguous()
    grad_bias = bias.clone()
    torch._C._cudnn_convolution_backward_bias(grad_output, grad_bias, info)
    # do BP for other variables and return

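And here is the direction I have been considering for the backward in 1.0, again only as a sketch built on the public helpers in torch.nn.grad rather than the raw_* entry points: it returns new gradient tensors instead of filling pre-allocated ones, and it assumes stride/padding/dilation/groups were stashed on ctx in forward (those attribute names are my own).

import torch
import torch.nn.grad as nn_grad

# Sketch only: a public-API backward for PyTorch 1.0, not the raw_* cuDNN calls.
def backward(ctx, grad_output):
    input, weight, bias = ctx.saved_tensors
    # I assume forward stashed these; the old code packed them into `info`.
    stride, padding = ctx.stride, ctx.padding
    dilation, groups = ctx.dilation, ctx.groups
    grad_output = grad_output.contiguous()
    # Bias gradient: sum over batch and spatial dims instead of
    # _cudnn_convolution_backward_bias writing into grad_bias.
    grad_bias = grad_output.sum(dim=(0, 2, 3))
    # Data and filter gradients via the public helpers (new tensors, not in-place).
    grad_input = nn_grad.conv2d_input(input.shape, weight, grad_output,
                                      stride=stride, padding=padding,
                                      dilation=dilation, groups=groups)
    grad_weight = nn_grad.conv2d_weight(input, weight.shape, grad_output,
                                        stride=stride, padding=padding,
                                        dilation=dilation, groups=groups)
    return grad_input, grad_weight, grad_bias

This avoids the info argument entirely, but it obviously does not reproduce the in-place writes the old code relied on, which is why I would still like to know how to reach the raw_* functions from Python.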
