Errors when computing second-order gradients on GPU

Here is a piece of test code:

import math
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.autograd import Variable
from torch.nn.parameter import Parameter

class ConvNet(nn.Module):
    def __init__(self):
        super(ConvNet, self).__init__()
        self.conv1_weight = Parameter(torch.randn(10,1,3,3))
        self.conv1_bias = Parameter(torch.randn(10))

    def forward(self, x):
        out = x
        out = F.conv2d(out, self.conv1_weight, bias=self.conv1_bias)
        return out

if __name__ == '__main__':
    from torch.autograd import grad
    model = ConvNet()
    model.cuda()
    x = Variable(torch.randn(1,1,28,28)).cuda()
    print(x.size())
    y = model(x)
    print(y.size())
    loss = torch.mean(y.pow(2))
    g = grad(loss, model.parameters(), create_graph=True, retain_graph=True)[0]  # first-order grad w.r.t. conv1_weight
    gg = grad(g[0,0,0,0], model.parameters(), retain_graph=True)[0]  # second-order grad -- raises the error below

If the following conditions are all met, the second-order gradient cannot be computed:

  1. you compute a second-order grad (the first-order grad works fine)
  2. the network contains a convolution (fc layers work normally)
  3. the computation runs on GPU (on CPU, everything is fine)
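For reference on point 2, here is the same double-backward pattern with a fully-connected layer instead of a convolution; this variant runs without error (the sizes are just illustrative):

```python
import torch
import torch.nn as nn
from torch.autograd import grad

# Same double-backward pattern as above, but with nn.Linear in place of
# the convolution -- this one completes without error.
model = nn.Linear(784, 10)
x = torch.randn(1, 784)
loss = model(x).pow(2).mean()
g = grad(loss, model.parameters(), create_graph=True)[0]      # d(loss)/d(weight), shape (10, 784)
gg = grad(g[0, 0], model.parameters(), retain_graph=True)[0]  # second-order grad, no error
print(gg.shape)
```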

And the error is:

RuntimeError: CUDNN_STATUS_NOT_SUPPORTED. This error may appear if you passed in a non-contiguous input.

Adding .contiguous() does not fix anything.
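One thing I plan to try, purely as a guess that the failure lies in cuDNN's double-backward path rather than in autograd itself, is disabling cuDNN so the convolution takes the native code path (sketch; the device guard is only there so the snippet also runs on CPU):

```python
import torch
import torch.nn.functional as F
from torch.nn.parameter import Parameter
from torch.autograd import grad

# Hypothetical workaround, not a fix: force F.conv2d onto PyTorch's
# native convolution kernels instead of cuDNN. This assumes the error
# originates in cuDNN's double-backward path.
torch.backends.cudnn.enabled = False

device = 'cuda' if torch.cuda.is_available() else 'cpu'
weight = Parameter(torch.randn(10, 1, 3, 3, device=device))
bias = Parameter(torch.randn(10, device=device))
x = torch.randn(1, 1, 28, 28, device=device)

loss = F.conv2d(x, weight, bias=bias).pow(2).mean()
g = grad(loss, [weight, bias], create_graph=True)[0]             # first-order grad
gg = grad(g[0, 0, 0, 0], [weight, bias], retain_graph=True)[0]   # second-order grad
print(gg.shape)
```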

I would be very grateful if someone could offer some help!