My own autograd Function's backward always sets grad to None

Hi,

For testing, I wrote an extremely simple loss function that sums all output values to produce the loss. Here is the code (the input is a 1x1x10x10 tensor for this simple test):

import torch
from torch.autograd import Function


class SomeLoss(Function):

    def forward(self, input):
        total_loss = torch.DoubleTensor(1)
        total_loss[0] = torch.sum(input)
        self.save_for_backward(input)
        return total_loss

    def backward(self, grad_output):
        input, = self.saved_tensors
        grad_input = torch.DoubleTensor(input.size())
        grad_input = grad_input.zero_() + 1
        return grad_input

However, when I use the torch.autograd.gradcheck function, it always returns False:

my_criterion = SomeLoss()
valid = torch.autograd.gradcheck(my_criterion, (input,))  # valid is always False

Then I dug into the gradcheck code and found that in the function get_analytical_jacobian, after calling output.backward(grad_output), input.grad is always None instead of a tensor of all ones, which sets the Jacobian to zero afterwards. Does anyone know what's wrong with my code?

You forgot the chain rule:

grad_input = grad_output * local_gradient
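
Applied to the original backward, the fix could look roughly like this (a minimal sketch in the same legacy Function style as above; note that grad_output here is a 1-element tensor, so grad_output[0] extracts the scalar):

def backward(self, grad_output):
    input, = self.saved_tensors
    # local gradient of sum() w.r.t. each input element is 1
    local_gradient = torch.DoubleTensor(input.size()).zero_() + 1
    # chain rule: scale the local gradient by the incoming gradient
    return local_gradient * grad_output[0]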

A better implementation looks like this:

class SomeLoss(Function):

    def forward(self, input):
        # only the shape is needed in backward, so there is no need to save the tensor
        self.input_size = input.size()
        return input.new([input.sum()])

    def backward(self, grad_output):
        # d(sum)/d(input) is 1 everywhere, so the chain rule reduces to
        # broadcasting grad_output to the input's shape
        return grad_output.expand(self.input_size)
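
For reference, a quick check along these lines should now pass (a sketch assuming the legacy Variable-based gradcheck usage shown earlier in this thread):

import torch
from torch.autograd import Variable

# gradcheck compares the analytical Jacobian against a numerical one,
# so use double precision and an input that requires gradients
input = Variable(torch.randn(1, 1, 10, 10).double(), requires_grad=True)
valid = torch.autograd.gradcheck(SomeLoss(), (input,))
print(valid)  # expected: True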

There are many examples of Function in _functions, and there is a good tutorial in the docs.
