Failure to pass gradient check but the operation is reportedly correct

Problem

I am trying to understand how the gradient reversal layer works.

I found a successful implementation reported here. The author claimed that by using the gradient reversal operation, they could reproduce the results in the research papers. The operation is essentially the following (minor differences may arise from API changes):

import torch


class GradientReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        # identity in the forward pass
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # reverse (negate) the gradient in the backward pass
        return grad_output.neg()

However, when I tried to verify that this operation is correct with

from torch import autograd

test_input = autograd.Variable(torch.randn((3, 4), dtype=torch.float64), requires_grad=True)
flag = torch.autograd.gradcheck(GradientReverse.apply, test_input, eps=1e-3)

an error occurred:

RuntimeError: Jacobian mismatch for output 0 with respect to input 0,

I am not sure what mistake I made when doing the gradient check.

Could someone help me? Thank you in advance!

Hi,

Well, gradcheck checks whether the gradients computed in your backward match the derivative of your forward function.
In your case, your backward definitely does not compute the gradient of the function that the forward corresponds to, so I would expect gradcheck to fail.

PS: To avoid weird autograd behavior, change your forward() method to return x.clone(). :wink:
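For reference, here is a minimal sketch of what that suggestion looks like, assuming the rest of the layer stays as in the snippet above (illustrative, not taken from the linked implementation):

class GradientReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        # clone() instead of view_as(x), as suggested above
        return x.clone()

    @staticmethod
    def backward(ctx, grad_output):
        # unchanged: the gradient is still negated
        return grad_output.neg()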

If I understand the role of autograd.gradcheck() properly, it only checks backpropagation in the traditional sense.

This means there is no way for my function to pass the gradient check, since adding a minus sign to the gradient is unconventional.

gradcheck checks for the “true gradients”.
For your function, the “true gradient” would be 1 (the forward is just the identity). But you deliberately set it to -1, so indeed there is no way it can pass gradcheck.
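To illustrate the point, here is a hedged sketch (not from the thread): gradcheck passes as soon as backward returns the true gradient of the identity forward, i.e. grad_output unchanged instead of its negation.

import torch
from torch import autograd


class Identity(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        return x.clone()

    @staticmethod
    def backward(ctx, grad_output):
        # matches the true derivative of the identity, so gradcheck succeeds
        return grad_output


test_input = torch.randn((3, 4), dtype=torch.float64, requires_grad=True)
print(autograd.gradcheck(Identity.apply, test_input, eps=1e-3))  # True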
