## Problem

I am trying to understand how the gradient reversal layer works.

I found a successful implementation reported here. The author claimed that, by using the gradient reversal operation, he could reproduce the results in the research papers. The operation is essentially the following (minor differences may arise from API changes):

```
import torch

class GradientReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        # Forward pass is the identity
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Backward pass flips the sign of the incoming gradient
        return grad_output.neg()
```
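For reference, here is the minimal sanity check I run on the layer itself (self-contained, so it repeats the class definition; the variable names are my own):

```python
import torch

class GradientReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        # Forward pass is the identity
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Backward pass flips the sign of the incoming gradient
        return grad_output.neg()

x = torch.ones(3, requires_grad=True)
y = GradientReverse.apply(x)
y.sum().backward()
print(x.grad)  # tensor([-1., -1., -1.]) — sum() would normally give all +1
```

So the sign flip itself behaves as I expect in ordinary backpropagation.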

However, when I tried to verify that this operation is correct with `torch.autograd.gradcheck`,

```
test_input = torch.randn((3, 4), dtype=torch.float64, requires_grad=True)
flag = torch.autograd.gradcheck(GradientReverse.apply, test_input, eps=1e-3)
```

the following error occurred:

```
RuntimeError: Jacobian mismatch for output 0 with respect to input 0,
```
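To make the mismatch concrete, I also compared the analytic gradient against a hand-rolled central-difference estimate (a sketch of what I understand `gradcheck` to be comparing; the finite-difference loop and names here are my own, not `gradcheck` internals):

```python
import torch

class GradientReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return grad_output.neg()

x = torch.randn(3, dtype=torch.float64, requires_grad=True)

# Analytic gradient, via the custom backward
y = GradientReverse.apply(x)
y.sum().backward()
analytic = x.grad.clone()

# Numerical gradient of the forward function, via central differences
eps = 1e-6
numeric = torch.zeros_like(x)
with torch.no_grad():
    for i in range(x.numel()):
        e = torch.zeros_like(x)
        e[i] = eps
        numeric[i] = (GradientReverse.apply(x + e).sum()
                      - GradientReverse.apply(x - e).sum()) / (2 * eps)

print(analytic)  # all -1
print(numeric)   # all +1: opposite sign to the analytic gradient
```

The two Jacobians disagree in sign everywhere, which matches what the `gradcheck` error reports.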

I am not sure what mistake I made when doing the gradient check.

Could someone help me? Thank you in advance!