Manually set gradient of tensor that is not being calculated automatically

@albanD I tried using gradcheck, but it raises the following error:

RuntimeError: Jacobian mismatch for output 0 with respect to input 0,
numerical:tensor([[    0.,     0.,     0.,     0.,     0.],
        [    0.,     0.,     0., -5000.,     0.],
        [    0., -5000.,     0.,  5000.,     0.],
        [    0.,     0.,     0.,     0.,     0.],
        [    0.,     0.,     0.,  5000.,  5000.]])
analytical:tensor([[-0.0346, -0.0000, -0.0000, -0.0000, -0.0000],
        [-0.0000, -0.0487, -0.0000, -0.0000, -0.0000],
        [-0.0000, -0.0000, -0.0312, -0.0000, -0.0000],
        [ 0.0000,  0.0000,  0.0000,  0.6791,  0.0000],
        [-0.0000, -0.0000, -0.0000, -0.0000, -0.4282]])

Following your reply to this post, I have created a small input for the gradcheck function.

import torch

# small single-precision inputs; custom_back_method is my custom autograd function
input = (torch.randn(5,1,requires_grad=True), torch.tensor(1), torch.randn(5,1,requires_grad=True), torch.tensor(0.7))
test = torch.autograd.gradcheck(custom_back_method, input, eps=1e-4, atol=1e-6)
print(test)
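
For context, the function under test has roughly this shape; the names and the forward/backward formulas below are placeholders for illustration, not my actual implementation:

import torch

class CustomBackMethod(torch.autograd.Function):
    # Placeholder op with a manually defined backward, standing in for the real one.

    @staticmethod
    def forward(ctx, x, flag, y, alpha):
        ctx.save_for_backward(x, y, alpha)
        return alpha * x * y  # placeholder computation

    @staticmethod
    def backward(ctx, grad_output):
        x, y, alpha = ctx.saved_tensors
        # manually supplied gradients; non-differentiable inputs get None
        return grad_output * alpha * y, None, grad_output * alpha * x, None

custom_back_method = CustomBackMethod.apply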

I have also tried increasing eps, following your suggestion from this post, since I am using single precision. Note that I have also tried double precision (sketch below), which yields similar results. If I set both eps and atol to a very large value like 1.0, gradcheck returns True, but I think that is simply because the tolerance and step size are so loose.
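
For reference, the double-precision attempt looked roughly like this (same placeholder custom_back_method as above, gradcheck left at its default eps and atol):

import torch

input_dp = (
    torch.randn(5, 1, dtype=torch.double, requires_grad=True),
    torch.tensor(1),
    torch.randn(5, 1, dtype=torch.double, requires_grad=True),
    torch.tensor(0.7, dtype=torch.double),
)
# still reports a Jacobian mismatch for me
print(torch.autograd.gradcheck(custom_back_method, input_dp))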

Now it seems to me that the analytical gradient makes more sense than the numerical one. The numerical Jacobian is far too large and does not preserve the identity-like (diagonal) structure, which, as far as I understand, it should. Do you have any suggestions for me?
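
For completeness, this is roughly how I compare the two Jacobians by hand; numerical_jacobian is a small throwaway helper, and custom_back_method is the placeholder defined above:

import torch
from torch.autograd.functional import jacobian

def numerical_jacobian(fn, x, eps=1e-4):
    # Central finite differences over each element of x.
    x = x.detach()
    n_out = fn(x).numel()
    jac = torch.zeros(n_out, x.numel())
    for j in range(x.numel()):
        dx = torch.zeros(x.numel())
        dx[j] = eps
        dx = dx.view_as(x)
        jac[:, j] = (fn(x + dx) - fn(x - dx)).reshape(-1) / (2 * eps)
    return jac

x = torch.randn(5, 1, requires_grad=True)
y = torch.randn(5, 1)  # held fixed so only x varies
fn = lambda t: custom_back_method(t, torch.tensor(1), y, torch.tensor(0.7))

num = numerical_jacobian(fn, x)      # finite-difference Jacobian (5 x 5)
ana = jacobian(fn, x).reshape(5, 5)  # autograd Jacobian (rows: outputs, cols: inputs)
print(num)
print(ana)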