Gradcheck failing on PyTorch built-ins and custom loss functions

I’ve just implemented a new decorrelation penalty function in PyTorch, and I’m trying to verify its gradient with torch.autograd.gradcheck. The check failed, so I re-ran gradcheck on functions I had implemented earlier that used to pass. Those failed too.

I got curious and started testing torch.inverse, torch.mm, and a few other functions. Every single one of them fails gradcheck.

Is there something wrong with gradcheck in version 0.3.0.post4?

gradcheck takes precision arguments. If your function produces large gradients, the numerical error will be larger too, and can exceed the default tolerance (the default finite-difference step eps is 1e-6).
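For instance, you can pass a larger step and tolerance explicitly (a sketch only; eps is the finite-difference step and atol the absolute tolerance used when comparing the analytical and numerical gradients):

import torch
x = torch.autograd.Variable(torch.randn(10, 10), requires_grad=True)
# Loosen the comparison tolerance if your function produces large gradients.
res = torch.autograd.gradcheck(torch.inverse, (x,), eps=1e-3, atol=1e-3)
print(res)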

Even with eps=1e-3, this code still fails:

import torch
x = torch.autograd.Variable(torch.randn(40, 20), requires_grad=True)
y = torch.autograd.Variable(torch.randn(20, 30), requires_grad=True)
res = torch.autograd.gradcheck(torch.mm, (x, y), eps=1e-3)
print(res)

Looks like you are using FloatTensors. I believe gradcheck is only designed to work with DoubleTensors; in single precision, the finite-difference estimates aren’t accurate enough to stay within the default tolerances.

Try adding .double() to the model and the Variables, and you should find it works OK.
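Something like this (a minimal sketch of the snippet above, with the inputs converted to double precision and the tolerances left at their defaults):

import torch
# Converting the inputs to DoubleTensors makes the numerical gradient
# estimate accurate enough for gradcheck's default tolerances.
x = torch.autograd.Variable(torch.randn(40, 20).double(), requires_grad=True)
y = torch.autograd.Variable(torch.randn(20, 30).double(), requires_grad=True)
res = torch.autograd.gradcheck(torch.mm, (x, y))
print(res)  # should print True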
