I’ve just implemented a new decorrelation penalty function in PyTorch, and I’m trying to verify its gradient with torch.autograd.gradcheck. It failed, so I re-checked my previously implemented functions that used to pass gradcheck. Those failed too.
I got curious and started testing torch.inverse, torch.mm, and a few other functions. Every single one of them fails gradcheck.
Is there something wrong with gradcheck in version 0.3.0.post4?
gradcheck has precision arguments (eps, atol, rtol). If your function produces large gradients, the error of the finite-difference approximation grows with them and can exceed the default tolerance of 1e-6. Also make sure you run gradcheck on double-precision inputs; single precision is usually not accurate enough for the numerical Jacobian, so even correct functions like torch.mm can fail.
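A minimal sketch of a passing check, using the current gradcheck signature (the 0.3.0-era call is similar, but inputs had to be wrapped in Variable back then):

```python
import torch
from torch.autograd import gradcheck

# gradcheck compares the analytical Jacobian against a finite-difference
# estimate. With float32 inputs the finite-difference error easily exceeds
# the default tolerances, so use dtype=torch.double for the check.
a = torch.randn(3, 4, dtype=torch.double, requires_grad=True)
b = torch.randn(4, 5, dtype=torch.double, requires_grad=True)

# Returns True if the Jacobians match within atol/rtol;
# by default it raises an exception with details if they don't.
ok = gradcheck(torch.mm, (a, b))
print(ok)
```

If the check still fails in double precision, you can loosen atol/rtol (e.g. `gradcheck(fn, inputs, atol=1e-4)`) to see whether the mismatch is a tolerance issue or a genuine gradient bug.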