Gradcheck for custom loss function

Hi,

I’d like to implement a gradient difference loss in PyTorch. However, when I run gradcheck to verify my implementation, it always reports that the gradients are wrong.
Can anyone help?
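
For context, the loss I mean is roughly an L1 penalty on the difference between the pixel-gradient maps of the prediction and the target; a minimal sketch (illustrative only, not my exact implementation) looks like this:

```python
import torch

def gradient_difference_loss(pred, target, alpha=1.0):
    # Image gradients: absolute differences between neighbouring pixels
    # along the height and width dimensions, for prediction and target.
    pred_dy = (pred[..., 1:, :] - pred[..., :-1, :]).abs()
    pred_dx = (pred[..., :, 1:] - pred[..., :, :-1]).abs()
    target_dy = (target[..., 1:, :] - target[..., :-1, :]).abs()
    target_dx = (target[..., :, 1:] - target[..., :, :-1]).abs()
    # Penalise the discrepancy between the two gradient maps.
    loss_y = (pred_dy - target_dy).abs().pow(alpha).mean()
    loss_x = (pred_dx - target_dx).abs().pow(alpha).mean()
    return loss_y + loss_x
```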

I’m not sure there is much that can go wrong when you only use forward operations and let autograd derive the backward pass (except breaking the graph), but gradcheck is very sensitive to numerical precision and should always be run in double precision (you can call m.double() to convert a module's parameters, similar to m.cuda()).
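
For example, a minimal sketch of running gradcheck in double precision (using the gradient_difference_loss sketch from the question, with random inputs) could look like this:

```python
import torch
from torch.autograd import gradcheck

# gradcheck compares the analytical gradients against finite differences,
# so the inputs should be double precision and require gradients.
pred = torch.rand(2, 1, 8, 8, dtype=torch.double, requires_grad=True)
target = torch.rand(2, 1, 8, 8, dtype=torch.double)

# Prints True if the analytical and numerical gradients agree within tolerance;
# otherwise it raises an error describing the mismatch.
print(gradcheck(gradient_difference_loss, (pred, target)))
```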

Best regards

Thomas

Thanks.

I will try it.