A question about gradcheck

I have written a custom module. When I run gradcheck on this module alone, the result is False (the numerical and analytical gradients do not match), but when I add one conv layer before it and one conv layer after it, the gradient check passes. Is there an error in my checking process?
Any advice would be appreciated!

Are you using FloatTensors or DoubleTensors?
Note that gradcheck's default tolerances are designed for DoubleTensors.
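
For reference, a minimal sketch of how gradcheck is usually called. `MyFunction` here is just a stand-in for your custom op; the important part is that the test input is created as a DoubleTensor so the default tolerances apply:

```python
import torch
from torch.autograd import gradcheck

# Stand-in custom function; replace with your own autograd.Function / module.
class MyFunction(torch.autograd.Function):
    @staticmethod
    def forward(ctx, input):
        return input * 2

    @staticmethod
    def backward(ctx, grad_output):
        # d/dx (2x) = 2
        return grad_output * 2

# gradcheck's default eps/atol assume double precision; with FloatTensors
# the numerical Jacobian is often too noisy to pass the check.
x = torch.randn(4, 5, dtype=torch.double, requires_grad=True)
print(gradcheck(MyFunction.apply, (x,)))
```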

I don't think this is the reason. I wrote the code following the PyTorch example “https://pytorch.org/tutorials/advanced/numpy_extensions_tutorial.html”. I wonder how PyTorch can calculate the gradient of any custom module correctly, and if it can be calculated correctly, why do I need to write the backward function myself instead of relying on the autograd mechanics?

If you are just using PyTorch operations, Autograd will be able to calculate the backward pass automatically (at least for most PyTorch operations).
You only need to write your own backward method if you're using non-PyTorch functions, e.g. ones from numpy.
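
As an illustration, here is a minimal sketch following the pattern of that tutorial (`NumpyExp` is a made-up example op). Because the forward pass calls into numpy, autograd can't trace it, so the backward has to be written by hand:

```python
import numpy as np
import torch

class NumpyExp(torch.autograd.Function):
    @staticmethod
    def forward(ctx, input):
        # The numpy call is opaque to autograd, so we must define backward ourselves.
        result = torch.from_numpy(np.exp(input.detach().numpy()))
        ctx.save_for_backward(result)
        return result

    @staticmethod
    def backward(ctx, grad_output):
        result, = ctx.saved_tensors
        # d/dx exp(x) = exp(x), so reuse the saved forward result.
        return grad_output * result

x = torch.randn(3, dtype=torch.double, requires_grad=True)
NumpyExp.apply(x).sum().backward()
print(x.grad)
```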

OK, thank you! :grinning:

In this case, how would such a method (using functions from numpy) be executed on the GPU?

Unfortunately, numpy operations can't run on the GPU, so you would need to call them on CPU arrays and move the result back afterwards.
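
A rough sketch of the usual workaround (the helper name `numpy_op_on_gpu_tensor` and the `np.sin` call are just placeholders): move the tensor to the CPU, run the numpy code there, and move the result back to the original device:

```python
import numpy as np
import torch

def numpy_op_on_gpu_tensor(t):
    # numpy only sees CPU memory, so detach and copy the tensor to the CPU first.
    out = np.sin(t.detach().cpu().numpy())
    # Move the result back to wherever the input lived (e.g. the GPU).
    return torch.from_numpy(out).to(t.device)

if torch.cuda.is_available():
    x = torch.randn(4, device="cuda")
    y = numpy_op_on_gpu_tensor(x)
    print(y.device)  # cuda:0
```

Note that the CPU round trip adds synchronization and copy overhead, which is one more reason to prefer native PyTorch operations when they exist.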

I see. I thought it would be handled by PyTorch internally : )