About autograd of Variable

When I convert a Variable to a NumPy array, transform it with NumPy functions, and then convert the result back to a PyTorch Variable to use as input to my neural network, autograd cannot compute gradients for the original Variable. If I want gradients with respect to that original Variable, what should I do?


The autograd engine only supports PyTorch operations. You cannot use NumPy operations if you want gradients to be backpropagated.
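A small sketch of the issue and two possible workarounds, using the current tensor API (where `Variable` has been merged into `Tensor`). The `NumpySquare` class below is a hypothetical example name, not part of PyTorch:

```python
import numpy as np
import torch

x = torch.ones(3, requires_grad=True)

# Round-tripping through NumPy breaks the graph: the reconstructed
# tensor has no history, so backward() cannot reach the original x.
y_broken = torch.from_numpy(np.square(x.detach().numpy()))
assert not y_broken.requires_grad

# Option 1: use the equivalent torch operation, which stays on the graph.
y = torch.square(x)
y.sum().backward()
print(x.grad)  # d(x^2)/dx = 2x -> tensor([2., 2., 2.])

# Option 2 (when no torch equivalent exists): wrap the NumPy code in a
# custom autograd.Function and supply the backward pass by hand.
class NumpySquare(torch.autograd.Function):
    @staticmethod
    def forward(ctx, inp):
        ctx.save_for_backward(inp)
        return torch.from_numpy(np.square(inp.detach().numpy()))

    @staticmethod
    def backward(ctx, grad_output):
        (inp,) = ctx.saved_tensors
        return grad_output * 2 * inp  # manual derivative of x^2

z = NumpySquare.apply(x)  # gradients now flow despite the NumPy forward
```

Option 1 is preferable whenever a torch equivalent of the NumPy function exists; Option 2 requires you to derive and implement the gradient yourself.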

Thank you very much. This question has been bothering me for a long time.