This is a question about autograd and PyTorch's gradient management system.
My neural network is like the encoder part of a VAE: it takes images as input and returns a vector R.
From R, I compute the loss L = SomeLossFunction(R).
After calling L.backward(), R will have its own gradient, i.e. something like R.grad.
What I want to do: manually adjust R.grad, then recompute the gradients of all the weight parameters in the network according to this adjusted gradient.
Is this possible without modifying the autograd system too much?
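To make it concrete, here is a sketch of what I mean. The tiny linear "encoder" and the squared-sum loss are just placeholders for my real model and SomeLossFunction:

```python
import torch

# Placeholder "encoder" and data; the real model maps images -> vector R
encoder = torch.nn.Linear(16, 4)
images = torch.randn(8, 16)

R = encoder(images)
L = R.pow(2).sum()  # stand-in for SomeLossFunction(R)

# Gradient of L with respect to R (keep the graph so we can backprop through it again)
(dL_dR,) = torch.autograd.grad(L, R, retain_graph=True)

# Manually adjust the gradient, then push it back through the encoder
adjusted = dL_dR * 2.0  # example adjustment
R.backward(adjusted)    # fills encoder.weight.grad and encoder.bias.grad
```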
Not really sure what you want to do, but you can avoid modifying autograd itself, since you can directly access gradient information via
sometensor.grad (or there may be a more proper way to do this) for leaf tensors with requires_grad=True. In my case, when I hit a wall with an autograd problem, this YouTube video helped me a lot (13 min).
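One concrete option, sketched here with a made-up linear layer, is a tensor hook on R: the function you register runs during backward, and whatever it returns replaces the gradient flowing into R, so the weight gradients downstream are computed from your adjusted value:

```python
import torch

lin = torch.nn.Linear(3, 2)
x = torch.randn(5, 3)
R = lin(x)

# The hook receives R's incoming gradient; the returned tensor replaces it
R.register_hook(lambda g: g * 0.5)  # example adjustment: halve the gradient

loss = R.sum()
loss.backward()
# lin.weight.grad and lin.bias.grad now reflect the halved gradient
```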
By the way, is it appropriate to link a YouTube video here? It's an educational one about PyTorch.
I'm really sorry. I realized what I actually wanted, thanks to your video and the tutorial: just modifying the backward function, like ReLU does, solves my problem. Thanks for your help.
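For anyone who finds this thread later, the shape of that solution is a custom torch.autograd.Function whose backward edits the incoming gradient, similar to how ReLU's backward masks it. The clamping below is just an illustrative choice, not the original poster's actual adjustment:

```python
import torch

class ClampGrad(torch.autograd.Function):
    """Identity in forward; clamps negative gradient components in backward."""

    @staticmethod
    def forward(ctx, x):
        return x

    @staticmethod
    def backward(ctx, grad_output):
        # Adjust the incoming gradient here, ReLU-backward style
        return grad_output.clamp(min=0.0)

x = torch.randn(4, requires_grad=True)
y = ClampGrad.apply(x)
(-y).sum().backward()
# The incoming gradient was all -1, so after clamping x.grad is all zeros
```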
Glad to hear that you found your own solution!