Is it possible to modify module weights with a backward hook, or would it mess up the gradients? If not, is there a way to modify a specific module's weights without having to search through the entire network using `apply`? I'm thinking of something like clipping weights with a backward hook.
Hi Nick,
From the little I have learnt about autograd while converting some simple functions, I would expect that it does not mess up the gradient calculation (the backward pass stores what it needs in a context object). It might not make much sense to modify the weights during the backward pass and then apply the optimizer as usual, but that would depend on what you have in mind.
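As a minimal sketch (assuming a recent PyTorch with `register_full_backward_hook`; the module and clip value here are just placeholders), clipping a module's weights from inside a backward hook could look like this. The clipping runs under `torch.no_grad()` so autograd does not track it:

```python
import torch

# A stand-in module; any nn.Module with parameters works the same way.
model = torch.nn.Linear(3, 2)

def clip_weights_hook(module, grad_input, grad_output):
    # The gradients in grad_input/grad_output were computed from the
    # pre-clip weights; here we only overwrite the parameter values.
    with torch.no_grad():
        for p in module.parameters():
            p.clamp_(-0.01, 0.01)

handle = model.register_full_backward_hook(clip_weights_hook)

x = torch.randn(4, 3, requires_grad=True)
loss = model(x).sum()
loss.backward()  # gradients are computed, then the hook clips the weights
handle.remove()
```

Note that the gradients produced by this backward pass were computed from the unclipped weights, which is part of why clipping after the optimizer step is the more common pattern.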
You can access the parameters as attributes of the module. The documentation for the torch.nn modules lists them under Variables, e.g. for torch.nn.Linear.
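For example, with a tiny `torch.nn.Linear`:

```python
import torch

linear = torch.nn.Linear(3, 2)

# The parameters are ordinary attributes of the module.
print(type(linear.weight))  # <class 'torch.nn.parameter.Parameter'>
print(linear.weight.shape)  # torch.Size([2, 3]) -- (out_features, in_features)
print(linear.bias.shape)    # torch.Size([2])
```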
These are Parameters, so you can manipulate their .data field. The original Wasserstein GAN implementation does this to clip the weights, although they loop over the parameters to catch all of them.
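A sketch of that clipping loop (the model and the clip value 0.01 are just illustrative stand-ins):

```python
import torch

# Stand-in for a critic/discriminator network.
critic = torch.nn.Sequential(
    torch.nn.Linear(3, 5),
    torch.nn.ReLU(),
    torch.nn.Linear(5, 1),
)

clip_value = 0.01
# Loop over all parameters and clip them in place, in the style of the
# original WGAN reference code; manipulating .data keeps autograd out
# of the picture.
for p in critic.parameters():
    p.data.clamp_(-clip_value, clip_value)
```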
I hope this helps.
Best regards
Thomas
Ah yes, duh… it would be stupid to change the weights before the optimizer step. Thanks for the response!