Best way to directly modify network weights with gradient

Hello PyTorch Community,

I am trying to implement the following efficiently, but I am finding it difficult. I would greatly appreciate your help.

I am simply trying to perturb the weights of a neural network, perform a forward pass, and then calculate the gradients with respect to the original weights. In math: I have $f_{w}$ (a standard nn.Module with parameters $w$) and I would like to calculate $\nabla_{w}\, f_{w + e}(x)$, where $e$ is a fixed vector (which does not require gradients) and $x$ is some fixed data.
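
For concreteness, the most direct approach I can think of is to add $e$ to the parameters in place, run backward, and then restore them (a toy nn.Linear stands in for my real model below). This does compute the right thing, since $\nabla_{w}\, f_{w+e}(x)$ is just the gradient of $f$ evaluated at $w + e$, but mutating the module's weights in place feels fragile:

```python
import torch
import torch.nn as nn

model = nn.Linear(4, 1)   # stands in for f_w
x = torch.randn(8, 4)     # fixed data

# Fixed perturbation e, one tensor per parameter; e needs no gradient.
e = {name: 0.01 * torch.randn_like(p) for name, p in model.named_parameters()}

# Perturb in place (under no_grad so autograd does not track the mutation)...
with torch.no_grad():
    for name, p in model.named_parameters():
        p.add_(e[name])

# ...forward/backward: p.grad now holds the gradient of f at w + e,
# which equals the gradient of f_{w+e}(x) with respect to w...
model(x).sum().backward()

# ...and restore the original weights.
with torch.no_grad():
    for name, p in model.named_parameters():
        p.sub_(e[name])
```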

I feel like this should be very easy to do efficiently, but I am finding it surprisingly difficult. I may be overlooking a simple solution, so I am asking here.

Thank you very much for your assistance.

Hello, has anyone tried to do this before? It seems like it should be somewhat common.

Sounds like you want to register a parametrization -
https://pytorch.org/docs/master/generated/torch.nn.utils.parametrize.register_parametrization.html?highlight=parametrization#torch.nn.utils.parametrize.register_parametrization

This requires PyTorch version >= 1.9 though.
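
For example, here is a minimal sketch of a parametrization that adds a fixed, non-trainable perturbation (the AddFixedPerturbation module and the toy Linear model are just illustrative):

```python
import torch
import torch.nn as nn
import torch.nn.utils.parametrize as parametrize

class AddFixedPerturbation(nn.Module):
    """Maps the original parameter w to w + e, for a fixed non-trainable e."""
    def __init__(self, e):
        super().__init__()
        self.register_buffer("e", e)  # buffer, so no gradient flows into e

    def forward(self, w):
        return w + self.e

model = nn.Linear(4, 1)
for name in ["weight", "bias"]:
    p = getattr(model, name)
    parametrize.register_parametrization(
        model, name, AddFixedPerturbation(0.01 * torch.randn_like(p))
    )

x = torch.randn(8, 4)
model(x).sum().backward()  # forward uses w + e; backward reaches w

# The original (unperturbed) parameters live under model.parametrizations,
# and that is where the gradients accumulate:
print(model.parametrizations.weight.original.grad)
```

After registration, model.weight becomes a property that recomputes w + e on each access, so the optimizer and .grad still see the original w.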

Hi soulitzer, thanks for your reply.

This looks absolutely amazing! I have not been this excited for a while. I will try it out and report back here with my experience.
