Update layer parameters manually during forward pass

Hi all, I would like to know whether it is possible to update the parameters of a layer manually during the forward pass of a network. I need to calculate and update the parameters of a layer based on a previous layer's output.
Thank you.

You can do that (before you use the layer) if you want, using something like

with torch.no_grad():
    # you might also do the computation of new_value here
    self.next_layer.my_param.copy_(new_value)

or so.
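For context, here is a minimal, self-contained sketch of how that could look inside a module's forward pass. The layer names (prev_layer, next_layer) and the rule used to compute new_value are made up for illustration; the point is only that the parameter is recomputed and copied under no_grad before the layer is applied.

import torch
import torch.nn as nn

class AdaptiveNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.prev_layer = nn.Linear(8, 4)
        self.next_layer = nn.Linear(4, 2)

    def forward(self, x):
        h = self.prev_layer(x)
        with torch.no_grad():
            # compute the new parameter value from the previous layer's
            # output (arbitrary rule, purely illustrative) and copy it in;
            # nothing inside this block is recorded by autograd
            new_value = self.next_layer.weight * h.abs().mean()
            self.next_layer.weight.copy_(new_value)
        # the layer itself is applied outside no_grad, so backprop sees
        # the freshly copied weight
        return self.next_layer(h)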

Best regards

Thomas

@tom thanks for the reply. Are you suggesting doing this within the training loop? When I use the torch.no_grad() wrapper, how will it affect the backpropagation of the whole network?

No, sorry, I edited the answer to clarify. The gradient would be taken w.r.t. the new value of my_param, and you could compute new_value inside the no_grad block.
The remaining computation - the part that needs to be backpropagated - would not be in no_grad, of course.
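To make that concrete with the hypothetical AdaptiveNet sketch from above: the copy itself leaves no trace in the autograd graph, but the subsequent use of the layer does, so after backward() the parameter's .grad is computed w.r.t. the value that was copied in during the forward pass.

net = AdaptiveNet()
out = net(torch.randn(3, 8))
out.sum().backward()
# gradient w.r.t. the copied-in weight; shape matches the parameter
print(net.next_layer.weight.grad.shape)  # torch.Size([2, 4])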

Best regards

Thomas