Possible to do weight updates in forward pass?

Is there a way to have each layer in a network class update its weights during its forward function? I am developing local learning algorithms that need to update the weights at every layer and do not require gradient descent.

I don't see anything about doing this in the documentation. Is there a way to do this, or would I be better off with a different framework?

I’m not sure if I completely understand your use case, so feel free to correct me in case I’m missing something.

I assume this means you are never calling .backward() on any tensor associated with the model's output tensor(s), and that your parameter update logic computes the new parameter values through some other approach.
If so, you should be able to manipulate the parameters directly in the forward pass by copying the new data into the corresponding parameter. Something like this would work:

def forward(self, x):
    ...
    # compute the new parameter values with your custom update rule
    new_param = self.calculate_new_parameter(self.param)
    # copy into the parameter in-place; no_grad() keeps Autograd from recording it
    with torch.no_grad():
        self.param.copy_(new_param)
    ...

Note that I explicitly wrap the copy in a no_grad() block so that Autograd will ignore it.
If you are not interested in Autograd at all, you could also disable it globally via torch.autograd.set_grad_enabled(False), in which case the context manager is no longer needed.
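For example, a minimal sketch of the global approach (assuming nothing else in your program relies on Autograd):

import torch

# Turn off gradient tracking for the whole process; all subsequent
# operations behave as if they were inside a torch.no_grad() block.
torch.autograd.set_grad_enabled(False)

# Inside forward, the parameter can then be overwritten directly:
# self.param.copy_(new_param)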


Thanks for the quick reply. I think this is what I wanted to do: calculate the new parameters along with the outputs, update the weight parameter, and still pass the output on to the next layer.

I am researching Hebbian learning methods and have been working with my own scuffed framework. Thanks for the input.
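For anyone finding this later, here is a minimal sketch of the kind of layer I have in mind. The HebbianLinear name, the learning rate argument, and the plain outer-product update rule are purely illustrative, not a specific published method; the point is just that the layer updates its own weights inside forward and still returns its output for the next layer:

import torch
import torch.nn as nn

class HebbianLinear(nn.Module):
    """Linear layer that applies a simple Hebbian outer-product update in forward."""

    def __init__(self, in_features, out_features, lr=0.01):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_features, in_features) * 0.01)
        self.lr = lr

    def forward(self, x):
        # standard linear output, shape (batch, out_features)
        y = x @ self.weight.t()
        # Hebbian update: dW = lr * y^T x, averaged over the batch,
        # wrapped in no_grad() so Autograd ignores the in-place change
        with torch.no_grad():
            self.weight.add_(self.lr * (y.t() @ x) / x.shape[0])
        return y

# Example usage: just call forward repeatedly; no .backward() is ever needed.
layer = HebbianLinear(10, 5)
out = layer(torch.randn(8, 10))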