Is there a way to have each layer in a network class do a weight update during its forward function? I am developing local learning algorithms and they need to update weights at every layer, and they don’t require gradient descent.
I don't see anything about this in the documentation. Is there a way to do it, or would I be better off with a different framework?
I’m not sure if I completely understand your use case, so feel free to correct me in case I’m missing something.
I assume this means you are never calling .backward() on any tensor associated with the model's output tensor(s). In that case, your update logic would compute the new parameter values through some other approach, without Autograd.
If so, you should be able to manipulate the parameters directly in the forward pass by copying the new values into the corresponding parameter. Something like this would work:
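Here is a minimal sketch of that idea. The `LocalLinear` name and the Hebbian-style outer-product rule are just placeholders for illustration; substitute whatever local learning rule you are developing:

```python
import torch
import torch.nn as nn

class LocalLinear(nn.Module):
    """Linear layer that applies a local (non-gradient) weight update
    during its own forward pass."""
    def __init__(self, in_features, out_features, lr=0.01):
        super().__init__()
        self.linear = nn.Linear(in_features, out_features)
        self.lr = lr

    def forward(self, x):
        out = self.linear(x)
        # Compute the update with your local rule (here: a Hebbian-style
        # outer product, averaged over the batch, purely as an example).
        with torch.no_grad():
            delta = self.lr * out.t() @ x / x.shape[0]
            # Copy the new values into the parameter in place.
            self.linear.weight.copy_(self.linear.weight + delta)
        return out

layer = LocalLinear(4, 3)
x = torch.randn(8, 4)
w_before = layer.linear.weight.clone()
y = layer(x)
print(torch.equal(w_before, layer.linear.weight))  # False: weights changed in forward
```

The in-place `copy_` keeps the same parameter object, so anything holding a reference to it (e.g. `model.parameters()`) still sees the updated values.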
Note that I explicitly wrap the copy in a no_grad() block so that Autograd will ignore it.
If you are not interested in Autograd at all, you could also disable it globally via torch.autograd.set_grad_enabled(False) and would not need this context manager anymore.
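For completeness, a quick sketch of the global approach (the tensor names here are arbitrary):

```python
import torch

# Disable Autograd globally; no computation graph is built anywhere.
torch.autograd.set_grad_enabled(False)

w = torch.nn.Parameter(torch.randn(3, 3))
y = w @ torch.randn(3, 2)
print(y.requires_grad)  # False: the in-place copies no longer need no_grad()
```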