Hi,
I’m looking to see whether one can edit a weight matrix after the forward pass has run but before backpropagation takes place, so that the weight matrices used in the forward pass are not identical to the ones used during backpropagation. As a simple example:
import torch

some_const = torch.tensor([5.0])   # constant used in the forward pass
other_const = torch.tensor([3.0])  # constant I would like backward to use instead
x = torch.tensor([1.0], requires_grad=True)

y = some_const * x**5

with torch.no_grad():
    some_const.copy_(other_const)  # swap the constant before backprop

y.backward()
print(x.grad)  # wish to have 15 here, i.e. 5 * other_const * x**4 at x = 1
When I try the above, I get this error:
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.FloatTensor [1]] is at version 1; expected version 0 instead. Hint: enable anomaly detection to find the operation that failed to compute its gradient, with torch.autograd.set_detect_anomaly(True).
I’m not really familiar with the topic, but I have heard that something called feedback alignment is very close to what I’m trying to do, in case that helps.
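From what I understand, feedback alignment uses fixed random matrices instead of the transposed forward weights during the backward pass, which sounds like the same “different weights for forward and backward” idea. My guess is that a custom torch.autograd.Function could do this for my toy example; here is a rough sketch of what I imagine (the class name DecoupledScale is just something I made up, and I’m not sure this is the right or idiomatic approach):

import torch

class DecoupledScale(torch.autograd.Function):
    """Use one constant in the forward pass and a different one in the backward pass."""

    @staticmethod
    def forward(ctx, x, forward_const, backward_const):
        ctx.save_for_backward(x, backward_const)
        return forward_const * x**5

    @staticmethod
    def backward(ctx, grad_output):
        x, backward_const = ctx.saved_tensors
        # gradient of backward_const * x**5 with respect to x: 5 * backward_const * x**4
        grad_x = grad_output * 5 * backward_const * x**4
        return grad_x, None, None  # no gradients for the two constants

some_const = torch.tensor([5.0])
other_const = torch.tensor([3.0])
x = torch.tensor([1.0], requires_grad=True)

y = DecoupledScale.apply(x, some_const, other_const)
y.backward()
print(x.grad)  # tensor([15.])

Is something along these lines the intended way to do it, or is there a cleaner mechanism (hooks, etc.) for swapping the weights between forward and backward?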
Any help is greatly appreciated.