Is it possible to edit the weight matrix after the forward pass but before the backward pass?

Hi,

I’d like to know whether one can edit a weight matrix after the forward pass but before backpropagation runs, so that the weights used in the forward pass are not identical to the ones used during backpropagation. To give a simple example:

import torch

some_const = torch.tensor([5], requires_grad=False, dtype=torch.float)
other_const = torch.tensor([3], requires_grad=False, dtype=torch.float)

x = torch.tensor([1.000], dtype=torch.float, requires_grad=True)
y = some_const * x**5

# swap the constant in place, between the forward and the backward pass
with torch.no_grad():
    some_const.copy_(other_const)

y.backward()
print(x.grad)  # wish to have 15 here (i.e. 3 * 5 * 1**4)

When I run the above, I get the following error:

RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.FloatTensor [1]] is at version 1; expected version 0 instead. Hint: enable anomaly detection to find the operation that failed to compute its gradient, with torch.autograd.set_detect_anomaly(True).

I’m not really familiar with the topic, but I have heard that something called feedback alignment is very close to what I’m trying to do, in case that helps.

Any help is greatly appreciated.

Hi,

The problem here is that the original value of some_const was needed for the backward pass, and you modified it in place.
Hence the error you’re seeing.

If you want to explicitly change that value so that autograd computes “wrong” gradients, then I don’t think you should be doing that :smiley: In particular, which Tensors are saved and reused is an implementation detail of each operation’s backward formula. So you cannot predict what effect changing this Tensor in place will have, because it might not be used in the backward computation at all.
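To make that concrete, here is a minimal sketch (not your exact code, just an illustration): addition does not save its inputs, so the same in-place edit is silently ignored there, while multiplication does save the constant and raises exactly the error you saw.

import torch

c = torch.tensor([5.0])
x = torch.tensor([1.0], requires_grad=True)

# Addition does not need its inputs in backward (d(x + c)/dx = 1),
# so editing c after the forward pass has no effect on the gradient.
y = x + c
with torch.no_grad():
    c.fill_(3.0)
y.backward()
print(x.grad)  # tensor([1.]) -- the edit was simply ignored

# Multiplication does save c (d(c * x)/dx = c), so the same edit now
# trips the version check and raises the RuntimeError from the post above.
x.grad = None
y = c * x
with torch.no_grad():
    c.fill_(7.0)
try:
    y.backward()
except RuntimeError as e:
    print(e)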

Hi @albanD,

Well, to spill the beans a little bit: just for experimentation purposes, I’m working on a method that produces a mask on the weights, and the mask should only be applied during the backward pass. The feedback alignment paper does something similar, using a fixed random matrix in place of the forward weight matrix during backpropagation. Maybe the small example I provided above doesn’t fully capture the complexity of the task… I’m just curious whether it can be done. Could I subclass torch.autograd.Function and implement my own backward() method?

Thanks

If you just want to modify the gradients, you can add a hook to the Tensor with register_hook(); whatever that hook returns will be used as the gradient.
Or, if it is more involved, you can do it with a custom Function as well (see the sketch at the end of this post).
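For example, a minimal sketch of the hook approach, assuming you want to mask a weight’s gradient (the mask and shapes below are just stand-ins for whatever you compute):

import torch

w = torch.randn(4, 3, requires_grad=True)
mask = (torch.rand_like(w) > 0.5).float()  # stand-in for your own mask

# Whatever the hook returns is used as w's gradient from here on.
w.register_hook(lambda grad: grad * mask)

x = torch.randn(2, 4)
loss = (x @ w).sum()
loss.backward()

print(w.grad)  # gradient with the mask applied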

But modifying the weights’ data directly is dangerous and might break without warning in a future version.
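And here is a sketch of the custom Function route, in the spirit of the feedback alignment idea you mentioned: the forward pass uses w, but the gradient flowing back to the input is computed with a separate fixed matrix b (the names and shapes are only illustrative):

import torch

class FeedbackAlignmentLinear(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, w, b):
        ctx.save_for_backward(x, b)
        return x @ w.t()

    @staticmethod
    def backward(ctx, grad_out):
        x, b = ctx.saved_tensors
        grad_x = grad_out @ b        # b replaces w in the backward pass
        grad_w = grad_out.t() @ x    # the weight gradient stays the usual one
        return grad_x, grad_w, None  # no gradient for the fixed matrix b

x = torch.randn(2, 4, requires_grad=True)
w = torch.randn(3, 4, requires_grad=True)
b = torch.randn(3, 4)  # fixed random feedback matrix, never trained

out = FeedbackAlignmentLinear.apply(x, w, b)
out.sum().backward()
print(x.grad)  # computed with b, not with w
print(w.grad)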