TypeError after assigning nn.Parameter to a regular tensor attribute

I am experiencing strange behaviour where, after assigning an nn.Parameter to a tensor attribute, that attribute becomes an nn.Parameter as well, which leads to a TypeError on future assignments:

Cannot assign 'torch.FloatTensor' as parameter 'weight_current' (torch.nn.Parameter or None expected)

This problem appears to be triggered by the following code snippet in one of my modules:

class MyLayer(torch.nn.Linear):

    def forward(self, input, sample):
        if sample:
            self.weight_current = self.weight + torch.randn_like(self.weight)
            self.bias_current = self.bias + torch.randn_like(self.bias)
        else:
            self.weight_current = self.weight
            self.bias_current = self.bias

        return torch.nn.functional.linear(input, self.weight_current, self.bias_current)

That is, after calling forward with sample=False, I cannot call it with sample=True anymore, as this raises the TypeError.

It seems odd to me that this is happening at all. Can somebody explain why PyTorch behaves the way it does and what I can do about it?

That’s because assigning a Parameter to an attribute of an nn.Module has a special meaning: it registers the tensor as a new parameter of the module under that name.
Do you need weight_current to be an attribute on self? Removing the self. will solve the issue.
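Here is a minimal standalone sketch of that behaviour (using a plain torch.nn.Linear just for illustration):

import torch

layer = torch.nn.Linear(2, 2)

# Assigning a Parameter to a module attribute registers it as a
# parameter of the module under that name ...
layer.weight_current = layer.weight

# ... so a later assignment of a plain tensor to the same name fails:
try:
    layer.weight_current = layer.weight + torch.randn_like(layer.weight)
except TypeError as err:
    print(err)
    # cannot assign 'torch.FloatTensor' as parameter 'weight_current'
    # (torch.nn.Parameter or None expected)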


Okay, that is what I suspected. Unfortunately, I need to keep weight_current as an attribute of the class, so removing self. is not an option. Is there anything else I can do?

Is there a particular reason PyTorch decided to implicitly register a new Parameter when one is assigned to an attribute of an nn.Module subclass? It seems to me that in most cases it would be preferable to do this explicitly.

Because in the __init__, you always do self.weight = nn.Parameter(foo).
A workaround is to reset the current weights at the beginning of the forward with
del self.weight_current and del self.bias_current.
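Applied to the module above, a sketch of that workaround might look like this (the hasattr guards are my addition, to avoid an AttributeError on the very first call, before the attributes exist):

class MyLayer(torch.nn.Linear):

    def forward(self, input, sample):
        # Drop any previously registered copies so the assignments
        # below go to plain attributes instead of registered parameters.
        if hasattr(self, 'weight_current'):
            del self.weight_current
        if hasattr(self, 'bias_current'):
            del self.bias_current

        if sample:
            self.weight_current = self.weight + torch.randn_like(self.weight)
            self.bias_current = self.bias + torch.randn_like(self.bias)
        else:
            self.weight_current = self.weight
            self.bias_current = self.bias

        return torch.nn.functional.linear(input, self.weight_current, self.bias_current)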
