I want to create a learnable input matrix, so I create it as an nn.Parameter. But I want to apply a relu to make it non-negative and a softmax to normalize it.
```python
self.weight = nn.Parameter(torch.randn(10, 10))

def forward(self, x):
    self.weight = F.softmax(self.weight.relu(), dim=1)
```
The error is:

cannot assign 'torch.FloatTensor' as parameter 'weight' (torch.nn.Parameter or None expected)
So what is the right way to do it?
I am not sure of the full context of what you are trying to achieve, but I don't think you need to assign self.weight again. It is already learnable, and it will accumulate gradients when you backpropagate.
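As a minimal illustration of this point (not from the thread itself): a plain nn.Parameter receives gradients during backprop without any reassignment:

```python
import torch
import torch.nn as nn

weight = nn.Parameter(torch.randn(10, 10))
x = torch.randn(10)

# Any loss computed from the parameter...
loss = (weight @ x).sum()
loss.backward()

# ...populates weight.grad; no reassignment of the parameter is needed.
print(weight.grad is not None)  # True
```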
I want to create an adjacency matrix in a GNN and want it to be learnable. If I do not assign it again in forward, where should I apply the relu and the softmax?
I see. One way is to treat self.weight as logits and apply the softmax in forward to get the adjacency matrix (I am not sure why the relu is needed), then continue with the GNN operations. But you do not need to assign the result back to self.weight; the logits will be learned when you backpropagate.
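A concrete sketch of this suggestion (the module name, sizes, and the matrix-multiply "propagation" step are placeholder assumptions, not the actual GNN from the thread):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LearnableAdjacency(nn.Module):
    def __init__(self, num_nodes=10):
        super().__init__()
        # Logits for the adjacency matrix; this stays an nn.Parameter.
        self.weight = nn.Parameter(torch.randn(num_nodes, num_nodes))

    def forward(self, x):
        # Derive the adjacency matrix from the logits on each forward pass.
        # Assign it to a local variable instead of back to self.weight.
        adj = F.softmax(self.weight.relu(), dim=1)
        # Placeholder propagation step standing in for real GNN ops.
        return adj @ x
```

The logits in self.weight are updated by the optimizer as usual; the relu/softmax transform is just part of the forward computation graph.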
Oh, yeah. Now I realize my question was silly: I should assign the result to another variable. Thanks for your reply!