torch.transpose() changes an nn.Parameter into a Variable

I need to tie the weights between an Embedding layer and a Linear layer:
emb_mod = nn.Embedding(input_size, emb_size)
linear_mod = nn.Linear(emb_size, input_size)
When I tried to do
linear_mod.weight = torch.transpose(emb_mod.weight, 0, 1)
it raised an error saying that torch.transpose(emb_mod.weight, 0, 1) is a Variable, no longer an nn.Parameter, while linear_mod.weight only accepts Parameter assignments.
How can I achieve this weight tying? Thanks!

You should use the functional interface here, since you need to build your parameters yourself. See nn.functional.linear.
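For reference, here is a minimal sketch of how F.linear handles shapes (the tensor sizes are arbitrary, and it assumes a current PyTorch where Variable and Tensor are merged). The weight argument follows the same (out_features, in_features) convention that nn.Linear uses for its .weight:

```python
import torch
import torch.nn.functional as F

x = torch.randn(2, 5)   # a batch of two 5-dim inputs
W = torch.randn(3, 5)   # weight is (out_features, in_features), as in nn.Linear
y = F.linear(x, W)      # computes x @ W.t(), giving shape (2, 3)
```

Because you pass the weight in on every call, it can be any tensor expression (a Parameter, a transposed view, etc.), with no Parameter-assignment restriction.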

Thanks a lot. But if I use F.linear(x, W), should I define W as a Parameter or a Variable?
If I define it as a Parameter, then when I transpose it to replace the embedding module's weight:
emb_mod.weight = W.t()
the transposed result is no longer a Parameter but a Variable, so the problem still exists.

I meant something like:

y = F.linear(x, emb_mod.weight.t())

So you never need to assign anything back to a module :slight_smile:. And in general, you shouldn't assign to a Module's parameters inside an optimization loop.
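Putting it together, here is a minimal sketch of a tied embedding/decoder module (the class and variable names are mine, not from the thread). One detail worth noting: nn.Linear stores its weight as (out_features, in_features), so the weight of nn.Linear(emb_size, input_size) already has the same shape as the embedding weight, and the embedding matrix can be passed to F.linear without any transpose:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TiedDecoder(nn.Module):
    """Embedding + output projection sharing one weight matrix (illustrative)."""
    def __init__(self, input_size, emb_size):
        super().__init__()
        self.emb = nn.Embedding(input_size, emb_size)  # weight: (input_size, emb_size)

    def forward(self, tokens):
        h = self.emb(tokens)                 # (batch, seq, emb_size)
        # F.linear expects weight shaped (out_features, in_features), which
        # here is exactly the embedding weight -- no second Parameter, and
        # nothing is ever assigned back to a module.
        return F.linear(h, self.emb.weight)  # (batch, seq, input_size)
```

The module ends up holding a single Parameter, so the optimizer updates one tensor and both "layers" stay tied automatically.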