I have a problem (see: Assign variable to model parameter) that could be solved if I could assign a Variable to a model parameter. But apparently this is not allowed: model parameters must be Parameters, and cannot be Variables.
From my understanding, the main differences between Parameters and Variables are that
- Parameters have requires_grad = True by default, and Variables don’t
- Parameters are not allowed to have any history (i.e. their creator field is None). I am not sure about this, but the following example suggests it:
a = torch.nn.Parameter(torch.Tensor([1, 2]))
b = a + 1
print(type(b))  # b is a Variable (a torch.FloatTensor of size 2), not a Parameter
So any operation on a parameter causes it to become a variable.
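To confirm this behavior, here is a minimal sketch (using current PyTorch, where Variables have been merged into Tensors, so "Variable" corresponds to a plain tensor with history):

```python
import torch

# A Parameter is a leaf node: it has no creation history.
a = torch.nn.Parameter(torch.tensor([1.0, 2.0]))
print(isinstance(a, torch.nn.Parameter))  # True
print(a.grad_fn)                          # None -- leaf, no history

# Any operation on it returns a plain tensor (a "Variable" in the old
# terminology) that records its history for autograd.
b = a + 1
print(isinstance(b, torch.nn.Parameter))  # False
print(b.grad_fn)                          # an AddBackward node, i.e. has history
```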
Why is it implemented this way? This incidentally prevents me from assigning a Variable as a model parameter, but I don’t see why that would be undesirable.
Have a look at this thread for details https://github.com/pytorch/pytorch/issues/143
If you want to model complex models such as HyperNetworks, where the parameters of the convolutions are the outputs of a network, you can use the functional interface.
weight = net(input)
output = F.conv2d(input, weight)  # F is torch.nn.functional
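To make the idea concrete, here is a self-contained sketch of that pattern. The names (HyperNet, z) and the layer sizes are illustrative assumptions, not from the thread; the point is that the emitted weight tensor carries autograd history, so gradients flow back into the network that produced it:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical hypernetwork: a small net that emits the weights of a
# 3x3 convolution (8 output channels, 4 input channels) from an embedding z.
class HyperNet(nn.Module):
    def __init__(self, z_dim=16, out_ch=8, in_ch=4, k=3):
        super().__init__()
        self.shape = (out_ch, in_ch, k, k)
        self.fc = nn.Linear(z_dim, out_ch * in_ch * k * k)

    def forward(self, z):
        # The returned weight is a regular tensor with history,
        # not a Parameter, so it can be fed to the functional API.
        return self.fc(z).view(self.shape)

hypernet = HyperNet()
z = torch.randn(16)
x = torch.randn(1, 4, 32, 32)

weight = hypernet(z)                  # generated conv weights
out = F.conv2d(x, weight, padding=1)  # functional conv with those weights
out.sum().backward()                  # gradients reach hypernet.fc
```

Note that the trainable state lives in hypernet.fc (a normal Parameter), while the convolution itself is stateless, which is exactly the decoupling discussed above.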
thanks for your answer!
So to summarize, they made parameters leaf nodes (i.e. nodes with no history) so that when saving and using modules we don’t have to worry about their state. Right?
It’s a matter of giving flexibility while keeping the returned value of .parameters() consistent with what you’d expect.
I liked the idea of decoupling computation and parameters.
But I am new to PyTorch and could not find a way to use the functional interface for recurrent neural networks. Is there one? It appears, from another thread, that double backward for RNNs is not supported.
Are there any workarounds that would allow me to generate variable parameters for RNNs?
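One workaround (a sketch under my own assumptions, not an official API) is to unroll a simple Elman RNN by hand with torch.nn.functional.linear, feeding it weight tensors that could themselves be the outputs of another network:

```python
import torch
import torch.nn.functional as F

def rnn_unroll(x, h0, w_ih, w_hh, b_ih, b_hh):
    # x: (seq_len, batch, input_size). The weight tensors may carry
    # autograd history, e.g. be outputs of a hypernetwork.
    h = h0
    outputs = []
    for t in range(x.size(0)):
        h = torch.tanh(F.linear(x[t], w_ih, b_ih) + F.linear(h, w_hh, b_hh))
        outputs.append(h)
    return torch.stack(outputs), h

# Illustrative sizes; in a real hypernetwork these tensors would be
# network outputs rather than randomly initialized leaves.
input_size, hidden_size, seq_len, batch = 5, 7, 4, 2
w_ih = torch.randn(hidden_size, input_size, requires_grad=True)
w_hh = torch.randn(hidden_size, hidden_size, requires_grad=True)
b_ih = torch.zeros(hidden_size, requires_grad=True)
b_hh = torch.zeros(hidden_size, requires_grad=True)

x = torch.randn(seq_len, batch, input_size)
h0 = torch.zeros(batch, hidden_size)
out, h_n = rnn_unroll(x, h0, w_ih, w_hh, b_ih, b_hh)
out.sum().backward()  # gradients flow into the generated weights
```

This avoids the fused cuDNN RNN kernels entirely, so it sidesteps the double-backward limitation at the cost of speed.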