I have a problem (Assign variable to model parameter - #2 by antspy) which could be solved if I could assign a variable to a model parameter. But apparently this is not allowed: model parameters must be Parameters, and cannot be Variables.
From my understanding, the main differences between Parameters and Variables are that:

- Parameters have `requires_grad = True` by default, while Variables don't.
- Parameters are not allowed to have any history (i.e. their `creator` field is `None`). Actually I am not sure about this, but this example suggests it:
```python
>>> a = torch.nn.Parameter(torch.Tensor([1, 2]))
>>> b = a + 1
>>> b
Variable containing:
 2
 3
[torch.FloatTensor of size 2]
```
So any operation on a Parameter causes the result to become a Variable.
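A minimal sketch of the same check in current PyTorch (where Variables have been merged into Tensors, and `creator` is now `grad_fn`): the result of an operation on a Parameter is an ordinary tensor with history, while the Parameter itself stays a leaf.

```python
import torch

a = torch.nn.Parameter(torch.Tensor([1.0, 2.0]))
b = a + 1

print(type(a))   # torch.nn.parameter.Parameter
print(type(b))   # plain torch.Tensor, no longer a Parameter

# a is a leaf with no history; b records the addition for autograd
print(a.grad_fn)              # None
print(b.grad_fn is not None)  # True
```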
Why is it implemented this way? This incidentally prevents me from assigning a variable as a model parameter, but I don't see why that would be undesirable.
Have a look at this thread for details: https://github.com/pytorch/pytorch/issues/143
If you want to model complex models such as HyperNetworks, where the parameters of the convolutions are the outputs of a network, you can use the functional interface.
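For example, a minimal sketch of this idea (all names here are illustrative, not from a real HyperNetworks implementation): a small linear "hypernetwork" produces the convolution weight, which is then passed to `F.conv2d` through the functional interface. Unlike `nn.Conv2d`, whose `.weight` must be a Parameter, `F.conv2d` accepts any tensor, including one with history.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HyperConv(nn.Module):
    """Convolution whose weight is generated by another network."""
    def __init__(self, in_ch, out_ch, k, embed_dim=8):
        super().__init__()
        # The hypernetwork itself has ordinary Parameters.
        self.weight_gen = nn.Linear(embed_dim, out_ch * in_ch * k * k)
        self.shape = (out_ch, in_ch, k, k)

    def forward(self, x, z):
        # The generated weight is a non-leaf tensor with grad history;
        # the functional interface accepts it directly.
        w = self.weight_gen(z).view(self.shape)
        return F.conv2d(x, w, padding=1)

m = HyperConv(3, 4, 3)
out = m(torch.randn(1, 3, 8, 8), torch.randn(8))
print(out.shape)  # torch.Size([1, 4, 8, 8])
```

Gradients then flow through the generated weight back into the hypernetwork's own parameters.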
I liked the idea of decoupling computation and parameters.
But I am new to PyTorch, and could not find a functional interface for recurrent neural networks; is there one? It also appears, from another thread, that double backward is not supported for RNNs.
Are there any workarounds that would allow me to generate variable parameters for RNNs?
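One workaround sketch (my own, not an official API): write the RNN step by hand with plain tensor ops. Since nothing in the step requires Parameters, the weights can be any tensors with history, e.g. outputs of another network. Here the "generated" weights are just arithmetic on a leaf tensor, to keep the example self-contained.

```python
import torch

def rnn_step(x, h, W_ih, W_hh, b):
    # One Elman RNN step: h' = tanh(x W_ih + h W_hh + b)
    return torch.tanh(x @ W_ih + h @ W_hh + b)

# Stand-in for weights produced by a hypernetwork: non-leaf tensors
# carrying gradient history back to `base`.
base = torch.randn(4, 5, requires_grad=True)
W_ih = base * 2.0                                # has a grad_fn
W_hh = torch.randn(5, 5, requires_grad=True) + 0.0
b = torch.zeros(5)

h = torch.zeros(1, 5)
for t in range(3):
    h = rnn_step(torch.randn(1, 4), h, W_ih, W_hh, b)

h.sum().backward()
print(base.grad.shape)  # torch.Size([4, 5])
```

The trade-off is losing the fused cuDNN kernels that `nn.RNN`/`nn.LSTM` use, so this is slower, but it sidesteps the Parameter restriction entirely.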