Why can't model parameters be variables?

Hello,

I have a problem (Assign variable to model parameter - #2 by antspy) that could be solved if I could assign a variable to a model parameter. But apparently this is not allowed, i.e. model parameters must be Parameters and cannot be Variables.

From my understanding, the main differences between Parameters and Variables are that

  1. Parameters have requires_grad = True by default, and Variables don’t
  2. Parameters are not allowed to have any history (i.e. their creator field is None). Actually I am not sure about this, but the example below suggests it:

>>> import torch
>>> a = torch.nn.Parameter(torch.Tensor([1, 2]))
>>> b = a + 1
>>> b
Variable containing:
 2
 3
[torch.FloatTensor of size 2]

So any operation on a Parameter produces a plain Variable.
Why is it implemented this way? It incidentally prevents me from assigning a variable as a model parameter, but I don’t see why that would be undesirable.
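
A quick check of difference 1 as well (my own sketch, using the Variable API of this PyTorch version):

import torch
from torch.autograd import Variable

p = torch.nn.Parameter(torch.Tensor([1, 2]))
v = Variable(torch.Tensor([1, 2]))
print(p.requires_grad)  # True:  Parameters require grad by default
print(v.requires_grad)  # False: plain Variables don’t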

Thank you!


Have a look at this thread for details: https://github.com/pytorch/pytorch/issues/143
If you want to build complex models such as HyperNetworks, where the parameters of the convolutions are the outputs of another network, you can use the functional interface:

import torch.nn.functional as F

weight = net(input)               # some network produces the conv weights
output = F.conv2d(input, weight)  # apply them via the functional interface
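
For a more concrete picture, here is a minimal self-contained sketch of the same idea; the hypernetwork, names, and shapes are made up for illustration (and written against the current PyTorch API, where tensors no longer need Variable wrapping):

import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical hypernetwork: maps a 16-dim embedding to the weights
# of a 3-in / 8-out channel 3x3 convolution (8 * 3 * 3 * 3 = 216 values).
hyper = nn.Linear(16, 8 * 3 * 3 * 3)

z = torch.randn(1, 16)                   # conditioning embedding
weight = hyper(z).view(8, 3, 3, 3)       # reshape into conv weight layout
x = torch.randn(1, 3, 32, 32)
output = F.conv2d(x, weight, padding=1)  # gradients flow back into hyper

Because weight is an ordinary tensor with autograd history, backpropagating through output updates hyper’s own parameters rather than weight itself.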

Hi,

thanks for your answer!

So to summarize, they made parameters leaf nodes (i.e. stateless, with no autograd history) so that when saving and using modules we don’t have to worry about the state. Right?

It’s a matter of giving flexibility while keeping the return value of .parameters() consistent with what you’d expect.
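
For instance, a quick sketch (not from this thread) of what that consistency looks like:

import torch.nn as nn

m = nn.Linear(4, 2)
# .parameters() yields exactly the leaf Parameters of the module,
# which is what optimizers expect to be handed.
for p in m.parameters():
    print(type(p), p.requires_grad)  # Parameter, True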


I liked the idea of decoupling computation and parameters.

But I am new to PyTorch and could not find a way to use a functional interface for recurrent neural networks; is there one? It appears, from another thread, that double backward is not supported for RNNs.

Are there any workarounds that would allow me to generate variable parameters for RNNs?
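
What I have in mind is something along these lines: building the recurrence out of functional ops, so the weights can be the outputs of another network (a rough sketch of my own, not an existing API):

import torch
import torch.nn.functional as F

def rnn_step(x_t, h, w_ih, w_hh, b):
    # One vanilla-RNN step whose weights are ordinary inputs,
    # so they could be generated by another network.
    return torch.tanh(F.linear(x_t, w_ih) + F.linear(h, w_hh) + b)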
