Variables have been deprecated since PyTorch 0.4.0 (so for 4 years now). nn.Parameter wraps a tensor and marks it as trainable. Parameters are typically initialized inside an nn.Module and trained afterwards.
If you are writing a custom module, this is an example of how nn.Parameter is used:
import torch
import torch.nn as nn

class MyModel(nn.Module):
    def __init__(self):
        super().__init__()
        # registered as a trainable parameter of this module
        self.param = nn.Parameter(torch.randn(1, 1))

    def forward(self, x):
        x = x * self.param
        return x

model = MyModel()
print(dict(model.named_parameters()))
# {'param': Parameter containing:
# tensor([[0.6077]], requires_grad=True)}

out = model(torch.randn(1, 1))
loss = out.mean()
loss.backward()
print(model.param.grad)
# tensor([[-1.3033]])
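Once the parameter is registered like this, you can hand it to an optimizer via model.parameters(). A minimal sketch continuing the example above, with dummy data and an arbitrary learning rate chosen only for illustration:

optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

for _ in range(10):
    optimizer.zero_grad()
    out = model(torch.randn(8, 1))  # dummy input batch
    loss = out.mean()               # dummy loss
    loss.backward()                 # gradients land in model.param.grad
    optimizer.step()                # updates model.param in place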
At the most basic technical level, Variable has been absorbed by Tensor. In the old days (pre-deprecation), a Variable wrapped a Tensor, adding the structure (such as the requires_grad property) necessary for the Tensor to participate in autograd. Now Tensor has that structure built in directly. (The cost in overhead is negligible if you don't use that structure.) So you can use autograd to train a Tensor without further ado.
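For example, a plain Tensor created with requires_grad=True can be trained directly; here is a small sketch (the target value and learning rate are arbitrary, just to show the mechanics):

import torch

w = torch.randn(1, requires_grad=True)  # a plain tensor, no Parameter involved

for _ in range(100):
    loss = (w - 3.0).pow(2).sum()  # push w towards 3.0
    loss.backward()                # autograd works on the plain tensor
    with torch.no_grad():
        w -= 0.1 * w.grad          # manual gradient-descent step
        w.grad.zero_()

print(w)  # close to tensor([3.])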
Parameter wraps Tensor to help Modules and Optimizers keep track of which Tensors you want to train. If you have a trainable Tensor in a Module, you will typically want to wrap it in a Parameter so that, for example, it shows up automatically when you call my_module.parameters(), which is a useful convenience. (But, again, it doesn't need to be a Parameter to work with autograd.)
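To illustrate the difference, here is a small made-up module (TwoWeights is just a hypothetical name for this sketch) with one nn.Parameter and one plain tensor attribute; only the Parameter is registered and visible to parameters():

import torch
import torch.nn as nn

class TwoWeights(nn.Module):
    def __init__(self):
        super().__init__()
        self.registered = nn.Parameter(torch.randn(1))    # shows up in parameters()
        self.hidden = torch.randn(1, requires_grad=True)  # works with autograd, but is not registered

module = TwoWeights()
print([name for name, _ in module.named_parameters()])
# ['registered']  -- the plain tensor is invisible to parameters() and to any optimizer built from it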