I am using some of the functions provided by `torch.nn.functional`. The input to each function is a `Tensor`, and I expect the output to also be a `Tensor`. However, it seems that some of the functions return a `torch.autograd.Variable` instead, which seems strange.

For example, if the input `x` is a tensor, `nn.functional.relu(x)` returns a `Variable`, while a function like `nn.functional.normalize(x)` returns a `Tensor`.
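Here is a minimal snippet showing how I'm checking the return types (note: on newer PyTorch releases, where `Variable` has been merged into `Tensor`, both calls may report the same type):

```python
import torch
import torch.nn.functional as F

x = torch.randn(3, 4)  # plain tensor input

# Inspect the type each functional op returns
print(type(F.relu(x)))       # reported as torch.autograd.Variable in my setup
print(type(F.normalize(x)))  # reported as torch.Tensor in my setup
```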
Why is there a distinction? What is the rationale behind this?