Consider the following code:

import torch
from torch.autograd import Variable

t = torch.Tensor([1.0])  # example value; a bare torch.Tensor() is empty and torch.max errors on it
bool(torch.max(t) < 2)
bool(torch.max(t) < 3)
However, if you do the same with a Variable:

v = Variable(t)
bool(torch.max(v) < 2)
bool(torch.max(v) < 3)
In the meantime, I call v.data to overcome this.
It is because torch.max(v) returns another Variable containing a single-element Tensor (so that you can keep using it and autograd will work as expected), while torch.max(t) returns a Python number.
For conditions (which are not differentiable, so you don't want to keep a Variable), you can work on the underlying tensor directly via v.data, as you are already doing.
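A minimal sketch of that workaround (the tensor's contents aren't shown in the thread, so the value below is an assumed example; the .item() variant is the idiomatic equivalent in PyTorch 0.4+, where Variable and Tensor were merged):

```python
import torch
from torch.autograd import Variable  # deprecated since PyTorch 0.4

# Example value only; not from the original thread.
v = Variable(torch.Tensor([1.0]))

# Option 1: compare on the underlying tensor via .data,
# keeping the condition outside of autograd.
flag = bool(torch.max(v.data) < 2)

# Option 2 (PyTorch 0.4+): .item() extracts a plain Python number
# from a single-element tensor, which a condition can use directly.
flag_item = torch.max(v).item() < 2
```

Both give an ordinary Python bool for the condition, rather than a Variable that autograd keeps tracking.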
Well that makes sense. Thanks.