This is because torch.max(v) returns another Variable containing a single-element Tensor (so you can keep using it and autograd will work as expected), while torch.max(t) returns a plain Python number.
For conditions (which are not differentiable, so you don't want to keep a Variable) you can use either torch.max(v.data) or torch.max(v).data[0].
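A minimal sketch of the same idea. Note this is an assumption about version: Variable and .data[0] are from PyTorch 0.3 and earlier; since PyTorch 0.4, Variable is merged into Tensor and .item() is the idiomatic way to get a Python number, so the sketch below uses the newer spelling:

```python
import torch

# In PyTorch >= 0.4 a Tensor with requires_grad=True plays the old Variable role.
v = torch.tensor([1.0, 5.0, 3.0], requires_grad=True)

# torch.max on a tracked tensor returns a 0-dim tensor that autograd still tracks,
# so gradients can flow back through the max.
m = torch.max(v)
assert m.requires_grad

# For a condition you want a value detached from autograd:
# detach() here plays the role of .data in the old API.
detached_max = torch.max(v.detach())   # 0-dim tensor, requires_grad=False

# .item() (the successor of .data[0]) gives a plain Python float.
n = torch.max(v).item()

if n > 4.0:                            # safe: no Variable/graph kept alive
    print("max exceeds threshold")
```

The point is the same as in the answer: keep the tracked result when you need gradients, and pull out a detached value or a Python number when you only need it for control flow.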