Consider the following code, which works as expected:
In [1]: import torch
In [2]: var = torch.autograd.Variable(torch.LongTensor(0))
In [3]: bool(var)
Out[3]: False
However, if I do the same with a non-empty variable:
In [4]: var = torch.autograd.Variable(torch.LongTensor(1))
In [5]: bool(var)
RuntimeError: bool value of Variable objects containing non-empty torch.LongTensor is ambiguous
It shows an error! How should such cases be handled, e.g. inside an if/else condition?
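For completeness, a few patterns that avoid the ambiguity (a minimal sketch; the tensor values here are illustrative):

```python
import torch

t = torch.LongTensor(1).fill_(3)  # a one-element tensor

# A one-element tensor can be reduced to a Python scalar explicitly:
if t.item() > 0:
    print("positive")

# Emptiness should be tested via the element count, not truthiness:
if t.numel() > 0:
    print("non-empty")

# Distinguishing a tensor from None needs an identity check:
if t is not None:
    print("not None")
```

For multi-element tensors, `t.any()` or `t.all()` make the intended reduction explicit instead of leaving it ambiguous.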
I guess a more appropriate question is: why are you doing bool(var)?
I am not using bool directly. I am trying to run Facebook's end-to-end negotiator code, where the following snippet produces the problem:

if resume:
    inpt = None
else:
    inpt = Variable(torch.LongTensor(1))
    inpt.data.fill_(self.word_dict.get_idx('YOU:'))
    inpt = self.to_device(inpt)
for _ in range(max_words):
    if inpt:  # this condition produces the error when inpt is not None
        inpt_emb = torch.cat([self.word_encoder(inpt), ctx_h], 1)
        lang_h = self.writer(inpt_emb, lang_h)
        lang_hs.append(lang_h)
The if inpt should be if inpt is not None. I think it's a bug in their code.
I found some related posts:
- https://stackoverflow.com/questions/52946920/bool-value-of-tensor-with-more-than-one-value-is-ambiguous-in-pytorch
- RuntimeError: bool value of Variable objects containing non-empty torch.LongTensor is ambiguous
- RuntimeError: bool value of Tensor with more than one value is ambiguous
- https://discuss.pytorch.org/t/why-cant-one-pass-data-through-a-torch-relu-module-directly
If you are here and you are not constructing the layer or loss before using it, that's why you're getting the error. Do:

loss = nn.MSELoss()(y, y_pred)

not

loss = nn.MSELoss(y, y_pred)

They are very different.
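A quick sketch of the difference, with toy tensors:

```python
import torch
import torch.nn as nn

y_pred = torch.zeros(3)
y = torch.ones(3)

# nn.MSELoss is a module: instantiate it first, then call the instance
# on (input, target) tensors.
criterion = nn.MSELoss()
loss = criterion(y_pred, y)
print(loss.item())  # 1.0: mean of (0 - 1)^2 over three elements

# By contrast, nn.MSELoss(y_pred, y) passes the tensors as constructor
# arguments (size_average/reduce in older versions); the constructor then
# evaluates their truth values internally, which is exactly what triggers
# the "bool value ... is ambiguous" error.
```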