ValueError: length of all samples has to be greater than 0, but found an element in 'lengths' that is <=0.
But I checked the code and data, and found no element that is <= 0. What could the problem actually be?
Is there a 30-line code snippet that can reproduce this?
I am seeing the same error. I checked the data and the lengths; all are greater than zero.
ipdb> embeddings
Variable containing:
( 0 ,.,.) =
9.5425e-02 -9.9179e-02 1.0707e-02 ... -7.6814e-03 1.6964e-02 -5.6413e-02
2.3778e-02 9.7818e-02 6.4073e-02 ... 2.8708e-02 -6.0741e-02 -5.7502e-02
8.3500e-02 -1.6935e-02 4.1593e-02 ... -6.4133e-02 9.3276e-02 7.6222e-02
... ⋱ ...
8.3500e-02 -1.6935e-02 4.1593e-02 ... -6.4133e-02 9.3276e-02 7.6222e-02
1.7874e-02 6.9051e-02 -1.9213e-02 ... -4.9683e-02 3.5582e-02 -7.3659e-03
1.1075e-02 3.9153e-02 -8.9664e-02 ... -5.0775e-03 2.4475e-02 -2.7899e-02
( 1 ,.,.) =
8.3446e-02 -9.9876e-02 -4.1541e-02 ... -3.1610e-02 6.2645e-03 -5.3521e-02
8.3446e-02 -9.9876e-02 -4.1541e-02 ... -3.1610e-02 6.2645e-03 -5.3521e-02
8.3446e-02 -9.9876e-02 -4.1541e-02 ... -3.1610e-02 6.2645e-03 -5.3521e-02
... ⋱ ...
8.3446e-02 -9.9876e-02 -4.1541e-02 ... -3.1610e-02 6.2645e-03 -5.3521e-02
8.3446e-02 -9.9876e-02 -4.1541e-02 ... -3.1610e-02 6.2645e-03 -5.3521e-02
8.3446e-02 -9.9876e-02 -4.1541e-02 ... -3.1610e-02 6.2645e-03 -5.3521e-02
( 2 ,.,.) =
4.7567e-02 -9.2625e-02 -6.5104e-02 ... 5.4529e-03 -2.8097e-02 1.6113e-02
7.5976e-03 6.8720e-02 6.1340e-02 ... -3.9622e-02 -2.3815e-02 -2.0370e-02
3.9998e-02 7.2032e-02 -1.1872e-02 ... 2.6354e-02 5.3474e-02 3.4333e-02
... ⋱ ...
3.9998e-02 7.2032e-02 -1.1872e-02 ... 2.6354e-02 5.3474e-02 3.4333e-02
6.4244e-03 1.4605e-02 5.8494e-02 ... -3.7417e-02 5.5668e-02 -8.0501e-02
4.7567e-02 -9.2625e-02 -6.5104e-02 ... 5.4529e-03 -2.8097e-02 1.6113e-02
...
(1428,.,.) =
3.1046e-02 1.8106e-02 1.9945e-02 ... 8.2633e-02 8.4120e-02 -9.7189e-02
-9.1356e-02 -7.9375e-02 -1.0256e-02 ... -7.6632e-02 8.7251e-02 8.0002e-02
-9.1356e-02 -7.9375e-02 -1.0256e-02 ... -7.6632e-02 8.7251e-02 8.0002e-02
... ⋱ ...
-9.1356e-02 -7.9375e-02 -1.0256e-02 ... -7.6632e-02 8.7251e-02 8.0002e-02
-9.1356e-02 -7.9375e-02 -1.0256e-02 ... -7.6632e-02 8.7251e-02 8.0002e-02
-9.1356e-02 -7.9375e-02 -1.0256e-02 ... -7.6632e-02 8.7251e-02 8.0002e-02
(1429,.,.) =
8.3446e-02 -9.9876e-02 -4.1541e-02 ... -3.1610e-02 6.2645e-03 -5.3521e-02
-9.1356e-02 -7.9375e-02 -1.0256e-02 ... -7.6632e-02 8.7251e-02 8.0002e-02
-9.1356e-02 -7.9375e-02 -1.0256e-02 ... -7.6632e-02 8.7251e-02 8.0002e-02
... ⋱ ...
-9.1356e-02 -7.9375e-02 -1.0256e-02 ... -7.6632e-02 8.7251e-02 8.0002e-02
-9.1356e-02 -7.9375e-02 -1.0256e-02 ... -7.6632e-02 8.7251e-02 8.0002e-02
-9.1356e-02 -7.9375e-02 -1.0256e-02 ... -7.6632e-02 8.7251e-02 8.0002e-02
(1430,.,.) =
8.3446e-02 -9.9876e-02 -4.1541e-02 ... -3.1610e-02 6.2645e-03 -5.3521e-02
-9.1356e-02 -7.9375e-02 -1.0256e-02 ... -7.6632e-02 8.7251e-02 8.0002e-02
-9.1356e-02 -7.9375e-02 -1.0256e-02 ... -7.6632e-02 8.7251e-02 8.0002e-02
... ⋱ ...
-9.1356e-02 -7.9375e-02 -1.0256e-02 ... -7.6632e-02 8.7251e-02 8.0002e-02
-9.1356e-02 -7.9375e-02 -1.0256e-02 ... -7.6632e-02 8.7251e-02 8.0002e-02
-9.1356e-02 -7.9375e-02 -1.0256e-02 ... -7.6632e-02 8.7251e-02 8.0002e-02
[torch.cuda.FloatTensor of size 1431x20x200 (GPU 0)]
ipdb> input_lengths
Variable containing:
1431
1215
936
918
918
891
873
855
855
819
810
801
792
765
747
729
720
711
693
693
[torch.LongTensor of size 20]
ipdb> c
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-9-b888f18a64ca> in <module>()
----> 1 train(1)
<ipython-input-8-62f20437433e> in train(epoch)
38 decoder_lengths = decoder_lengths,
39 hidden = decoder_hidden,
---> 40 drop_prob = drop_prob)
41
42 return decoder_target, logits
<ipython-input-6-cdd22ad1dcec> in forward(self, encoder_input, encoder_lengths, decoder_input, decoder_lengths, hidden, drop_prob)
88 hidden = None, drop_prob = 0.5):
89 seq_len, batch_size = encoder_input.size()
---> 90 mu, logvar = self.encode(encoder_input, encoder_lengths)
91 z = self.sample(mu, logvar, drop_prob)
92 logits, hidden = self.decode(decoder_input, decoder_lengths, z, hidden, drop_prob, seq_len)
<ipython-input-6-cdd22ad1dcec> in encode(self, encoder_input, input_lengths)
43 _, batch_size, _ = embeddings.size()
44 Tracer()()
---> 45 packed_in = nn.utils.rnn.pack_padded_sequence(embeddings, input_lengths)
46 finalHidden = self.encoderNet(packed_in, batch_size)
47 mu = self.muNet(finalHidden)
/usr/local/lib64/python2.7/site-packages/torch/nn/utils/rnn.pyc in pack_padded_sequence(input, lengths, batch_first)
51 """
52 if lengths[-1] <= 0:
---> 53 raise ValueError("length of all samples has to be greater than 0, "
54 "but found an element in 'lengths' that is <=0")
55 if batch_first:
ValueError: length of all samples has to be greater than 0, but found an element in 'lengths' that is <=0
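For context, the guard inside pack_padded_sequence is very simple: it only inspects the last element of lengths, because it assumes the lengths are already sorted in decreasing order. Since all the values printed above are positive, the suspect is less likely to be the values themselves than the type of the object being passed as lengths. Here is a minimal, torch-free sketch of that guard (check_lengths is a made-up stand-in, not the real function):

```python
def check_lengths(lengths):
    """Replicates the guard at the top of pack_padded_sequence:
    only lengths[-1] is checked, on the assumption that lengths
    are sorted in decreasing order."""
    if lengths[-1] <= 0:
        raise ValueError("length of all samples has to be greater than 0, "
                         "but found an element in 'lengths' that is <=0")
    return True

# Values like the batch above pass the check fine
assert check_lengths([1431, 1215, 936, 693])

# A single padded-out (zero-length) sample at the END of a
# descending-sorted batch is enough to trigger the ValueError
try:
    check_lengths([10, 5, 0])
except ValueError as e:
    print(e)
```

Note that because only the last element is tested, a zero hiding anywhere else in an unsorted lengths list would slip past this guard and fail later instead.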
This was in PyTorch 0.1.12. After upgrading to '0.3.0.post4', the error I am getting with the same data is this:
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-13-97b62eefdc79> in <module>()
----> 1 packed_in = nn.utils.rnn.pack_padded_sequence(encoder_input, encoder_lengths)
/usr/local/lib64/python2.7/site-packages/torch/nn/utils/rnn.pyc in pack_padded_sequence(input, lengths, batch_first)
51 a :class:`PackedSequence` object
52 """
---> 53 if lengths[-1] <= 0:
54 raise ValueError("length of all samples has to be greater than 0, "
55 "but found an element in 'lengths' that is <=0")
/usr/local/lib64/python2.7/site-packages/torch/autograd/variable.pyc in __bool__(self)
123 return False
124 raise RuntimeError("bool value of Variable objects containing non-empty " +
--> 125 torch.typename(self.data) + " is ambiguous")
126
127 __nonzero__ = __bool__
RuntimeError: bool value of Variable objects containing non-empty torch.ByteTensor is ambiguous
Any idea where the problem is? @smth
So it seems that this runtime error occurs with pack_padded_sequence when the lengths argument is a Variable. I passed the Variable's .data.numpy() instead and it worked without this error. This is confusing, since the documentation says lengths should be a Variable: http://pytorch.org/docs/master/nn.html#torch.nn.utils.rnn.pack_padded_sequence
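This also explains why the error changed between versions: when lengths is a Variable, the expression lengths[-1] <= 0 does not produce a Python bool but another Variable (wrapping a ByteTensor), and calling bool() on that is ambiguous in 0.3.x. The following torch-free sketch mimics this behavior with a made-up FakeVariable class (not real torch code), just to show why unwrapping to plain data fixes it:

```python
class FakeVariable:
    """Made-up stand-in mimicking torch.autograd.Variable (0.3.x)
    comparison semantics; not a real torch class."""
    def __init__(self, data):
        self.data = data

    def __getitem__(self, idx):
        # Indexing a Variable returns another Variable
        return FakeVariable(self.data[idx])

    def __le__(self, other):
        # Comparison returns another Variable (ByteTensor-like),
        # not a Python bool
        return FakeVariable(int(self.data <= other))

    def __bool__(self):
        raise RuntimeError("bool value of Variable objects containing "
                           "non-empty torch.ByteTensor is ambiguous")
    __nonzero__ = __bool__  # Python 2 spelling of __bool__

lengths = FakeVariable([5, 4, 3])

# Same check pack_padded_sequence performs -> RuntimeError,
# even though every length is positive
try:
    if lengths[-1] <= 0:
        pass
except RuntimeError as e:
    print("reproduced:", e)

# The workaround from this thread: unwrap to plain data first
# (in real torch: lengths.data.numpy() or lengths.data.tolist())
plain = lengths.data
assert not (plain[-1] <= 0)
```

So the ValueError in 0.1.12 and the RuntimeError in 0.3.0 are the same underlying issue surfacing differently: the guard's boolean test misbehaves on a Variable, and passing a plain list, numpy array, or LongTensor sidesteps it.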
@hsaghir you are seeing the documentation for the master branch. If you switch to the v0.3.0 documentation, you'll get the expected type signatures.