Today I ran into a situation where a ByteTensor wrapped in a Variable silently overflows on sum(), while the plain tensor does not. The result is understandable, but shouldn't PyTorch warn about such cases? I personally wasted two days debugging this.
>>> import torch
>>> from torch.autograd import Variable
>>> t = torch.ByteTensor([240, 240])
>>> t.sum()
480
>>> v = Variable(t)
>>> v
Variable containing:
240
240
[torch.ByteTensor of size 2]
>>> v.sum()
Variable containing:
224
[torch.ByteTensor of size 1]
Tensor.sum() with no arguments returns a Python number, which is why the plain tensor appears safe. If you accumulate your Tensor over a given dimension, so that the reduction returns a ByteTensor, it will overflow the same way, as shown below.
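For example, with the tensor from the transcript above (a sketch of the same wrap-around: 480 mod 256 = 224, since the per-dimension reduction keeps the uint8 dtype):

>>> t.sum(0)  # dim-wise sum returns a ByteTensor, so it wraps around mod 256
224
[torch.ByteTensor of size 1]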
I think this issue will be solved when Scalar types are added to autograd.
Yep, I understand that I'm doing the wrong thing here. My problem is that such cases can be missed very easily, so maybe PyTorch should warn about them to make them more obvious.
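In the meantime, here is a sketch of the workaround I'm using: cast to a wider dtype before reducing (assuming the true sum fits in that dtype, e.g. int64 or float32).

>>> v.float().sum()           # accumulate in float32 instead of uint8
Variable containing:
480
[torch.FloatTensor of size 1]
>>> Variable(t.long()).sum()  # or cast the tensor to int64 before wrapping
Variable containing:
480
[torch.LongTensor of size 1]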