Today I ran into a situation where a ByteTensor wrapped in a Variable silently overflows, but the raw tensor doesn't. The result is understandable, but shouldn't PyTorch warn about such cases? I personally wasted two days debugging this.
>>> import torch
>>> from torch.autograd import Variable
>>> t = torch.ByteTensor([240, 240])
>>> t.sum()
480
>>> v = Variable(t)
>>> v
Variable containing:
 240
 240
[torch.ByteTensor of size 2]
>>> v.sum()
Variable containing:
 224
[torch.ByteTensor of size 1]
If you sum your tensor over a given dimension such that the result is still a
ByteTensor, it will overflow in the same way.
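The arithmetic behind the overflow can be sketched in plain Python, without torch: an 8-bit unsigned accumulator keeps only the sum modulo 256, so 240 + 240 wraps from 480 to 224.

```python
# Minimal sketch (pure Python, no torch required) of the 8-bit
# wraparound behind the ByteTensor result above: a uint8 accumulator
# effectively computes the sum modulo 256.
values = [240, 240]

true_sum = sum(values)     # ordinary integer sum: 480
byte_sum = true_sum % 256  # what an 8-bit accumulator keeps: 224

print(true_sum, byte_sum)  # → 480 224
```

This is why the wrapped result comes back as 224 with no error: modular wraparound is the defined behavior for unsigned 8-bit integers, so nothing in the arithmetic itself signals that information was lost.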
I think this issue will be solved once Scalar types are added to the autograd.
Yep, I understand that I'm doing the wrong thing here; my problem is that such cases can be missed very easily, so maybe PyTorch should warn about them to make them more obvious.
But I have no idea how that should be done :).
We are fixing this once and for all by introducing Scalar types. Work is ongoing.