Warning on Variable overflow situations?

Hi!

Today I ran into a situation where a ByteTensor wrapped in a Variable silently overflows on sum(), while the plain tensor doesn't (its sum() returns a Python number). It's an understandable result, but shouldn't PyTorch warn about such cases? I personally wasted two days debugging this :slight_smile:

>>> import torch
>>> from torch.autograd import Variable
>>> t = torch.ByteTensor([240, 240])
>>> t.sum()
480
>>> v = Variable(t)
>>> v
Variable containing:
 240
 240
[torch.ByteTensor of size 2]
>>> v.sum()
Variable containing:
 224
[torch.ByteTensor of size 1]
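
For reference, uint8 arithmetic wraps modulo 256, so 240 + 240 = 480 becomes 480 % 256 = 224. One possible workaround (a minimal sketch, assuming the same t and Variable as above; not an official fix) is to cast to a wider type before reducing, so accumulation happens in int64:

>>> (240 + 240) % 256
224
>>> Variable(t.long()).sum()  # cast ByteTensor to LongTensor first
Variable containing:
 480
[torch.LongTensor of size 1]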

If you accumulate your tensor over a given dimension such that it returns a ByteTensor, it will overflow the same way.
I think this issue will be solved when Scalar types are added to autograd.
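
For example (continuing the session above; the exact printed shape may differ slightly between versions), reducing the plain tensor over dimension 0 returns a ByteTensor rather than a Python number, and it wraps in exactly the same way:

>>> t.sum(0)  # reduction returns a ByteTensor, so it overflows
 224
[torch.ByteTensor of size 1]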

Yep, I understand that I'm doing the wrong thing here; my problem is that such cases can be missed very easily, so maybe PyTorch should warn about them to make them more obvious.

But I have no idea how it should be done :).

We are fixing this once and for all by introducing Scalar types. Work is ongoing.
