torch.Tensor giving error

I was trying to do the following:

import torch
print(torch.Tensor(2,3))

I don't know why, but sometimes it works and gives an output, and sometimes it throws the following error.

RuntimeError                              Traceback (most recent call last)
D:\softwares\anaconda\lib\site-packages\IPython\core\formatters.py in __call__(self, obj)
    700                 type_pprinters=self.type_printers,
    701                 deferred_pprinters=self.deferred_printers)
--> 702             printer.pretty(obj)
    703             printer.flush()
    704             return stream.getvalue()

D:\softwares\anaconda\lib\site-packages\IPython\lib\pretty.py in pretty(self, obj)
    398                 if cls is not object \
    399                         and callable(cls.__dict__.get('__repr__')):
--> 400                     return _repr_pprint(obj, self, cycle)
    401
    402             return _default_pprint(obj, self, cycle)

D:\softwares\anaconda\lib\site-packages\IPython\lib\pretty.py in _repr_pprint(obj, p, cycle)
    693     """A pprint that just redirects to the normal repr function."""
    694     # Find newlines and replace them with p.break_()
--> 695     output = repr(obj)
    696     for idx, output_line in enumerate(output.splitlines()):
    697         if idx:

D:\softwares\anaconda\lib\site-packages\torch\tensor.py in __repr__(self)
     55         # characters to replace unicode characters with.
     56         if sys.version_info > (3,):
---> 57             return torch._tensor_str._str(self)
     58         else:
     59             if hasattr(sys.stdout, 'encoding'):

D:\softwares\anaconda\lib\site-packages\torch\_tensor_str.py in _str(self)
    216         suffix = ', dtype=' + str(self.dtype) + suffix
    217
--> 218     fmt, scale, sz = _number_format(self)
    219     if scale != 1:
    220         prefix = prefix + SCALE_FORMAT.format(scale) + ' ' * indent

D:\softwares\anaconda\lib\site-packages\torch\_tensor_str.py in _number_format(tensor, min_sz)
     94     # TODO: use fmod?
     95     for value in tensor:
---> 96         if value != math.ceil(value.item()):
     97             int_mode = False
     98             break

RuntimeError: Overflow when unpacking long

Can anyone tell me the reason for this behaviour?

My thought was that torch.Tensor(2,3) creates an uninitialized tensor, which is the same as torch.empty(2,3). If this is correct, then https://github.com/pytorch/pytorch/issues/6339 has the solution. If not, can anyone help me out here?
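That equivalence is easy to check: both calls allocate memory without writing to it, so the shapes and dtypes match while the values are whatever happened to be in that memory. A minimal sketch (assuming PyTorch 0.4 or later) that inspects the tensors safely, without triggering their repr:

```python
import torch

# torch.Tensor(2, 3) allocates a 2x3 float tensor without initializing
# its memory -- just like torch.empty(2, 3).
a = torch.Tensor(2, 3)
b = torch.empty(2, 3)

# Metadata is identical; only the (garbage) contents differ.
assert a.shape == b.shape == torch.Size([2, 3])
assert a.dtype == b.dtype == torch.float32

# Reading shape/dtype is always safe, even when printing the values
# is not (printing is what walks the uninitialized entries).
print(a.shape, a.dtype)
```

The crash is intermittent because it depends on the leftover bit patterns: only some garbage values overflow during the repr's number formatting.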

You are most likely right and this should be already fixed.
Which PyTorch version are you using?

Hi, I am using PyTorch version 0.4.

It seems this bug is fixed in the master branch, although I couldn’t reproduce the issue using 0.4.0.
Maybe I was just lucky with the uninitialized values.

However, you can still use the tensor as you wish; just avoid printing it while it is uninitialized.


I guess torch.Tensor just allocates memory on the device (CPU or GPU), and you are trying to print an uninitialized tensor. Assign some values and then print; it should work.
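The advice above can be sketched like this: fill the tensor in place (or construct an initialized one to begin with) before printing. The in-place calls shown (`zero_`, `fill_`) are standard PyTorch methods; either route avoids the overflow in the repr.

```python
import torch

t = torch.Tensor(2, 3)  # allocated but uninitialized
t.zero_()               # write defined values in place before printing
print(t)                # safe now: every entry is 0.0

t.fill_(1.5)            # any other in-place initialization also works
print(t)

# Or simply create an initialized tensor in the first place:
print(torch.zeros(2, 3))
print(torch.rand(2, 3))
```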