Torch tensor print

Dear torchers,

I have a short comment/thought on printing PyTorch tensors. The thing is that these tensors are usually big, and Python's print function shows a shortened version of the tensor, which makes sense.
For example:
embs_txt = tensor([[ 0.0010,  0.0058, -0.0007,  ...,  0.0090, -0.0016,  0.0083],
        [-0.0112,  0.0075, -0.0007,  ...,  0.0104,  0.0091,  0.0010],
        [-0.0016,  0.0067, -0.0073,  ...,  0.0081,  0.0051,  0.0004],
        [ 0.0019,  0.0009, -0.0082,  ...,  0.0181,  0.0193,  0.0007]],
       device='cuda:0')

However, I would like to suggest removing the rounding of the values themselves, because sometimes you check your code and want a quick comparison of whether the order, or the values themselves, match. For example, you compare such a print against weights recorded in a file. In that case, one may find that the stored weights contain no value that starts with 0.0010 or ends with 0.0007, simply because the printed numbers are rounded. I think this can be confusing from time to time, especially for newcomers.
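Here is a minimal sketch of what I mean (the values are made up for illustration): the default print rounds to four decimal places, so the displayed number does not literally appear among the stored values.

import torch

# hypothetical values, chosen to show the effect of the default 4-digit rounding
t = torch.tensor([0.00104999, 0.00074999])

print(t)            # prints roughly: tensor([0.0010, 0.0007])
print(t[0].item())  # the actual stored float32 value, ~0.0010499...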
Please share what you think.

Cheers,
V.D.

You could use torch.set_printoptions(precision=...) to increase the printed precision of the values in case that’s needed.
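For example (a minimal sketch; the tensor values are made up):

import torch

t = torch.tensor([0.00104999, 0.00074999])
print(t)  # default precision is 4: tensor([0.0010, 0.0007])

# show more digits so values can be compared against saved weights
torch.set_printoptions(precision=8)
print(t)  # roughly: tensor([0.00104999, 0.00074999])

The setting is global, so you can restore the defaults afterwards with torch.set_printoptions(profile="default"). There is also torch.set_printoptions(profile="full"), which disables the "..." summarization (it sets the threshold to infinity) in case you want to see every element rather than more digits.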
