I recently looked into NumPy dtypes (https://stackoverflow.com/questions/56022497/numpy-pytorch-dtype-conversion-compatibility/56022918?noredirect=1#comment98695989_56022918), more precisely into how they are cast during expression evaluation.
When testing the same thing with PyTorch, I realized that the cast was not performed and that I got a runtime error instead. Here is how I tested it:
>>> import torch
>>> import numpy
>>> x = torch.zeros(4, dtype=torch.int32)
>>> y = torch.zeros(4, dtype=torch.float32)
>>> x + y
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
RuntimeError: expected type torch.FloatTensor but got torch.IntTensor
>>> x = numpy.zeros(4, dtype=numpy.int32)
>>> y = numpy.zeros(4, dtype=numpy.float32)
>>> x + y
array([0., 0., 0., 0.])
>>> (x + y).dtype
dtype('float64')
- PyTorch does not perform any implicit cast and expects both sides of the operation to have the same dtype. Is this statement right? (For now I cast manually, as in the sketch after these questions.)
- Is there any plan to follow NumPy's design, or will it stay like this?
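For reference, here is a minimal sketch of the workaround I am currently using, assuming an explicit cast with Tensor.to() is the intended way to handle this (please correct me if there is a better idiom):

>>> import torch
>>> x = torch.zeros(4, dtype=torch.int32)
>>> y = torch.zeros(4, dtype=torch.float32)
>>> (x.to(y.dtype) + y).dtype  # cast the int tensor manually before adding
torch.float32

Note that this gives float32, whereas NumPy promotes the result of int32 + float32 to float64, so even with the manual cast the two libraries do not end up with the same dtype.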