"round" "_vml_cpu" not implemented for 'Char'

I am getting this error while using the round method. I printed the tensor's dtype to check, and it prints int8, as it should. However, the error says the tensor is of type Char.

The code throwing the exception:

 print(self.prob.mul(self.len).dtype)
 self.binary.data = self.prob.mul(self.len).round()

The exception:

torch.int8
Traceback (most recent call last):
  File "SCONNA.py", line 59, in <module>
    uconv2d = FSUConv2duGEMM(
  File "/home/sv/Desktop/Stochastic_Acc_Sim/utils/UnarySimLayers.py", line 552, in __init__
    self.weight.data = SourceGen(
  File "/home/sv/Desktop/Stochastic_Acc_Sim/utils/UnarySimUtils.py", line 504, in __init__
    self.binary.data = self.prob.mul(self.len).round()
RuntimeError: "round" "_vml_cpu" not implemented for 'Char'

Hi Sairam!

First, it doesn’t really make sense to “round” an integral type (as rounding
an integer by definition doesn’t change its value). So pytorch doesn’t
implement .round() for integers.

Second, somewhere under the hood, pytorch uses “Char” as a synonym
for int8 – perhaps a bit confusing, but they mean the same thing.

(If you do the same experiment with dtypes int16, int32, and int64,
you will get analogous error messages naming the types “Short,” “Int,” and
“Long,” respectively.)
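
If your tensor really is int8, .round() is a no-op and you can simply drop
the call. If you actually meant to round a floating-point product, one
workaround (just a sketch, reusing the names from your code, which I’m
assuming are tensors) is to do the multiplication in float and cast back
to int8 afterwards:

 # hypothetical fix, mirroring the names in the question
 prod = self.prob.float().mul(self.len)            # do the arithmetic in float32
 self.binary.data = prod.round().to(torch.int8)    # round, then cast back to int8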

Best.

K. Frank

Thanks @KFrank, it makes sense. However, even for float16 it gives a similar error.

RuntimeError: "round" "_vml_cpu" not implemented for 'Half'

Is there a place where the supported dtypes for the round method are listed?

Hi Sairam!

Well, that’s unexpected and somewhat annoying. I assume that it’s
just an oversight. (I can reproduce this on versions 1.10 and 1.11.)

Interestingly, it does work on the gpu:

>>> import torch
>>> torch.__version__
'1.10.0'
>>> t = torch.ones (5, dtype = torch.float16)
>>> t
tensor([1., 1., 1., 1., 1.], dtype=torch.float16)
>>> t.round()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
RuntimeError: "round" "_vml_cpu" not implemented for 'Half'
>>> t.cuda().round()
tensor([1., 1., 1., 1., 1.], device='cuda:0', dtype=torch.float16)

I’m not aware of any explicit list. (I would have expected round() to work
for any floating-point type, but, as you have shown, it doesn’t.)
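
On the cpu, one workaround (just a sketch, continuing the example above)
is to upcast to float32 for the rounding and cast back to float16 afterwards:

>>> t.float().round().half()
tensor([1., 1., 1., 1., 1.], dtype=torch.float16)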

Best.

K. Frank


Related upstream issue: RuntimeError: “log2” “_vml_cpu” not implemented for ‘Half’ · Issue #54774 · pytorch/pytorch
