RuntimeError: _thnn_conv2d_forward not supported on CPUType for Byte

RuntimeError                              Traceback (most recent call last)
<ipython-input-100-118ab5f812d5> in <module>()
     29                 optimizer.zero_grad()
     30                 print(anchor)
---> 31                 a_output = model(anchor)
     32                 p_output = model(positive)
     33                 n_output = model(negative)

5 frames
/usr/local/lib/python3.6/dist-packages/torch/nn/modules/conv.py in forward(self, input)
    336                             _pair(0), self.dilation, self.groups)
    337         return F.conv2d(input, self.weight, self.bias, self.stride,
--> 338                         self.padding, self.dilation, self.groups)
    339 
    340 

When I call the model, this error occurs. Does anyone know how to solve this?


It seems you are trying to pass an input as a ByteTensor (uint8), which is not supported.
Could you call input = input.float() before passing it to the model?
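For example, a minimal sketch (the tiny model and input shape here are just placeholders for illustration, not your actual network):

import torch
import torch.nn as nn

model = nn.Conv2d(3, 8, kernel_size=3)  # stand-in for your model
anchor = torch.randint(0, 256, (4, 3, 64, 64), dtype=torch.uint8)  # uint8 image batch

# convert to float32 (and optionally scale to [0, 1]) before the forward pass
anchor = anchor.float() / 255.0
a_output = model(anchor)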


The problem is solved. Many thanks!


We are getting the same error when trying to adapt our repo (https://github.com/ultralytics/yolov3) for FP16 inference. We attempted model.half() and input.half(), which produce the following error in PyTorch 1.1. Is there a recommended way to do object detection at FP16 inference?

Ironically, training works great with FP16 using Nvidia Apex, which is enabled by default in the repo, but only for training. Thank you!

  File "/System/Volumes/Data/Users/glennjocher/PycharmProjects/yolov3/detect.py", line 145, in <module>
    output=opt.output)
  File "/System/Volumes/Data/Users/glennjocher/PycharmProjects/yolov3/detect.py", line 73, in detect
    pred, _ = model(img)
  File "/Users/glennjocher/anaconda/envs/yolov3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 493, in __call__
    result = self.forward(*input, **kwargs)
  File "/System/Volumes/Data/Users/glennjocher/PycharmProjects/yolov3/models.py", line 181, in forward
    x = module(x)
  File "/Users/glennjocher/anaconda/envs/yolov3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 493, in __call__
    result = self.forward(*input, **kwargs)
  File "/Users/glennjocher/anaconda/envs/yolov3/lib/python3.7/site-packages/torch/nn/modules/container.py", line 92, in forward
    input = module(input)
  File "/Users/glennjocher/anaconda/envs/yolov3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 493, in __call__
    result = self.forward(*input, **kwargs)
  File "/Users/glennjocher/anaconda/envs/yolov3/lib/python3.7/site-packages/torch/nn/modules/conv.py", line 338, in forward
    self.padding, self.dilation, self.groups)
RuntimeError: _thnn_conv2d_forward not supported on CPUType for Half

Some FP16 operations are not supported for CPU tensors, so you could try to run your code on the GPU instead.
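A minimal sketch of that approach, assuming a CUDA device is available (the toy model and input size below are placeholders, not the YOLOv3 code):

import torch
import torch.nn as nn

model = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU())  # placeholder network
img = torch.rand(1, 3, 416, 416)  # placeholder input

if torch.cuda.is_available():
    # FP16 conv kernels are available on the CUDA backend
    model = model.half().cuda()
    img = img.half().cuda()
else:
    # on the CPU, fall back to full precision
    model = model.float()
    img = img.float()

with torch.no_grad():
    pred = model(img)
print(pred.dtype, pred.device)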


How did you solve it? Can you tell me?

You can convert your trained model from half precision to full precision as shown below:

learn = learn.to_fp32()
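(to_fp32() here is the fastai Learner API; assuming you are working with a plain PyTorch module instead, the equivalent is simply casting the model back to float32:)

model = model.float()  # casts all parameters and buffers back to FP32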


It works! Thanks! I also changed the runtime type from CPU to TPU.

So there's no workaround to get all FP16 operations working on the CPU as well? Will this issue be addressed in the future?
