It seems you are passing the input as a ByteTensor (uint8), which is not supported.
Could you call input = input.float() before passing it to the model?
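In case it helps, here is a minimal sketch of that conversion; the tensor shape and the [0, 1] scaling are illustrative assumptions, not taken from the original code.

```python
import torch

# Hypothetical example: images decoded from disk usually arrive as uint8 (a ByteTensor).
img = torch.randint(0, 256, (1, 3, 416, 416), dtype=torch.uint8)

# Cast to float32 (and, if the model expects it, scale to [0, 1]) before the forward pass.
input = img.float() / 255.0
# pred = model(input)   # model is assumed to be a float32 nn.Module
```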
We are getting this same error when trying to adapt our repo (https://github.com/ultralytics/yolov3) for FP16 inference. We attempted model.half() and input.half(), which produces the following error in PyTorch 1.1 (a minimal sketch of the attempt follows the traceback). Is there a recommended way to run object detection inference at FP16?
Ironically, training works great at FP16 with NVIDIA Apex, which is enabled by default in the repo, but only for training. Thank you!
File "/System/Volumes/Data/Users/glennjocher/PycharmProjects/yolov3/detect.py", line 145, in <module>
output=opt.output)
File "/System/Volumes/Data/Users/glennjocher/PycharmProjects/yolov3/detect.py", line 73, in detect
pred, _ = model(img)
File "/Users/glennjocher/anaconda/envs/yolov3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 493, in __call__
result = self.forward(*input, **kwargs)
File "/System/Volumes/Data/Users/glennjocher/PycharmProjects/yolov3/models.py", line 181, in forward
x = module(x)
File "/Users/glennjocher/anaconda/envs/yolov3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 493, in __call__
result = self.forward(*input, **kwargs)
File "/Users/glennjocher/anaconda/envs/yolov3/lib/python3.7/site-packages/torch/nn/modules/container.py", line 92, in forward
input = module(input)
File "/Users/glennjocher/anaconda/envs/yolov3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 493, in __call__
result = self.forward(*input, **kwargs)
File "/Users/glennjocher/anaconda/envs/yolov3/lib/python3.7/site-packages/torch/nn/modules/conv.py", line 338, in forward
self.padding, self.dilation, self.groups)
RuntimeError: _thnn_conv2d_forward not supported on CPUType for Half
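For context, the last line of the traceback ("_thnn_conv2d_forward not supported on CPUType for Half") indicates the FP16 convolution kernel is missing on the CPU backend, so the half-precision forward pass would typically need to run on a CUDA device. Below is a minimal sketch of the attempted setup under that assumption; the Conv2d layer and random batch are stand-ins for the repo's Darknet model and preprocessed image, not the actual detect.py code.

```python
import torch
import torch.nn as nn

# Minimal sketch, assuming a CUDA device is available: the traceback above shows
# the CPU backend rejecting Half, so both the model and the input are moved to
# the GPU before casting to FP16.
device = torch.device('cuda')

model = nn.Conv2d(3, 16, 3, padding=1)    # stand-in for the Darknet model in models.py
model = model.to(device).half().eval()    # cast weights/buffers to FP16

img = torch.rand(1, 3, 416, 416, device=device).half()  # stand-in for the image batch

with torch.no_grad():
    pred = model(img)                     # FP16 forward pass on the GPU

pred = pred.float()                       # back to FP32 for CPU-side post-processing (e.g. NMS)
```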