PyTorch float16 model fails to run

return torch._C._nn.upsample_bilinear2d(input, output_size, align_corners, scale_factors)
RuntimeError: “compute_indices_weights_linear” not implemented for ‘Half’

Does PyTorch 1.9.1 not support float16?

You can use float16 on a GPU, but not all float16 operations are supported on the CPU, since the performance wouldn’t benefit from it there (if I’m not mistaken).
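A minimal sketch of this distinction: run the bilinear upsampling in float16 when a GPU is available, and fall back to float32 on the CPU. The tensor shapes here are illustrative.

```python
import torch
import torch.nn.functional as F

x = torch.randn(1, 3, 8, 8)

if torch.cuda.is_available():
    # float16 upsampling is implemented for CUDA tensors
    out = F.interpolate(x.half().cuda(), scale_factor=2,
                        mode="bilinear", align_corners=False)
else:
    # on the CPU (at least in PyTorch 1.9.1), half-precision
    # upsampling raises "not implemented for 'Half'", so keep float32
    out = F.interpolate(x.float(), scale_factor=2,
                        mode="bilinear", align_corners=False)

print(out.shape)  # torch.Size([1, 3, 16, 16])
```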

I tried running it on the GPU and it succeeded. Thank you very much.

However, I still have a problem with the int8 model on both CPU and GPU:

“RuntimeError: “upsample_bilinear2d_out_frame” not implemented for ‘Char’ ”

int8 is not implemented in the native operations, so you would need to use the quantization utilities for it.

Could you explain it more specifically, please? I haven’t understood the quantization utilities.

I would probably start with the docs and then take a look at this tutorial and the coverage for more information.


OK, I see where the problem was. I should add quant(x) at the very beginning of forward.
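For reference, a minimal sketch of what that looks like with post-training static quantization: a QuantStub at the start of forward converts the float input to a quantized tensor, so the int8 kernels (including quantized bilinear upsampling) are dispatched instead of the unimplemented native ‘Char’ ops. The model architecture and shapes are illustrative, not the original poster’s model.

```python
import torch
import torch.nn as nn

class Model(nn.Module):
    def __init__(self):
        super().__init__()
        # QuantStub quantizes the float input; DeQuantStub converts back
        self.quant = torch.quantization.QuantStub()
        self.conv = nn.Conv2d(3, 3, 3, padding=1)
        self.up = nn.Upsample(scale_factor=2, mode="bilinear",
                              align_corners=False)
        self.dequant = torch.quantization.DeQuantStub()

    def forward(self, x):
        x = self.quant(x)      # quantize at the very start of forward
        x = self.conv(x)
        x = self.up(x)         # now runs the quantized upsampling kernel
        return self.dequant(x)

# typical post-training static quantization workflow (sketch,
# assuming an x86 machine with the fbgemm backend available):
model = Model().eval()
model.qconfig = torch.quantization.get_default_qconfig("fbgemm")
prepared = torch.quantization.prepare(model)
prepared(torch.randn(1, 3, 8, 8))           # calibration pass
quantized = torch.quantization.convert(prepared)
out = quantized(torch.randn(1, 3, 8, 8))    # int8 inference on CPU
```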