When running my model on a server I get a device-mismatch error.
I use
`F.interpolate(g, size=input_size[2:], mode=upsample_mode)`
in the forward method
and get the following error when running the script on the server:
File "/usr/local/lib/python3.10/dist-packages/torch/_decomp/decompositions.py", line 2821, in upsample_bilinear2d
    x_ceil = torch.ceil(x).clamp(max=in_h - 1).to(torch.int64)
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument max in method wrapper_CUDA_clamp_Tensor)
When running the same model locally on a GPU, this error does not appear.
The error also does not appear on the server when I specify the size directly, e.g. `F.interpolate(g, size=(64, 64), mode=upsample_mode)`, but I need the size to be computed, not hard-coded.
I have also seen a suggestion in the following thread: `F.interpolate` uses incorrect size when `align_corners=True` · Issue #76487 · pytorch/pytorch · GitHub
Do you know any reason for this problem or any workaround for it? I do not think it is related to moving the model or a tensor to CUDA.
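One workaround worth trying, assuming `input_size` is (or was derived from) a tensor rather than a plain shape tuple: cast the spatial dimensions to Python ints before passing them to `F.interpolate`. That way the size arguments carry no device, so the bilinear decomposition cannot mix CPU and CUDA tensors. This is a minimal sketch; `upsample_to` is a hypothetical helper, not part of the original model.

```python
import torch
import torch.nn.functional as F

def upsample_to(g: torch.Tensor, input_size, upsample_mode: str = "bilinear") -> torch.Tensor:
    # Convert each spatial dimension to a plain Python int so that
    # F.interpolate receives device-agnostic sizes. This matters when
    # input_size comes from a tensor (e.g. torch.tensor(x.shape)),
    # whose elements would otherwise be 0-dim tensors on some device.
    target = tuple(int(s) for s in input_size[2:])
    return F.interpolate(g, size=target, mode=upsample_mode)

g = torch.randn(1, 3, 8, 8)
# input_size given as a tensor to mimic the failing case
out = upsample_to(g, torch.tensor([1, 3, 32, 32]))
print(out.shape)  # torch.Size([1, 3, 32, 32])
```

If the error only shows up on the server under `torch.compile` or export (the traceback goes through `torch/_decomp/decompositions.py`), the int cast also keeps the size out of the traced graph, which may be exactly what avoids the decomposition path that fails.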