Hi, I trained my UNet model on 320x320 images, and now I want to run inference (predict on a new dataset) with images of different sizes. I heard this is possible, but I get a dimension-mismatch error.
I have already put the model in model.eval() mode.
error:
torch.Size([1, 3, 813, 1024])
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
in
21 img.cuda()
22 print(img.shape)
---> 23 out = model(img)
24 temp = torch.Tensor.cpu(x).detach().np()
25 print(type(temp), temp.shape)
C:\Anaconda3\envs\torchgpu\lib\site-packages\torch\nn\modules\module.py in __call__(self, *input, **kwargs)
491 result = self._slow_forward(*input, **kwargs)
492 else:
--> 493 result = self.forward(*input, **kwargs)
494 for hook in self._forward_hooks.values():
495 hook_result = hook(self, input, result)
in forward(self, inputs)
48
49 center = self.center(maxpool4) # 256*32*32
---> 50 up4 = self.up_concat4(center,conv4) # 128*64*64
51 up3 = self.up_concat3(up4,conv3) # 64*128*128
52 up2 = self.up_concat2(up3,conv2) # 32*256*256
C:\Anaconda3\envs\torchgpu\lib\site-packages\torch\nn\modules\module.py in __call__(self, *input, **kwargs)
491 result = self._slow_forward(*input, **kwargs)
492 else:
--> 493 result = self.forward(*input, **kwargs)
494 for hook in self._forward_hooks.values():
495 hook_result = hook(self, input, result)
in forward(self, high_feature, *low_feature)
53 outputs0 = self.up(high_feature)
54 for feature in low_feature:
---> 55 outputs0 = torch.cat([outputs0, feature], 1)
56 return self.conv(outputs0)
RuntimeError: invalid argument 0: Sizes of tensors must match except in dimension 1. Got 100 and 101 in dimension 2 at ..\aten\src\TH/generic/THTensor.cpp:711
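From what I can tell, the mismatch comes from the input height 813 not being divisible by 16 (four 2x max-pools): the encoder floor-divides the size on the way down, while the decoder doubles it on the way up, so the skip connection no longer lines up. A quick sketch (assuming four 2x pooling stages, as in my UNet) reproduces the exact 100 vs 101 from the error:

```python
# Trace the spatial size through four 2x max-pool stages for H = 813.
h = 813
sizes = [h]
for _ in range(4):      # four encoder levels
    h = h // 2          # floor division, like nn.MaxPool2d(2)
    sizes.append(h)

# sizes == [813, 406, 203, 101, 50]
up = sizes[-1] * 2      # decoder upsamples the center: 50 -> 100
skip = sizes[-2]        # matching encoder feature map:  101
print(up, skip)         # 100 vs 101 -> torch.cat fails in dimension 2
```

So the 320x320 training images worked only because 320 is a multiple of 16, while 813 is not.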
Any help would be appreciated, thanks!
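One workaround I am considering (just a sketch, assuming the network only needs each spatial dimension to be a multiple of 16) is to pad the input before inference and crop the prediction back afterwards:

```python
import torch
import torch.nn.functional as F

def pad_to_multiple(img, multiple=16):
    """Pad H and W of an (N, C, H, W) tensor up to the next multiple,
    using reflection padding on the bottom/right edges."""
    h, w = img.shape[-2:]
    pad_h = (multiple - h % multiple) % multiple
    pad_w = (multiple - w % multiple) % multiple
    # F.pad takes pads as (left, right, top, bottom) for 4D input
    padded = F.pad(img, (0, pad_w, 0, pad_h), mode="reflect")
    return padded, (h, w)

img = torch.randn(1, 3, 813, 1024)
padded, (h, w) = pad_to_multiple(img)
print(padded.shape)            # torch.Size([1, 3, 816, 1024])
# After out = model(padded), crop back: out[..., :h, :w]
```

Would padding like this be the right approach, or is there a better way?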