My input A is C×H×W, and I want to use torch.nn.Upsample to resize it.
I know the format should be Batch×C×H×W, so I do:
self.upsample = torch.nn.Upsample(size=[128, 128], mode='bilinear')
A = torch.unsqueeze(A, 0)  # to get 1×C×H×W
A = self.upsample(A)
However, it reports:
raise NotImplementedError("Got 3D input, but bilinear mode needs 4D input")
NotImplementedError: Got 3D input, but bilinear mode needs 4D input
What is wrong with my code? Can someone help me? Thanks!
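For reference, here is a minimal sketch of the intended flow that runs without the error. One common cause of this NotImplementedError is that `torch.unsqueeze` does not modify the tensor in place, so if its result is not assigned back (or if `upsample` is called before the unsqueeze), the module still receives a 3-D tensor. The input shape below is illustrative:

```python
import torch

# Assume A starts as a 3-D C×H×W tensor (shape here is illustrative).
A = torch.randn(3, 64, 64)

upsample = torch.nn.Upsample(size=(128, 128), mode='bilinear', align_corners=False)

A = torch.unsqueeze(A, 0)  # 1×C×H×W -- must be assigned back before calling upsample
out = upsample(A)
print(out.shape)           # expect a 4-D tensor of shape 1×3×128×128
```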
The tensors within the batch that I want to resize have different sizes, so I have to resize each one separately; I cannot resize them as one batch directly. Is there any way to solve this?
I’m not sure I understand this issue.
Your tensors have a different spatial shape for every batch?
This should not be a problem as upsample will reshape it to [batch, channels, 128, 128].
For example, the input is 16×3×128×128 (batch size = 16). From each sample in the batch I crop one patch of a different size; that is my patch_up_A_temp. I want to resize all the patches to the same size (64, 64) so that, for this batch, I finally get an output of 16×3×64×64.
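One way to handle per-sample patch sizes is to interpolate each patch individually and concatenate the results back into a batch. A sketch under stated assumptions: the `crop_sizes` list and the top-left `[:s, :s]` crop are purely illustrative stand-ins for however the patches are actually produced:

```python
import torch
import torch.nn.functional as F

batch = torch.randn(16, 3, 128, 128)           # 16×3×128×128 input, as in the example
crop_sizes = [32 + 4 * i for i in range(16)]   # hypothetical: a different patch size per sample

patches = []
for img, s in zip(batch, crop_sizes):
    patch = img[:, :s, :s]                     # patch_up_A_temp analogue: a C×s×s crop
    patch = F.interpolate(patch.unsqueeze(0),  # add batch dim: 1×C×s×s
                          size=(64, 64), mode='bilinear', align_corners=False)
    patches.append(patch)

out = torch.cat(patches, dim=0)                # stack resized patches: 16×3×64×64
print(out.shape)
```

`F.interpolate` is the functional form of `nn.Upsample`, which is convenient here since each patch needs its own forward call.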