I want to resize my image tensor using this function:
torch.nn.functional.upsample(input, size=None, scale_factor=None, mode='nearest', align_corners=None)
and my code is as follows:
image = image.view(1, 3, h, w)
resizedimg = F.upsample(image, size=(nw, nh), mode='bilinear')
When I print resizedimg.shape, it shows that resizedimg has a single channel, even though I passed in a 3-channel tensor. I don't understand why the output becomes single channel.
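For reference, here is a minimal self-contained version of what I run; the concrete values of h, w, nw, nh below are placeholders I picked just for this example:

import torch
import torch.nn.functional as F

# placeholder sizes, only for this example
h, w = 64, 128
nw, nh = 256, 320

# dummy 3-channel image, reshaped to NCHW as in my code above
image = torch.rand(3, h, w)
image = image.view(1, 3, h, w)

resizedimg = F.upsample(image, size=(nw, nh), mode='bilinear')
print(resizedimg.shape)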
I also read in the documentation that size can be a Tuple[int, int, int], but when I set size=(3, nw, nh), it complains that size can only have 2 dimensions.
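The failing variant, continuing from the snippet above (same image, nw, nh), is:

# this is the call that gets rejected with the message that size may only have 2 dimensions
resizedimg = F.upsample(image, size=(3, nw, nh), mode='bilinear')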
Could anybody help me?