# ValueError: pic should be 2/3 dimensional. Got 4 dimensions

I want to convert a tensor to an image. The tensor comes from the network and its size is [2, 21, 400, 400]. I then use transforms.ToPILImage to convert it, but I get this error: "raise ValueError('pic should be 2/3 dimensional. Got {} dimensions.'.format(pic.ndimension()))
ValueError: pic should be 2/3 dimensional. Got 4 dimensions."
I've also tried reducing the dimensions with squeeze, but it still doesn't work.

What do your tensor dimensions stand for? I presume the 2nd and 3rd are width and height? Have you read the docs for ToPILImage? Everything you need to know is there.

As the error message states, your tensor has to have 2 (H x W) or 3 (C x H x W) dimensions: 2 if the image is greyscale, and 3 if the image is in color. The additional dimension (which probably stands for the batch size?) has to be removed, e.g. you can transform the tensors in a loop.

Thanks.
My tensor is [2, 21, 400, 400]: 2 is the batch size, 21 is the number of output channels of the network, and 400 x 400 is the image size.

Can you post a minimal code example of what you have tried?

```python
def tensor_to_img(tensor):
    # image = tensor.cpu().clone()
    image = torch.squeeze(tensor)  # only removes dimensions of size 1
    image = transforms.ToPILImage()(image)
    image.show()

for data in test_loader:
    # data = data.unsqueeze(0)
    output = model(data)
    print(output.shape[0], output.shape[1], output.shape[2], output.shape[3])
    print(output)
    output1 = output.squeeze(0)  # batch dim is 2, so squeeze(0) has no effect
    tensor_to_img(output1)
```

I mean that I want to reduce the tensor to three dimensions.

If you have time, or if you know how to solve the problem, please help.
Thanks a lot!

You have 21 channels instead of the 1 or 3 required (greyscale/RGB). Are you using a segmentation network?
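If it is a segmentation network, the 21 channels hold per-class scores, so the usual first step is an argmax over the channel dimension; a minimal sketch with a random stand-in tensor:

```python
import torch

# Hypothetical stand-in for the model output: batch 2, 21 class-score channels
output = torch.rand(2, 21, 400, 400)

# Collapse the 21 score channels into one class-index map per image
mask = output.argmax(dim=1)  # shape: (2, 400, 400), values in 0..20
```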

I use DeepLab v3+.
I just want to see the result of the network.

```python
def tensor_to_img(tensor):
    # image = tensor.cpu().clone()
    image = torch.squeeze(tensor)  # only removes dimensions of size 1
    image = transforms.ToPILImage()(image)
    image.show()

for data in test_loader:
    # data = data.unsqueeze(0)
    output = model(data)
    print(output.shape[0], output.shape[1], output.shape[2], output.shape[3])
    print(output)
    output1 = output.squeeze(0)  # batch dim is 2, so squeeze(0) has no effect
    tensor_to_img(output1)
```

You can look at the lines "output = model(data)" and "print(output.shape[0], output.shape[1], output.shape[2], output.shape[3])".

And I have also tried adding another Conv2d to reduce the output to three channels, but it doesn't work.

```python
for data in test_loader:
    # data = data.unsqueeze(0)
    output = model(data)
    x = nn.Conv2d(21, 3, 1)  # 1x1 conv: 21 channels -> 3 channels
    output = x(output)
    print(output.shape[0], output.shape[1], output.shape[2], output.shape[3])
    print(output)
    # output1 = output.reshape(42, 400, 400)
    # print(output1.shape[0], output1.shape[1], output1.shape[2])
    tensor_to_img(output)
```

The error is the same as before: "raise ValueError('pic should be 2/3 dimensional. Got {} dimensions.'.format(pic.ndimension()))
ValueError: pic should be 2/3 dimensional. Got 4 dimensions."

Look at this. It's not that simple to transform the output to an RGB image; you have to do some transformation first.

Thanks! I will try it!
Thanks again.