Too many values to unpack from tensor

I got the shape of the image like this:

torch.Size([1, 1, 3, 319, 256])

Due to that, I got:

 B, C, H, W = x.shape
ValueError: too many values to unpack (expected 4)

My code is:

inference_transform = T.Compose(
    [
        T.Resize(256),
        T.ToTensor(),
        T.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),  # Imagenet
    ]
)
img = load_img("127535.jpg")        
img = inference_transform(img.convert("RGB"))    
img = img.to(DEVICE)         
img = img.reshape((1,img.shape[0], img.shape[1], img.shape[2]))   
img = img.unsqueeze(0)    
print(img.shape)    
output = model(img)

Where should I change these dimensions? Or is there anything I can add to this code to make the dimensions right?

Let’s go through your code.

img = inference_transform(img.convert("RGB")) 

So, you have loaded and transformed an image. Your image should now have a shape of (channel, height, width).

Now, let’s see here,

img = img.reshape((1,img.shape[0], img.shape[1], img.shape[2])) 

When you perform the reshape operation img.reshape((1, img.shape[0], img.shape[1], img.shape[2])), the shape becomes (1, channel, height, width), where the 1 at the front acts as the batch dimension. Your image is now a 4D tensor of shape (batch, channel, height, width) and is ready for your model, assuming the model works with images of shape (batch, channel, height, width).

However, here’s a problem,

img = img.unsqueeze(0)

When you perform img.unsqueeze(0), it adds another dimension at the 0th index. Now your image has become a 5D tensor with shape (batch, number_of_frames, channel, height, width).

So, when you do this,

B, C, H, W = x.shape

You are trying to unpack 5 values into 4 placeholders (remember, x.shape has 5 values now!).
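Here is a minimal sketch that reproduces those shapes with a dummy tensor (the 319x256 size is just taken from your printout):

import torch

# dummy image tensor shaped like the transform output: (channel, height, width)
x = torch.randn(3, 319, 256)

x = x.reshape((1, x.shape[0], x.shape[1], x.shape[2]))  # torch.Size([1, 3, 319, 256])
x = x.unsqueeze(0)                                      # torch.Size([1, 1, 3, 319, 256])

B, C, H, W = x.shape  # ValueError: too many values to unpack (expected 4)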

So, just remove either the reshape line or the unsqueeze line and you should be OK.

img = img.reshape((1,img.shape[0], img.shape[1], img.shape[2]))   #<------ this
img = img.unsqueeze(0) #<------ or this

I have a crush on the unsqueeze operation (haha), so I would suggest commenting out the reshape line.
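For reference, a sketch of the fixed preprocessing, keeping unsqueeze and dropping the reshape (load_img, DEVICE, and model are the ones from your snippet):

img = load_img("127535.jpg")
img = inference_transform(img.convert("RGB"))  # (3, H, W)
img = img.to(DEVICE)
img = img.unsqueeze(0)                         # (1, 3, H, W) -- add the batch dimension once
print(img.shape)
output = model(img)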

Cheers!

Thanks for replying… I left just one of these lines:

img = img.reshape((1,img.shape[0], img.shape[1], img.shape[2]))   #<------ this
img = img.unsqueeze(0) #<------ or this

but got another error:

output = model(img)
  File "/home/user/.local/lib/python3.6/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/user/.local/lib/python3.6/site-packages/torch/nn/modules/container.py", line 119, in forward
    input = module(input)
  File "/home/user/.local/lib/python3.6/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/user/.local/lib/python3.6/site-packages/torch/nn/modules/dropout.py", line 58, in forward
    return F.dropout(input, self.p, self.training, self.inplace)
  File "/home/user/.local/lib/python3.6/site-packages/torch/nn/functional.py", line 1076, in dropout
    return _VF.dropout_(input, p, training) if inplace else _VF.dropout(input, p, training)
TypeError: dropout(): argument 'input' (position 1) must be Tensor, not tuple

This error comes from the model code. The input to your Dropout layer is not a tensor, but a list or tuple. I can't suggest anything without the model code, but I would suggest checking the input to the Dropout layer.
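For illustration, here is a hypothetical minimal example (not your model) where a module that returns a tuple sits right before a Dropout inside nn.Sequential and raises exactly this error:

import torch
import torch.nn as nn

class ReturnsTuple(nn.Module):
    # hypothetical module that returns more than one value
    def forward(self, x):
        return x, x.mean()

net = nn.Sequential(ReturnsTuple(), nn.Dropout(p=0.5))
net(torch.randn(1, 3, 224, 224))
# TypeError: dropout(): argument 'input' (position 1) must be Tensor, not tuple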

Thanks… I think the problem happens when I try to delete the head layer from the model using this line:

model = torch.nn.Sequential(*(list(model.children())[:-1]))

When I comment out this line, I get the output tensor for the image without a problem, but I need to remove the classification layer because I need the features before classification… What should I do, please?

Can you please print the shape of the output feature from the model?

Is there any list operation in the forward method? Or are you returning more than one value from any method? I am not sure that removing the head is creating the issue.

I think your previous (deleted) approach is ok. Can you try this?

import torch.nn as nn

class Identity(nn.Module):
    def __init__(self):
        super(Identity, self).__init__()

    def forward(self, x):
        return x   # pass the input through unchanged

model.head = Identity()

See this answer.
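If your model exposes the classifier as model.head, usage would look roughly like the sketch below; adjust the attribute name (head, fc, classifier, ...) to whatever your model actually uses:

import torch

model.head = Identity()   # replace the classification layer with a pass-through
model.eval()

with torch.no_grad():
    features = model(img)  # features from just before the (removed) classifier
print(features.shape)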

Thanks, but one last question about this:

inference_transform = T.Compose(
    [
        T.Resize(256),
        T.ToTensor(),
        T.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),  # Imagenet
    ]
)

I got a problem with the size of images, as I got:

>  return torch._C._nn.gelu(input)
> RuntimeError: CUDA out of memory. Tried to allocate 2.00 MiB (GPU 0; 5.80 GiB total capacity; 3.83 GiB already allocated; 16.50 MiB free; 4.01 GiB reserved in total by PyTorch)

When I changed

T.Resize(256)

to

T.Resize((224,224))

another image has a problem. Is there any setting for all images?

In both cases, all images are resized according to the size you provide, but the two forms behave differently: Resize((224, 224)) forces every image to a fixed 224x224, while Resize(256) only matches the shorter side to 256 and keeps the aspect ratio, so the overall size still varies between images. What issue are you facing with the resize operation?
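A quick way to see the difference between the two forms, using a dummy PIL image (sizes chosen to match your printout):

from PIL import Image
import numpy as np
import torchvision.transforms as T

img = Image.fromarray(np.zeros((638, 512, 3), dtype=np.uint8))  # 512x638 dummy image

print(T.Compose([T.Resize(256), T.ToTensor()])(img).shape)         # torch.Size([3, 319, 256]) -- shorter side matched to 256
print(T.Compose([T.Resize((224, 224)), T.ToTensor()])(img).shape)  # torch.Size([3, 224, 224]) -- fixed output size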

This

>  return torch._C._nn.gelu(input)
> RuntimeError: CUDA out of memory. Tried to allocate 2.00 MiB (GPU 0; 5.80 GiB total capacity; 3.83 GiB already allocated; 16.50 MiB free; 4.01 GiB reserved in total by PyTorch)

I put a counter on the images, so when I changed the resize value it kept going but stopped on some images.

Can you please post your inference code or snippet? The resize operation shouldn't cause a CUDA OOM error. Also, can you try a smaller image size, e.g., 64, and check whether you still get a CUDA OOM error?
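Something like this sketch could be used for that check (it reuses load_img, DEVICE, and model from your earlier snippet, and wraps the forward pass in torch.no_grad() so activations are not kept around for backprop):

import torch

small_transform = T.Compose(
    [
        T.Resize((64, 64)),
        T.ToTensor(),
        T.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
    ]
)

img = load_img("127535.jpg")
img = small_transform(img.convert("RGB")).unsqueeze(0).to(DEVICE)

with torch.no_grad():
    output = model(img)
print(output.shape)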