Permute error: number of dims don't match in permute

While running inference on my model, I ran into the following error:

    RuntimeError: number of dims don't match in permute
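For context, this error fires whenever the number of indices given to permute differs from the tensor's number of dimensions; a minimal example (not my actual code) that reproduces it:

    import torch

    t = torch.rand(224, 224)  # only 2 dims
    t.permute(1, 2, 0)        # raises: number of dims don't match in permute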

This is what I was doing:

    import cv2
    import numpy as np
    import torch
    import torchvision.transforms as T
    from PIL import Image
    from matplotlib import pyplot as plt

    val_transforms = T.Compose([
        T.Resize((256, 256)),
        T.CenterCrop((224, 224)),
        T.ToTensor(),
        T.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
    ])
    img_name = cfg.TEST.IMAGE                    # path to the test image from my config
    img = cv2.imread(img_name)
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)   # OpenCV loads BGR; convert to RGB
    if isinstance(img, np.ndarray):
        img = Image.fromarray(img)               # the transforms expect a PIL image

    plt.imshow(img)
    img = val_transforms(img)

The shape of img is [3, 224, 224].
Next, I passed it to the model as follows:

    model = Model()  # my model: ViT small, patch 16, input size 224
    with torch.no_grad():
        img = img.to(device).squeeze()
        probs = model(img)

    # ... apply softmax on probs to get the result ...
    ps = ps.cpu().data.numpy().squeeze()  # ps is the output after applying softmax on probs
    image = image.permute(1, 2, 0)        # RuntimeError generated at this point
    mean = torch.FloatTensor([0.485, 0.456, 0.406])
    std = torch.FloatTensor([0.229, 0.224, 0.225])

    image = image * std + mean            # undo the Normalize transform
    img = np.clip(image, 0, 1)

    fig, (ax1, ax2) = plt.subplots(figsize=(8, 12), ncols=2)
    ax1.imshow(img)

At first, I thought this was happening because img had changed shape to [1, 3, 224, 224], so I called img.squeeze() before applying permute, but that didn't change the outcome.
I then changed my code as follows, and that resolved the issue.
The new code:

    img_name = cfg.TEST.IMAGE
    img = cv2.imread(img_name)
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
    if isinstance(img, np.ndarray):
        img = Image.fromarray(img)

    plt.imshow(img)
    img_1 = val_transforms(img)
    output_prob = []
    with torch.no_grad():
        prob = model(img_1.to(device).unsqueeze(0))  # add batch dim -> [1, 3, 224, 224]
    output_prob.append(prob)
    output_prob = torch.cat(output_prob, dim=0)
    ps = torch.softmax(output_prob, dim=1)  # functional form; nn.Softmax is a module that must be instantiated first

Any ideas why the second version doesn't raise the permute dimension mismatch error? As far as I can tell, both versions are the same.

Can you print the shape of image? It's most likely 2D, and hence permuting over three dimensions isn't possible. If you're using greyscale images, you'll want to include a channel dimension, so the shape goes from [256, 256] -> [1, 256, 256].
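For example, with a hypothetical greyscale tensor:

    import torch

    gray = torch.rand(256, 256)         # 2D greyscale image; permute(1, 2, 0) would fail here
    gray = gray.unsqueeze(0)            # add a channel dim -> [1, 256, 256]
    print(gray.permute(1, 2, 0).shape)  # torch.Size([256, 256, 1])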

The shape of the image is 224x224x3, as shown in

T.CenterCrop((224, 224)),

The first version of the code gave the permute dimension mismatch error, but the second one didn't.

So if you print image.shape just before the permute line, what is the shape? Just to check that it's 3D and not somehow 2D.

Before that line, the shape is (1, 3, 224, 224). After applying squeeze, the shape is (3, 224, 224).
I will check and report back.
The image shape just before the permute:

    torch.Size([3, 224, 224])

I'm not sure what the error was, because I ran the same code again just now and it's working.

Applying torch.squeeze probably resolved the shape issue that was previously causing your code to fail.

The squeeze operation gets rid of all dimensions of size 1, e.g. a tensor of shape [1, 3, 224, 224] becomes [3, 224, 224] after .squeeze().
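A quick demo of squeeze and its inverse, unsqueeze:

    import torch

    x = torch.rand(1, 3, 224, 224)
    print(x.squeeze().shape)               # torch.Size([3, 224, 224])
    print(x.squeeze().unsqueeze(0).shape)  # torch.Size([1, 3, 224, 224])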

So you need to call .unsqueeze(0) before passing the img to your model (to add the batch dimension the model expects) and .squeeze() before permute (to drop it again for plotting).
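Putting the two together, the flow would look roughly like this (a sketch, assuming img is the [3, 224, 224] CPU tensor produced by val_transforms, and using torch.softmax for your softmax step):

    batch = img.to(device).unsqueeze(0)          # [3, 224, 224] -> [1, 3, 224, 224]
    with torch.no_grad():
        probs = model(batch)                     # [1, num_classes]
    ps = torch.softmax(probs, dim=1).squeeze()   # class probabilities, shape [num_classes]

    image = img.permute(1, 2, 0)                 # [3, 224, 224] -> [224, 224, 3] for imshow
    mean = torch.FloatTensor([0.485, 0.456, 0.406])
    std = torch.FloatTensor([0.229, 0.224, 0.225])
    image = (image * std + mean).clamp(0, 1)     # undo the Normalize transform
    plt.imshow(image.numpy())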