While running inference with my model, I ran into the following error:

RuntimeError: number of dims don't match in permute

This is what I was doing:
import cv2
import numpy as np
import torch
import torchvision.transforms as T
import matplotlib.pyplot as plt
from PIL import Image

val_transforms = T.Compose([
    T.Resize((256, 256)),
    T.CenterCrop((224, 224)),
    T.ToTensor(),
    T.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
])

img_name = cfg.TEST.IMAGE
img = cv2.imread(img_name)
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)  # OpenCV loads BGR; convert to RGB
if isinstance(img, np.ndarray):
    img = Image.fromarray(img)  # the transforms expect a PIL image
plt.imshow(img)
img = val_transforms(img)
The shape of img is now [3, 224, 224].
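ToTensor returns a channel-first (C, H, W) float tensor, so this is easy to confirm (a quick check, not part of my original script):

print(type(img), img.shape)  # <class 'torch.Tensor'> torch.Size([3, 224, 224])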
Next, I passed it to the model as follows:
model = Model()  # for my purposes, a ViT transformer (vit_small_patch16_224)

with torch.no_grad():
    img = img.to(device).squeeze()  # img is already [3, 224, 224], so squeeze() is a no-op here
    probs = model(img)

# ... softmax applied to probs to get ps (code omitted) ...
ps = ps.cpu().data.numpy().squeeze()  # ps is the output after applying softmax on probs

image = image.permute(1, 2, 0)  # RuntimeError raised at this point
mean = torch.FloatTensor([0.485, 0.456, 0.406])
std = torch.FloatTensor([0.229, 0.224, 0.225])
image = image * std + mean  # undo the Normalize transform for display
img = np.clip(image, 0, 1)

fig, (ax1, ax2) = plt.subplots(figsize=(8, 12), ncols=2)
ax1.imshow(img)
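For context on the failing line: permute needs exactly one index per tensor dimension, so three indices on a 4-D tensor reproduce this exact error. A minimal sketch on a dummy tensor (not my actual pipeline):

import torch

x = torch.randn(1, 3, 224, 224)     # 4-D: (batch, C, H, W)
try:
    x.permute(1, 2, 0)              # three indices on a 4-D tensor
except RuntimeError as e:
    print(e)                        # number of dims don't match in permute
y = x.squeeze(0).permute(1, 2, 0)   # drop the batch dim first
print(y.shape)                      # torch.Size([224, 224, 3])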
At first, I thought this was happening because img had changed shape to [1, 3, 224, 224], so I called img.squeeze() before the permute, but that didn't change the outcome.
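One detail that may be relevant: squeeze() returns a new tensor rather than modifying it in place, so calling it without assigning the result back leaves the shape untouched. A quick illustration:

import torch

x = torch.randn(1, 3, 224, 224)
x.squeeze()        # returns a new view; x itself is unchanged
print(x.shape)     # still torch.Size([1, 3, 224, 224])
x = x.squeeze(0)   # the result must be assigned back
print(x.shape)     # torch.Size([3, 224, 224])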
So I changed my code as follows, and that resolved the issue. The new code:
import torch.nn.functional as F

img_name = cfg.TEST.IMAGE
img = cv2.imread(img_name)
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
if isinstance(img, np.ndarray):
    img = Image.fromarray(img)
plt.imshow(img)
img_1 = val_transforms(img)

output_prob = []
with torch.no_grad():
    prob = model(img_1.to(device).unsqueeze(0))  # unsqueeze(0) adds a batch dim: [1, 3, 224, 224]
    output_prob.append(prob)
output_prob = torch.cat(output_prob, dim=0)
ps = F.softmax(output_prob, dim=1)  # was nn.Softmax(output_prob, dim=1), which doesn't run; nn.Softmax must be instantiated first
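Side note on that last line: nn.Softmax is a module that takes dim in its constructor and is then called on the tensor, while F.softmax is the direct functional form. Both give the same result on a hypothetical logits tensor:

import torch
import torch.nn as nn
import torch.nn.functional as F

logits = torch.randn(1, 1000)          # hypothetical (batch, num_classes) output
ps_functional = F.softmax(logits, dim=1)
ps_module = nn.Softmax(dim=1)(logits)  # construct the module, then apply it
print(torch.allclose(ps_functional, ps_module))  # True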
Any ideas why the second version doesn't raise the permute dimension-mismatch error? As far as I can tell, both versions do the same thing.