I got the shape of the image as

torch.Size([1, 1, 3, 319, 256])

and because of that, the model fails at

B, C, H, W = x.shape
ValueError: too many values to unpack (expected 4)
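
To double-check that the error really comes from the extra dimension, I reproduced it on its own (just a minimal repro, no model involved):

import torch

x = torch.zeros(1, 1, 3, 319, 256)  # the 5-D shape I am getting
print(len(x.shape))                 # 5 entries, but I unpack into 4 names
B, C, H, W = x.shape                # ValueError: too many values to unpack (expected 4)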
My code is:
import torchvision.transforms as T

inference_transform = T.Compose(
    [
        T.Resize(256),
        T.ToTensor(),
        T.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),  # ImageNet stats
    ]
)

img = load_img("127535.jpg")                   # my own image-loading helper
img = inference_transform(img.convert("RGB"))  # tensor of shape (3, 319, 256)
img = img.to(DEVICE)
img = img.reshape((1, img.shape[0], img.shape[1], img.shape[2]))  # (1, 3, 319, 256)
img = img.unsqueeze(0)                         # (1, 1, 3, 319, 256)
print(img.shape)
output = model(img)
Where should I change these dimensions? Or is there anything I can add to this code to make them right?
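
My guess is that I am adding the batch dimension twice, once with reshape and once with unsqueeze. Below is a minimal sketch of what I think the shaping step should look like, assuming the model expects a 4-D (B, C, H, W) input; I replaced my load_img helper with PIL's Image.open here just to keep the snippet self-contained, and model/DEVICE are the same as in my code above. Is dropping one of the two batch-dim additions the right fix?

import torch
import torchvision.transforms as T
from PIL import Image

DEVICE = "cuda" if torch.cuda.is_available() else "cpu"

inference_transform = T.Compose(
    [
        T.Resize(256),
        T.ToTensor(),
        T.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),  # ImageNet stats
    ]
)

img = Image.open("127535.jpg").convert("RGB")  # stand-in for my load_img helper
img = inference_transform(img)                 # (3, 319, 256)
img = img.to(DEVICE)
img = img.unsqueeze(0)                         # (1, 3, 319, 256): one batch dim only,
                                               # reshape line removed
print(img.shape)                               # torch.Size([1, 3, 319, 256])
output = model(img)                            # assuming model is already on DEVICE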