This is a continuation of my previous post.
What is the proper way to do TenCrop testing for ImageNet? I modified the val_loader in the ImageNet example here as follows:
val_loader = torch.utils.data.DataLoader(
    datasets.ImageFolder(valdir, transforms.Compose([
        transforms.TenCrop(224),
        # turn the tuple of 10 PIL crops into one (10, C, H, W) tensor
        transforms.Lambda(lambda crops: torch.stack(
            [normalize(transforms.ToTensor()(crop)) for crop in crops])),
    ])),
    batch_size=args.batch_size, shuffle=False,
    num_workers=args.workers, pin_memory=True)
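For reference, here is a minimal pure-Python sketch of the geometry TenCrop uses: five crops (four corners plus center), with the other five being horizontal flips of these. `ten_crop_boxes` is an illustrative helper I made up, not torchvision code, but it raises the same kind of error when the crop does not fit:

```python
def ten_crop_boxes(img_w, img_h, size):
    """Compute the 5 crop boxes (left, top, right, bottom) underlying
    TenCrop: four corners plus center. The remaining 5 crops are
    horizontal flips of these. A sketch, not torchvision's code."""
    if size > img_w or size > img_h:
        # torchvision reports input size as (height, width)
        raise ValueError(
            "Requested crop size (%d, %d) is bigger than input size (%d, %d)"
            % (size, size, img_h, img_w))
    boxes = [
        (0, 0, size, size),                          # top-left
        (img_w - size, 0, img_w, size),              # top-right
        (0, img_h - size, size, img_h),              # bottom-left
        (img_w - size, img_h - size, img_w, img_h),  # bottom-right
    ]
    left = (img_w - size) // 2
    top = (img_h - size) // 2
    boxes.append((left, top, left + size, top + size))  # center
    return boxes
```

This makes clear that every input image must be at least 224 pixels in both dimensions before TenCrop runs.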
I used rescaled images (256 x 256). So I modified the validate function in the above-mentioned script as follows:
target = target.cuda(async=True)
input_var = torch.autograd.Variable(input, volatile=True)
target_var = torch.autograd.Variable(target, volatile=True)
# fold the ten crops into the batch dimension, run the model,
# then average the outputs over the crops for each image
bs, ncrops, c, h, w = input_var.size()
temp_output = model(input_var.view(-1, c, h, w))
output = temp_output.view(bs, ncrops, -1).mean(1)
loss = criterion(output, target_var)
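The view/mean pattern above just averages the per-crop class scores back to one score vector per image. A pure-Python sketch of that averaging (`average_crop_outputs` is an illustrative helper, not part of the script):

```python
def average_crop_outputs(outputs, bs, ncrops):
    """outputs: flat list of length bs * ncrops, one score list per crop,
    in the same order produced by view(-1, c, h, w). Returns one averaged
    score list per image -- the same idea as
    temp_output.view(bs, ncrops, -1).mean(1)."""
    averaged = []
    for i in range(bs):
        crops = outputs[i * ncrops:(i + 1) * ncrops]
        # average each class score across this image's crops
        averaged.append([sum(col) / ncrops for col in zip(*crops)])
    return averaged
```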
I get the following exception:
ValueError: Requested crop size (224, 224) is bigger than input size (220, 349)
Where am I going wrong? I am using the TenCrop transform as described here in the docs.
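Note on the error itself: it reports an input of (220, 349), i.e. a 349-wide, 220-tall image, so at least one validation image on disk is not actually 256 x 256. If that is the case, inserting a resize step (transforms.Resize(256), or transforms.Scale(256) in older torchvision versions) before TenCrop(224) should guarantee both sides are at least 224. A sketch of the arithmetic such a shorter-side resize performs (`resize_shorter_side` is an illustrative helper, not torchvision code):

```python
def resize_shorter_side(w, h, target):
    """Size a shorter-side resize (e.g. torchvision's Resize(target))
    would produce for a (w, h) image: the shorter side becomes `target`
    and the aspect ratio is preserved. Illustrative sketch only."""
    if w < h:
        return target, round(h * target / w)
    return round(w * target / h), target
```

For the failing 349 x 220 image this gives 406 x 256, which is large enough for 224-pixel crops.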