How to properly do 10-crop testing on Imagenet?

This is in continuation of my previous post

I want to ask what's the proper way to do TenCrop testing for ImageNet. I modified the val_loader in the imagenet example here as follows:

    val_loader = torch.utils.data.DataLoader(
        datasets.ImageFolder(valdir, transforms.Compose([
            transforms.TenCrop(224),
            transforms.Lambda(lambda crops: torch.stack([transforms.ToTensor()(crop) for crop in crops])),
            transforms.Lambda(lambda crops: torch.stack([normalize(crop) for crop in crops])),
        ])),
        batch_size=args.batch_size, shuffle=False,
        num_workers=args.workers, pin_memory=True)

I used rescaled images (256 x 256). So I modified the validate function in the above-mentioned script as follows:

    target = target.cuda(async=True)
    input_var = torch.autograd.Variable(input, volatile=True)
    target_var = torch.autograd.Variable(target, volatile=True)

    # compute output: fuse batch and crop dimensions, then average over crops
    bs, ncrops, c, h, w = input_var.size()
    temp_output = model(input_var.view(-1, c, h, w))
    output = temp_output.view(bs, ncrops, -1).mean(1)
    loss = criterion(output, target_var)
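The view/mean step above can be checked on dummy data. Here is a NumPy sketch with made-up sizes (the PyTorch `.view(bs, ncrops, -1).mean(1)` behaves the same way): the model sees a flat batch of `bs * ncrops` crops, and the per-crop scores are averaged back per image.

```python
import numpy as np

# Hypothetical sizes for illustration: 4 images, 10 crops each, 1000 classes.
bs, ncrops, nclasses = 4, 10, 1000

# Stand-in for the model output on the flattened (bs * ncrops) batch.
temp_output = np.random.rand(bs * ncrops, nclasses)

# Un-flatten into (bs, ncrops, nclasses) and average the scores over crops.
output = temp_output.reshape(bs, ncrops, -1).mean(axis=1)

assert output.shape == (bs, nclasses)
```

Averaging the class scores over the ten crops is what gives each image a single prediction again.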

I get the following exception

ValueError: Requested crop size (224, 224) is bigger than input size (220, 349)

Where am I going wrong? I am using the TenCrop transform as mentioned
here in the docs.


Before transforms.TenCrop(224), you have to add transforms.Scale(256); some of your images are too small to be cropped to 224 x 224.
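For reference, TenCrop takes the four corner crops and the center crop, plus their horizontal flips. The crop geometry can be sketched without torchvision (`five_crop_boxes` is a made-up helper name, not a torchvision function):

```python
def five_crop_boxes(img_w, img_h, crop):
    """Return (left, top, right, bottom) boxes for the four corners and the center.

    TenCrop is these five crops plus their horizontal flips.
    """
    boxes = [
        (0, 0, crop, crop),                             # top-left
        (img_w - crop, 0, img_w, crop),                 # top-right
        (0, img_h - crop, crop, img_h),                 # bottom-left
        (img_w - crop, img_h - crop, img_w, img_h),     # bottom-right
    ]
    left = (img_w - crop) // 2
    top = (img_h - crop) // 2
    boxes.append((left, top, left + crop, top + crop))  # center
    return boxes

# On a 220 x 349 image the right/bottom corner boxes would need negative
# coordinates, which is why TenCrop(224) raises the ValueError above.
print(five_crop_boxes(256, 256, 224))
```

Scaling so the smaller side is at least 256 guarantees every box fits inside the image.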


Yes, it works like a charm now. Thanks! :slight_smile:

I got confused by the following command:

find . -name "*.JPEG" | xargs -I {} convert {} -resize "256^>" {}

I thought it resulted in square images.
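As I understand ImageMagick geometry flags (an assumption worth checking against the docs), `^` scales so the smaller side reaches the target while keeping the aspect ratio, and `>` only shrinks images that are already larger, so the output is generally not square and undersized images are left untouched. A minimal sketch of that rule (`fill_resize` is a made-up helper):

```python
def fill_resize(w, h, target, shrink_only=True):
    """Sketch of ImageMagick '-resize "256^>"' as I understand it:
    scale so the SMALLER side becomes `target` (aspect ratio kept),
    but with '>' never enlarge an image whose smaller side is already
    below the target.  This is an assumption, not the documented spec.
    """
    scale = target / min(w, h)
    if shrink_only and scale >= 1.0:
        return w, h                          # image too small: left untouched
    return round(w * scale), round(h * scale)

print(fill_resize(500, 375, 256))   # aspect ratio kept, not square
print(fill_resize(220, 349, 256))   # smaller than target: unchanged
```

Under this reading, a 220 x 349 image would pass through the resize untouched, which would explain the crop-size error above.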