Inception v3 pre-trained model

I’m trying to train a pre-trained Inception v3 model on my task, which takes 178x178 images as input. It has 5 possible classes, so I changed the fully-connected layer to have 5 output features. My code is the following:


# Pre-trained models
model = models.inception_v3(pretrained=True)

### ResNet or Inception
classifier_input = model.fc.in_features
num_labels = 5

# Replace default classifier with new classifier
model.fc = nn.Linear(classifier_input, num_labels)
model.cuda()

However, I’m getting the following error: RuntimeError: Calculated padded input size per channel: (2 x 2). Kernel size: (5 x 5). Kernel size can’t be greater than actual input size.

I’m not sure what the problem is, as I adopted a similar strategy when training pre-trained VGG and ResNet models.


Inception models expect an input of 299x299 spatial size, so your input might just be too small for this architecture.
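One way to handle that is in the data transforms, e.g. something along these lines (a minimal sketch assuming torchvision transforms and the standard ImageNet normalization, not your actual pipeline):

import torchvision.transforms as transforms

transform = transforms.Compose([
    transforms.Resize((299, 299)),  # Inception v3 expects 299x299 inputs
    transforms.ToTensor(),
    # ImageNet statistics used by the pretrained weights
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])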

I changed the image size to 299x299, but now I’m getting this error instead:

TypeError: max() received an invalid combination of arguments - got (InceptionOutputs, int), but expected one of:
 * (Tensor input)
 * (Tensor input, name dim, bool keepdim, tuple of Tensors out)
 * (Tensor input, Tensor other, Tensor out)
 * (Tensor input, int dim, bool keepdim, tuple of Tensors out)

The output of Inception will now be an InceptionOutputs namedtuple, which contains the .logits and .aux_logits, if specified.
If you don’t need the aux_logits, just use output.logits in your further processing.
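A minimal sketch of what that unpacking could look like in the forward pass of a training loop (assuming torchvision’s InceptionOutputs namedtuple; in eval mode the model returns a plain tensor, so the check is harmless there):

outputs = model(inputs)
if isinstance(outputs, tuple):   # InceptionOutputs is a namedtuple (logits, aux_logits)
    outputs = outputs.logits     # keep only the main classifier's output
_, preds = torch.max(outputs, 1)
loss = criterion(outputs, labels)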


I tried doing this and researched further, but have not been successful.

My training code is the following:

def train_model(model, criterion, optimizer, scheduler, num_epochs=25):
    since = time.time()

    best_model_wts = copy.deepcopy(model.state_dict())
    best_acc = 0.0
    best_epoch = -1

    for epoch in range(num_epochs):
        print('Epoch {}/{}'.format(epoch, num_epochs - 1))
        print('-' * 10)

        # Each epoch has a training and validation phase
        for phase in ['train', 'val']:
            if phase == 'train':
                model.train()  # Set model to training mode
            else:
                model.eval()   # Set model to evaluate mode

            running_loss = 0.0
            running_corrects = 0

            # Iterate over data.
            for inputs, labels in dataloaders[phase]:
                if train_on_gpu:
                    inputs, labels = inputs.cuda(), labels.cuda()

                # zero the parameter gradients
                optimizer.zero_grad()

                # forward
                # track history if only in train
                with torch.set_grad_enabled(phase == 'train'):
                    outputs = model(inputs)
                    _, preds = torch.max(outputs, 1)
                    loss = criterion(outputs, labels)

                    # backward + optimize only if in training phase
                    if phase == 'train':
                        loss.backward()
                        optimizer.step()

                # statistics
                running_loss += loss.item() * inputs.size(0)
                running_corrects += torch.sum(preds == labels.data)
            if phase == 'train':
                scheduler.step()

            # epoch-level statistics
            epoch_loss = running_loss / len(dataloaders[phase].dataset)
            epoch_acc = running_corrects.double() / len(dataloaders[phase].dataset)
            print('{} Loss: {:.4f} Acc: {:.4f}'.format(phase, epoch_loss, epoch_acc))

            # deep copy the model
            if phase == 'val' and epoch_acc > best_acc:
                best_acc = epoch_acc
                best_model_wts = copy.deepcopy(model.state_dict())
                best_epoch = epoch

        print()

    time_elapsed = time.time() - since
    print('Training complete in {:.0f}m {:.0f}s'.format(
        time_elapsed // 60, time_elapsed % 60))
    print('Best val Acc: {:.4f}'.format(best_acc))
    print('Best epoch: {}'.format(best_epoch))

    # load best model weights
    model.load_state_dict(best_model_wts)
    return model
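For reference, the function is called roughly like this (the criterion, optimizer, and scheduler shown here are placeholders, not the exact ones from my setup):

import torch.nn as nn
import torch.optim as optim
from torch.optim import lr_scheduler

criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(model.parameters(), lr=0.001, momentum=0.9)
scheduler = lr_scheduler.StepLR(optimizer, step_size=7, gamma=0.1)

model = train_model(model, criterion, optimizer, scheduler, num_epochs=25)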

And I specifically altered this line:

outputs = model(inputs)

to

outputs = model(inputs).logits

But I’m still getting the same error…

Did you get to solve this problem?

This solution worked:

model = models.inception_v3(pretrained=True)
model.aux_logits = False 

Found the solution here:
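Putting the pieces together, the setup could look roughly like this (a sketch under the assumptions above: 299x299 inputs, 5 classes, and the auxiliary classifier disabled):

import torch.nn as nn
from torchvision import models

num_labels = 5

model = models.inception_v3(pretrained=True)
model.aux_logits = False  # forward() now returns a plain tensor instead of InceptionOutputs
model.fc = nn.Linear(model.fc.in_features, num_labels)  # new 5-class head
model.cuda()

# If you keep the auxiliary classifier enabled instead, its head would also need replacing,
# e.g. model.AuxLogits.fc = nn.Linear(model.AuxLogits.fc.in_features, num_labels)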