Transfer learning with inception model

Hi,
Can anyone please explain how to adapt the final layer for fine-tuning the convnet? I am following the tutorial below and need to use inception_v3 as the pretrained model:
https://pytorch.org/tutorials/beginner/transfer_learning_tutorial.html
Also, I have changed the input image size and want to know how to modify the code below (which uses resnet18) for inception_v3.

model_ft = models.inception_v3(pretrained=True)
num_ftrs = model_ft.fc.in_features
model_ft.fc = nn.Linear(num_ftrs, 2)

model_ft = model_ft.to(device)

criterion = nn.CrossEntropyLoss()

# Observe that all parameters are being optimized
optimizer_ft = optim.SGD(model_ft.parameters(), lr=0.001, momentum=0.9)

# Decay LR by a factor of 0.1 every 7 epochs
exp_lr_scheduler = lr_scheduler.StepLR(optimizer_ft, step_size=7, gamma=0.1)
model_ft = models.inception_v3(pretrained=True) 
# This loads the state_dict (weights and biases) of the pre-trained Inception network
num_ftrs = model_ft.fc.in_features

# fc is the final fully connected (classifier) layer
# in_features is the number of input features it expects

model_ft.fc = nn.Linear(num_ftrs,2)
# here 2 is your number of classes (ants and bees)

model_ft = model_ft.to(device)
# device is either the CPU or a GPU
# So this moves the model onto that device for processing
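For completeness, device is usually picked with the standard torch idiom (assumed here, it is not shown in the snippet above):

```python
import torch

# Use a GPU if one is available, otherwise fall back to the CPU
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
```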

criterion = nn.CrossEntropyLoss()
# Calculates the loss
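As a small aside on what CrossEntropyLoss expects (toy numbers, just for illustration): raw unnormalized logits and integer class indices, not one-hot vectors or probabilities.

```python
import torch
import torch.nn as nn

criterion = nn.CrossEntropyLoss()

# Raw (unnormalized) logits for a batch of 2 samples, 2 classes
logits = torch.tensor([[2.0, 0.5], [0.1, 1.5]])
labels = torch.tensor([0, 1])  # class indices (0 = ants, 1 = bees)

loss = criterion(logits, labels)  # scalar tensor
```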

optimizer_ft = optim.SGD(model_ft.parameters(), lr=0.001, momentum=0.9)
# lr = learning rate; optim.SGD implements the optimization algorithm (here SGD)
# The optimizer uses the computed gradients to update the parameters

exp_lr_scheduler = lr_scheduler.StepLR(optimizer_ft, step_size=7, gamma=0.1)
# stepped once per epoch, after the optimizer has updated the parameters
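A quick sketch of how that scheduler behaves (toy one-parameter optimizer standing in for the tutorial's model_ft.parameters()):

```python
import torch
from torch import optim
from torch.optim import lr_scheduler

# Toy parameter just to build an optimizer
param = torch.nn.Parameter(torch.zeros(1))
optimizer = optim.SGD([param], lr=0.001, momentum=0.9)
scheduler = lr_scheduler.StepLR(optimizer, step_size=7, gamma=0.1)

for epoch in range(8):
    # ... training phase: optimizer.zero_grad(), loss.backward(), optimizer.step() ...
    scheduler.step()  # called once per epoch

# After epoch 7 the learning rate has decayed from 0.001 to 0.0001
print(scheduler.get_last_lr())
```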

You can read about Inception v3 in this [Link]

I have changed the code as mentioned above, but then I got this error:

This seems to be the same problem as described here.