Resetting the learning rate, optimiser, and model state for every fold in cross-validation

Hi Everyone,
I am trying to use cross-validation for an image classification project.
The learning rate, optimiser, and model state have to be reset for every fold, but all of them are carrying over state from the previous fold instead.
I am not able to figure out how to fix this.
Below is the structure of my training loop:

import torch
import torch.nn as nn
from torch.optim.lr_scheduler import ReduceLROnPlateau
from efficientnet_pytorch import EfficientNet

# Load a pretrained EfficientNet-B7 and replace the classifier head
# with a 4-class linear layer
model_transfer = EfficientNet.from_pretrained('efficientnet-b7')
n_inputs = model_transfer._fc.in_features
model_transfer._fc = nn.Linear(n_inputs, 4)

optimizer = torch.optim.SGD(model_transfer.parameters(), lr=3e-1)
criterion_transfer = nn.CrossEntropyLoss()
scheduler = ReduceLROnPlateau(
            optimizer,
            patience=4,
            threshold=1,
            factor=0.1,
            mode='max',
            verbose=True
        )

for fold in range(5):
    print("Fold: ", fold)
    partition, labels = kfold(df_train, df_test, splits, fold, stratify_columns='labels')  # noqa
    train_loader = "DataLoader containing training set"   # placeholder
    val_loader = "DataLoader containing validation set"   # placeholder
    test_loader = "DataLoader containing test set"        # placeholder
    data_transfer = {'train': train_loader,
                     'valid': val_loader,
                     'test': test_loader}
    train_model(model=model_transfer, loader=data_transfer, optimizer=optimizer,
                criterion=criterion_transfer, scheduler=scheduler, n_epochs=50,
                checkpoint_path='./models/checkpoint_600.pt', fold_no=fold)  # noqa

Try moving the first block of code (the model, optimiser, criterion, and scheduler construction) inside the for loop. The optimiser and scheduler hold references to the model's parameters and accumulate their own internal state (momentum buffers, plateau counters, the decayed learning rate), so all of them need to be rebuilt together at the start of each fold; constructing them once outside the loop is why everything continues from the previous fold.
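A minimal sketch of the re-initialised loop, assuming the same kfold and train_model helpers from your post:

for fold in range(5):
    print("Fold: ", fold)

    # Rebuild the model from the pretrained weights so each fold starts fresh
    model_transfer = EfficientNet.from_pretrained('efficientnet-b7')
    n_inputs = model_transfer._fc.in_features
    model_transfer._fc = nn.Linear(n_inputs, 4)

    # Fresh optimiser and scheduler bound to the new model's parameters;
    # this also resets momentum buffers and restores the learning rate to 3e-1
    optimizer = torch.optim.SGD(model_transfer.parameters(), lr=3e-1)
    criterion_transfer = nn.CrossEntropyLoss()
    scheduler = ReduceLROnPlateau(optimizer, patience=4, threshold=1,
                                  factor=0.1, mode='max', verbose=True)

    partition, labels = kfold(df_train, df_test, splits, fold, stratify_columns='labels')  # noqa
    # ... build the fold's DataLoaders and call train_model as before

If reloading the pretrained weights every fold is too slow, another option is to snapshot the freshly initialised weights once with copy.deepcopy(model_transfer.state_dict()) before the loop and call model_transfer.load_state_dict(...) on that snapshot at the top of each fold; you would still recreate the optimiser and scheduler each time so their internal state is cleared.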
