Reduce LR on plateau based on training loss or validation?

As the title says, on which loss should I call scheduler.step()? Is it the training loss or the validation loss?

Whether to reduce the LR based on the training loss or the validation loss is ultimately a matter of experimentation and depends on the specific problem and dataset. Both approaches can be valid, but stepping the scheduler on the validation loss is the more common choice.

By monitoring the validation loss, you can better estimate the model’s performance on unseen data. Reducing the LR when the validation loss plateaus can help the model converge better and avoid overfitting.
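
Here is a minimal sketch of that setup. MyModel, num_epochs, train_data, and valid_data are placeholders for your own model class, epoch count, and data loaders yielding (inputs, labels) batches: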

import torch
import torch.nn as nn
import torch.optim as optim
from torch.optim.lr_scheduler import ReduceLROnPlateau

# Define your model and loss function
model = MyModel()
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(model.parameters(), lr=0.1)

# Define the learning rate scheduler
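# mode='min' reduces the LR when the monitored loss stops decreasing;
# factor=0.1 multiplies the current LR by 0.1, and patience=3 waits
# 3 epochs without improvement before reducing it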
scheduler = ReduceLROnPlateau(optimizer, mode='min', factor=0.1, patience=3)

# Training loop
for epoch in range(num_epochs):
    train_loss = 0.0
    valid_loss = 0.0

    # Training phase
    model.train()
    for batch in train_data:
        inputs, labels = batch
        optimizer.zero_grad()
        outputs = model(inputs)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()
        train_loss += loss.item()

    # Validation phase
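    # model.eval() puts layers such as dropout and batch norm in inference mode;
    # torch.no_grad() disables gradient tracking to save memory during evaluation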
    model.eval()
    with torch.no_grad():
        for batch in valid_data:
            inputs, labels = batch
            outputs = model(inputs)
            loss = criterion(outputs, labels)
            valid_loss += loss.item()

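    # Average the accumulated per-batch losses over the number of batches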
    train_loss /= len(train_data)
    valid_loss /= len(valid_data)

    # Step the learning rate scheduler based on the validation loss
    scheduler.step(valid_loss)

    # Rest of the training loop... 
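
If you instead wanted to reduce the LR on a training-loss plateau (the other valid option mentioned above), the only change in the loop above would be stepping the scheduler on the training loss:

# Step the learning rate scheduler based on the training loss
scheduler.step(train_loss)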

Thank you for your reply! :smile: