Running model train cycle inside a function

Hi! I am trying to define a Python function that takes a learning rate as its only parameter, instantiates a PyTorch model, trains it with an optimizer using the passed learning rate, and then prints the training statistics. My code looks like this:

def get_lr_performance(loc_lr):
    loc_model = nn.Sequential(
    #some modules
    )
    loc_opt = torch.optim.SGD(loc_model.parameters(), lr=loc_lr)
    train_model(loc_model, loc_opt, train_loader)

where train_model is defined as follows:

def train(model, optimizer, dataloader): 
    model.train()
    stats = defaultdict(list)
    for x, y in dataloader:
        optimizer.zero_grad()
        output = model(x)
        loss = F.nll_loss(output, y, reduction='sum')
        loss.backward()
        optimizer.step()
        # save the statistics of interest to stats
    return stats

def train_model(model, optimizer, train_loader, epochs=7):
    stats = defaultdict(list)
    for epoch in range(epochs):
        train_stats = train(model, opt, train_loader)
        plot_stats(train_stats) # a basic plotting function using plt

The training stats do not improve during training when calling get_lr_performance(loc_lr) at any loc_lr. However, executing

loc_model = nn.Sequential(
#some modules
)
loc_opt = torch.optim.SGD(loc_model.parameters(), lr=loc_lr)
train_model(loc_model, loc_opt, train_loader)

outside of a function trains the model as expected. I suppose this issue is somehow related to PyTorch optimizer (im)mutability, but I am not sure. Is there a way to make the function work properly?

You are passing opt to train, while train_model receives the optimizer under the parameter name optimizer.
Could opt refer to some global optimizer?
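To illustrate why this fails silently rather than raising an error: when a name is not found in a function's local scope, Python falls back to the enclosing/global scope. A minimal sketch (hypothetical names, no PyTorch involved) of the same mistake:

```python
# A stale global that the function accidentally picks up.
lr = 0.0

def scale(value, rate):
    # Bug: 'lr' below is not the parameter 'rate'; Python resolves it
    # to the global 'lr', so the passed-in 'rate' is silently ignored.
    return value * lr

print(scale(10, 0.5))  # prints 0.0, not 5.0 - the global won
```

In your case the global opt was an optimizer bound to a different model's parameters, so the freshly created loc_model never received gradient updates, and the stats never improved.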

Hi, thank you for the reply!
Yes, I actually had a global optimizer bound to the "opt" variable. Changing

train_stats = train(model, opt, train_loader)

to

train_stats = train(model, optimizer, train_loader)

in train_model() fixed the issue.
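For anyone hitting the same problem, here is a self-contained sketch of the fixed call chain, with hypothetical stand-ins for the model, optimizer, and train so it runs without PyTorch. It shows that each call now uses the optimizer (and hence the learning rate) that was actually passed in:

```python
from collections import defaultdict

def train(model, optimizer, dataloader):
    # Stand-in for the real training loop: just record which lr was used.
    stats = defaultdict(list)
    stats["lr"].append(optimizer["lr"])
    return stats

def train_model(model, optimizer, train_loader, epochs=2):
    seen = []
    for epoch in range(epochs):
        # The fix: pass the parameter 'optimizer', not a global 'opt'.
        train_stats = train(model, optimizer, train_loader)
        seen.extend(train_stats["lr"])
    return seen

def get_lr_performance(loc_lr):
    loc_model = None          # stand-in for nn.Sequential(...)
    loc_opt = {"lr": loc_lr}  # stand-in for torch.optim.SGD(..., lr=loc_lr)
    return train_model(loc_model, loc_opt, train_loader=None)

print(get_lr_performance(0.1))  # [0.1, 0.1] - the passed lr is used each epoch
```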