Updating weights for a list of models

import torch
import torch.nn as nn

loss_fn = nn.CrossEntropyLoss()

for model in client_models:
    # Each model gets its own optimizer built from its own parameters
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
    for batch_idx, (inputs, targets) in enumerate(test_loader):
        outputs = model(inputs)
        loss = loss_fn(outputs, targets.squeeze())
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

Hello, I am trying to update the parameters of each model in a list of models, but they seem to stay the same after this loop runs. How can I properly update the parameters?

The loop looks correct to me. Did you make sure your parameters have requires_grad=True? Otherwise, a small reproducible code snippet would probably help others look into this further.
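
For example, something like this prints whether each parameter is trainable (assuming client_models is the list of models in question):

# Quick sanity check: every parameter should report requires_grad=True
for i, model in enumerate(client_models):
    for name, param in model.named_parameters():
        print(f"model {i} | {name}: requires_grad={param.requires_grad}")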

Thanks for the help! I just started using PyTorch, so I'm not sure how to set that. I want to mimic federated learning on a single machine. Here is my complete code for this task:

import copy
from torch.utils.data import DataLoader

def train_local_model(model, dataset, loss_fn, optimizer):
    model.train()
    # batch_size is defined in the enclosing scope
    dataloader = DataLoader(dataset, batch_size=batch_size, shuffle=True)
    for batch_idx, (inputs, targets) in enumerate(dataloader):
        outputs = model(inputs)
        loss = loss_fn(outputs, targets.squeeze())
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

client_models = [copy.deepcopy(global_model) for _ in range(num_clients)]

for epoch in range(num_epochs):
    for client_model, client_dataset in zip(client_models, client_datasets):
        train_local_model(client_model, client_dataset, loss_fn, optimizer)

It turns out this does not update the parameters.

Well, I checked, and the parameters of the models do have requires_grad=True.
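
In case it helps, this is roughly how I am checking whether the weights change (a sketch; I just look at the first client model):

# Snapshot the weights, run one round of local training, then compare
before = [p.detach().clone() for p in client_models[0].parameters()]
train_local_model(client_models[0], client_datasets[0], loss_fn, optimizer)
unchanged = all(torch.equal(b, p) for b, p in zip(before, client_models[0].parameters()))
print("parameters unchanged:", unchanged)  # prints True, which is the problem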