Effect of the weights?

learnable_params = list(model1.parameters()) + list(model2.parameters())
optimizer = optim.Adam(learnable_params, lr=0.01, betas=(0.9,0.999))
...
optimizer.zero_grad()
...
total_loss = model1_loss + model2_loss
total_loss.backward()
optimizer.step()

If I want to decrease the effect of model1 on the training process, what should I change? Would this work?

total_loss = model1_loss * 0.5 + model2_loss
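For context, here is a minimal runnable sketch of what I mean (toy linear models and made-up shapes, not my real code), which checks whether scaling `model1_loss` actually scales the gradients flowing into model1:

```python
import torch
import torch.nn as nn
import torch.optim as optim

torch.manual_seed(0)
model1 = nn.Linear(4, 1)  # toy stand-ins for my real models
model2 = nn.Linear(4, 1)

# parameters() is a method, so it needs call parentheses
learnable_params = list(model1.parameters()) + list(model2.parameters())
optimizer = optim.Adam(learnable_params, lr=0.01, betas=(0.9, 0.999))

x = torch.randn(8, 4)
y = torch.randn(8, 1)

def model1_grad_for_weight(w):
    # one forward/backward pass with the weighted total loss
    optimizer.zero_grad()
    model1_loss = nn.functional.mse_loss(model1(x), y)
    model2_loss = nn.functional.mse_loss(model2(x), y)
    total_loss = model1_loss * w + model2_loss
    total_loss.backward()
    return model1.weight.grad.clone()

g_full = model1_grad_for_weight(1.0)
g_half = model1_grad_for_weight(0.5)
# gradients are linear in the loss, so halving the loss halves model1's gradients
print(torch.allclose(g_half, 0.5 * g_full))  # → True
```

So the weight does shrink model1's gradients before `optimizer.step()`; my question is whether this is the right way to reduce model1's influence during training.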