Assign gradient manually for adaptive clipping

I have two loss terms and I want to apply adaptive gradient clipping so that neither of the two losses prevails over the other.

By adaptive gradient clipping I mean that whichever of the two gradients is larger gets rescaled to the norm of the smaller one; for example, if ||g1|| = 10 and ||g2|| = 2, then g1 is scaled by 2/10.

After computing the normalized gradient I need to assign it back to the model's parameters, but the code breaks:

import torch
from torch.autograd import grad

loss1 = ...
loss2 = ...
loss_tot = loss1 + loss2

model.optimizer.zero_grad()
loss_tot.backward(retain_graph=True)

# per-parameter gradients of each loss w.r.t. the encoder
gg1 = grad(loss1, model.encoder.parameters(), retain_graph=True)
gg2 = grad(loss2, model.encoder.parameters())

# gradient clipping
for i, (g1, g2) in enumerate(zip(gg1, gg2)):
    # compute L2 norms
    g1_ = g1.norm(2)
    g2_ = g2.norm(2)
    g_ = torch.min(g1_, g2_)

    # normalize the gradient to the smaller norm
    gg2[i] = torch.mul(g_, torch.div(g2, g2_))  # --- NOT WORKING ---

model.optimizer.step()

I get:
TypeError: 'tuple' object does not support item assignment
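
I understand that grad() returns a tuple, which does not support item assignment, but I am not sure what the idiomatic fix is. Below is a minimal sketch of what I think should work: convert the tuples to lists, rescale both gradients to the smaller of the two norms, and write the result back into each parameter's .grad before calling step(). (It assumes model.encoder holds the parameters shared by both losses, as in my code above; eps is a small constant I added just to avoid division by zero.)

import torch
from torch.autograd import grad

loss1 = ...
loss2 = ...
loss_tot = loss1 + loss2

model.optimizer.zero_grad()
loss_tot.backward(retain_graph=True)   # fills .grad for all parameters

params = list(model.encoder.parameters())

# grad() returns tuples; convert to lists so items can be reassigned
gg1 = list(grad(loss1, params, retain_graph=True))
gg2 = list(grad(loss2, params))

eps = 1e-12  # small constant I added to avoid division by zero

for i, (g1, g2) in enumerate(zip(gg1, gg2)):
    n1, n2 = g1.norm(2), g2.norm(2)
    target = torch.min(n1, n2)
    # rescale each gradient to the smaller of the two norms;
    # the smaller one is left unchanged (its factor is 1)
    gg1[i] = g1 * (target / (n1 + eps))
    gg2[i] = g2 * (target / (n2 + eps))

# overwrite the encoder gradients with the clipped combination
for p, g1, g2 in zip(params, gg1, gg2):
    p.grad = g1 + g2

model.optimizer.step()

Is assigning p.grad directly like this the right approach, or is there a cleaner way?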

I am new to PyTorch, so this solution may not be the best one.
Thanks in advance.