Custom Back-propagation gradient update

Hi, I just started using PyTorch and wanted to know if there is anything wrong with my approach.
I’ll briefly explain the problem I have and how I have decided to solve it.

Neural network: M
Data: A, labels: a, loss: l_a (cross-entropy), gradient of the loss w.r.t. the parameters: G_a
Data: B, labels: b, loss: l_b (cross-entropy), gradient of the loss w.r.t. the parameters: G_b
Learning rate: lr

The algorithm I want to implement goes like this:

  1. Forward-pass A to obtain M(A)
  2. Compute l_a and the corresponding gradient G_a from M(A)
  3. Forward-pass B to obtain M(B)
  4. Compute l_b and the corresponding gradient G_b from M(B)
  5. Perform some computation on G_a and G_b to obtain a combined gradient G_c (a placeholder example is sketched after this list)
  6. Update the parameters of M using G_c

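Step 5 is intentionally generic; just to illustrate the shape of what I mean, some_computation could be something as simple as averaging the two gradient lists element-wise (this is only a placeholder, not the actual computation I have in mind):

# Placeholder for step 5: average the two gradient lists element-wise.
# The real some_computation can be anything that maps (G_a, G_b) -> G_c,
# as long as G_c has the same shapes as M.parameters().
def some_computation(G_a, G_b):
    return [0.5 * (ga + gb) for ga, gb in zip(G_a, G_b)]
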
My solution:

import torch

# G_a and G_b will hold detached copies of the per-parameter gradients
G_a = []
G_b = []

# Steps 1-2: forward/backward pass on A
M.zero_grad()
outputs = M(A)
loss_value_A = l_a(outputs, a)  # nn.CrossEntropyLoss expects (input, target)
loss_value_A.backward()
for f in M.parameters():
    G_a.append(f.grad.detach().clone())  # clone so later zero_grad()/backward() calls don't change the stored copies

# Steps 3-4: forward/backward pass on B
M.zero_grad()
outputs = M(B)
loss_value_B = l_b(outputs, b)
loss_value_B.backward()
for f in M.parameters():
    G_b.append(f.grad.detach().clone())

# Steps 5-6: combine the two gradients and apply the manual update
G_c = some_computation(G_a, G_b)
with torch.no_grad():
    for f, g in zip(M.parameters(), G_c):
        f.sub_(lr * g)

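On a related note, I was also wondering whether torch.autograd.grad would be cleaner here, since it returns the gradients directly instead of accumulating them into .grad. A rough sketch of what I mean (not tested against my actual setup):

import torch

# Alternative sketch: torch.autograd.grad returns the gradients as a tuple,
# so nothing is written into .grad and no zero_grad() calls are needed.
params = list(M.parameters())
G_a = torch.autograd.grad(l_a(M(A), a), params)
G_b = torch.autograd.grad(l_b(M(B), b), params)

G_c = some_computation(G_a, G_b)
with torch.no_grad():
    for f, g in zip(params, G_c):
        f.sub_(lr * g)

Would that be equivalent to the .backward() version above, or is one of the two preferable?
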
Thanks.