Backpropagate with a given loss from another model

Hi all,

I'm currently facing a problem using autograd in a distributed learning setting.
Imagine we have two identical networks, one on the client and one on the server. I calculate the loss on the client side and send it to the server. Can I use this loss to do backpropagation on the server side?

I know I need to build the training graph on the server side to do backpropagation, so I do a forward pass with some fake data just to build the graph. The pseudocode is as follows:


# client
loss_client = log(net(data_client))
send loss_client to server

# server
fake_output = log(net(fake_data))
fake_output.backward(loss_client)  # hoping this uses the client's loss


This current idea does not work; any help would be really appreciated!
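To see why the fake-data trick can't work, here is a minimal sketch (names like `net_client`, `net_server`, and the `Linear` toy model are illustrative, not from the original post): even with identical parameters, backpropagating through a graph built from different input data produces different gradients, because the gradients depend on the inputs and not just on the loss value.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Two identical copies of the network, as on client and server.
net_client = nn.Linear(4, 1)
net_server = nn.Linear(4, 1)
net_server.load_state_dict(net_client.state_dict())

data_client = torch.randn(8, 4)   # the real batch, only on the client
fake_data = torch.randn(8, 4)     # the server's substitute batch

# Client builds its graph from the real data and backprops.
loss_client = net_client(data_client).pow(2).mean()
loss_client.backward()

# Server builds its graph from fake data and backprops.
loss_server = net_server(fake_data).pow(2).mean()
loss_server.backward()

# The gradients differ even though the parameters were identical,
# because autograd differentiates through the actual inputs used.
print(torch.allclose(net_client.weight.grad, net_server.weight.grad))
```

Swapping the numeric loss value into the server's graph doesn't change this: `backward()` on a scalar only seeds the chain rule with 1; the gradients themselves come from the graph built during the forward pass.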

What happens if you just use loss_client as the backward() parameter and just go?


The training still runs, but the result is not correct compared to training locally. I think changing only the loss value cannot reproduce the correct gradients, since the gradients are input-dependent. So far I haven't found a solution to this problem.
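Since the gradients are input-dependent, one standard workaround (common in federated/distributed setups, not from the original post) is to send the gradients themselves instead of the scalar loss: the client runs forward and backward locally, ships each parameter's `.grad`, and the server copies them into its own identical model and steps its optimizer. A minimal sketch, with the transfer simulated by a Python list:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Identical networks on client and server.
net_client = nn.Linear(4, 1)
net_server = nn.Linear(4, 1)
net_server.load_state_dict(net_client.state_dict())

data_client = torch.randn(8, 4)

# Client: forward, backward, then "send" the gradients (not the loss).
loss = net_client(data_client).pow(2).mean()
loss.backward()
grads = [p.grad.clone() for p in net_client.parameters()]

# Server: install the received gradients and take an optimizer step.
opt_server = torch.optim.SGD(net_server.parameters(), lr=0.1)
for p, g in zip(net_server.parameters(), grads):
    p.grad = g.clone()
opt_server.step()

# Stepping the client the same way confirms both models stay in sync.
opt_client = torch.optim.SGD(net_client.parameters(), lr=0.1)
opt_client.step()
print(torch.allclose(net_client.weight, net_server.weight))  # True
```

This avoids rebuilding any graph on the server at all; the server only needs the gradient tensors, which are exactly what the fake-data forward pass was failing to reproduce.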