Hi all,
I'm currently facing a problem using autograd in a distributed learning setting.
Imagine we have two identical networks, one on the client and one on the server. I calculate the loss on the client side and send it to the server. Can I use this loss to do backpropagation on the server side?
I know I need to build a training graph on the server side to do backpropagation, so I run a forward pass on some fake data just to build that graph. The pseudocode is as follows:
Client:
    loss_client = log(net(data_client))
    send loss_client to server

Server:
    fake_data
    fake_output = log(net(fake_data))
    fake_output.data = loss_client.data
    optimizer.zero_grad()
    fake_output.backward()
    optimizer.step()
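To make the question concrete, here is a minimal runnable version of the pseudocode above in PyTorch. The network (`nn.Linear`), the loss shape, and names like `data_client` / `fake_data` are placeholder assumptions, not part of any real setup; the loss is reduced to a scalar so that `backward()` can be called on it.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

net = nn.Linear(4, 1)  # stand-in for the shared client/server network
optimizer = torch.optim.SGD(net.parameters(), lr=0.1)

# --- client side: compute a scalar loss and "send" its value ---
data_client = torch.randn(8, 4)
loss_client = -torch.log(torch.sigmoid(net(data_client))).mean()
loss_value = loss_client.detach()  # what actually travels over the wire

# --- server side: forward fake data to build a graph, then overwrite the value ---
fake_data = torch.randn(8, 4)
fake_output = -torch.log(torch.sigmoid(net(fake_data))).mean()
fake_output.data = loss_value  # replaces only the stored value, NOT the graph

optimizer.zero_grad()
fake_output.backward()  # gradients still flow through fake_data's graph
optimizer.step()
```

Note that assigning to `fake_output.data` only swaps the stored scalar value; the saved intermediate activations used by `backward()` still come from the fake forward pass, so the resulting gradients are with respect to `fake_data`, not the client's data.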
My current idea does not work, so any help would be really appreciated!