Hi, I’m new to PyTorch and haven’t had any luck following similar threads. I’m trying to jointly train two models in the same loop, where each model’s update uses its own loss computed from the outputs of both model_a and model_b. I’m not sure how to go about training them at the same time, so any advice would be greatly appreciated!
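For context, both losses use the same forward passes, which happen earlier in the same iteration, roughly like this (the attribute and variable names are just placeholders for my setup):

output_a = self.model_a(x)
output_b = self.model_b(x)

The update step then looks like this: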
self.optimiser_a.zero_grad()
loss_a = calc_loss_a(output_a, output_b)
loss_a.backward()
self.optimiser_a.step()
self.optimiser_b.zero_grad()
loss_b = calc_loss_b(output_a, output_b)
loss_b.backward()
self.optimiser_b.step()
The error I get from the above is:
RuntimeError: Trying to backward through the graph a second time, but the saved intermediate results have already been freed. Specify retain_graph=True when calling .backward() or autograd.grad() the first time.
Following the advice in some other threads, I tried adding retain_graph=True to the first backward call:
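loss_a.backward(retain_graph=True)

but then I received this error instead: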
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [4, 10]], which is output 0 of TBackward, is at version 2; expected version 1 instead. Hint: enable anomaly detection to find the operation that failed to compute its gradient, with torch.autograd.set_detect_anomaly(True).
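Should I instead be making both backward calls before stepping either optimiser, something like this (untested sketch)?

self.optimiser_a.zero_grad()
self.optimiser_b.zero_grad()
loss_a = calc_loss_a(output_a, output_b)
loss_b = calc_loss_b(output_a, output_b)
loss_a.backward(retain_graph=True)  # keep the graph so loss_b can also backprop through it
loss_b.backward()                   # neither optimiser has stepped yet, so no parameters have been modified in place
self.optimiser_a.step()
self.optimiser_b.step()

I’m not sure whether that is what I want either, since each loss depends on both outputs, so each model’s gradients would then include contributions from both losses. Is there a better pattern for jointly training two models like this?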