RuntimeError: Trying to backward through the graph a second time (or directly access saved tensors after they have already been freed). Saved intermediate values of the graph are freed when you call .backward() or autograd.grad()

This line of code:

run_loss_disc+=loss_disc

looks wrong, as you are accumulating the current discriminator loss together with its entire computation graph, which keeps the graph's saved tensors alive across iterations.
If you only want to accumulate the loss value for printing purposes, use run_loss_disc += loss_disc.item() or .detach() the tensor first.
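A minimal sketch of what that looks like in a training loop, assuming a toy discriminator, optimizer, and dummy data (none of these names come from your code):

```python
import torch
import torch.nn as nn

disc = nn.Linear(4, 1)                               # stand-in discriminator
opt_disc = torch.optim.SGD(disc.parameters(), lr=0.01)
loss_fn = nn.BCEWithLogitsLoss()

run_loss_disc = 0.0
for _ in range(5):
    x = torch.randn(8, 4)                            # dummy batch
    target = torch.ones(8, 1)
    loss_disc = loss_fn(disc(x), target)

    opt_disc.zero_grad()
    loss_disc.backward()                             # frees the graph's saved tensors
    opt_disc.step()

    # Accumulate only the Python scalar; .item() (or .detach()) drops the
    # reference to the already-freed graph, so no second backward is attempted.
    run_loss_disc += loss_disc.item()

print(run_loss_disc / 5)
```

Accumulating the plain tensor instead would chain every iteration's graph onto run_loss_disc, and any later backward through it raises exactly the error you are seeing.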