Hello! I’m trying to adjust a generator’s parameters to maximize the target model f’s cross-entropy loss and to minimize the discriminator-based loss, lossG.
For the discriminator, I’m trying to adjust its parameters to minimize lossD.
In the process, I’m getting an error from calling backward() a second time.
Here is the code block:
# Train Generator: min log(1 - D(G(z))) <-> max log(D(G(z)))
adv_ex = adv_ex.reshape(32, 28*28)
output = disc(adv_ex) #discriminator decides if advex is real or fake
lossG = torch.mean(torch.log(1. - output)) # loss for the generator's desired discriminator prediction
adv_ex = adv_ex.reshape(-1,1,28,28)
f_pred = target(adv_ex) #.size() = [32, 10]
f_loss = -CE_loss(f_pred, labels) # loss for the generator's desired target-model prediction
loss_G_Final = f_loss+lossG # can change the weight of this loss term later
opt_gen.zero_grad()
loss_G_Final = loss_G_Final.to(device)
loss_G_Final.backward()
opt_gen.step()
# Train Discriminator: max log(D(x)) + log(1 - D(G(z)))
adv_ex = adv_ex.reshape(32, 784)
disc_real = disc(real).view(-1)
disc_fake = disc(adv_ex).view(-1)
lossD = -torch.mean(torch.log(disc_real) + torch.log(1. - disc_fake))
# can decide later how much that loss term weighs
opt_disc.zero_grad()
lossD.backward()
opt_disc.step()
Here is the error traceback:
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
/tmp/ipykernel_13305/405152453.py in <module>
45
46 opt_disc.zero_grad()
---> 47 lossD.backward()
48 opt_disc.step()
49
~/.conda/envs/mypytorch19/lib/python3.9/site-packages/torch/_tensor.py in backward(self, gradient, retain_graph, create_graph, inputs)
253 create_graph=create_graph,
254 inputs=inputs)
--> 255 torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)
256
257 def register_hook(self, hook):
~/.conda/envs/mypytorch19/lib/python3.9/site-packages/torch/autograd/__init__.py in backward(tensors, grad_tensors, retain_graph, create_graph, grad_variables, inputs)
145 retain_graph = create_graph
146
--> 147 Variable._execution_engine.run_backward(
148 tensors, grad_tensors_, retain_graph, create_graph, inputs,
149 allow_unreachable=True, accumulate_grad=True) # allow_unreachable flag
RuntimeError: Trying to backward through the graph a second time (or directly access saved variables after they have already been freed). Saved intermediate values of the graph are freed when you call .backward() or autograd.grad(). Specify retain_graph=True if you need to backward through the graph a second time or if you need to access saved variables after calling backward.
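From the message, it looks like the second backward() re-traverses the generator's graph: adv_ex was produced by the generator, so lossD still references that graph, which loss_G_Final.backward() already freed. Detaching adv_ex before the discriminator step should avoid this. Here is a minimal, self-contained sketch of the pattern (tiny placeholder Linear models, not my actual architectures) that runs both backward() calls without the error:

```python
import torch
import torch.nn as nn

# Tiny stand-ins for the real generator/discriminator (shapes are placeholders)
gen = nn.Linear(8, 8)
disc = nn.Sequential(nn.Linear(8, 1), nn.Sigmoid())

z = torch.randn(4, 8)
adv_ex = gen(z)  # adv_ex carries the generator's graph

# --- generator step ---
lossG = torch.mean(torch.log(1. - disc(adv_ex)))
lossG.backward()  # frees the saved tensors of the generator's graph

# --- discriminator step ---
# Detach so lossD only backprops through disc, not the freed generator graph
adv_ex = adv_ex.detach()
real = torch.randn(4, 8)
lossD = -torch.mean(torch.log(disc(real)) + torch.log(1. - disc(adv_ex)))
lossD.backward()  # no "backward through the graph a second time" error
```

The alternative, retain_graph=True on the first backward(), also silences the error but wastes memory and backprops the discriminator loss into the generator, which isn't what the training loop intends here.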