Hey,
In the DCGAN tutorial from the PyTorch website, there is this piece of code, which saves a sample of the generator's output every 500 iterations:
# Check how the generator is doing by saving G's output on fixed_noise
if (iters % 500 == 0) or ((epoch == num_epochs-1) and (i == len(dataloader)-1)):
    with torch.no_grad():
        fake = netG(fixed_noise).detach().cpu()
    img_list.append(vutils.make_grid(fake, padding=2, normalize=True))

iters += 1
My question is: why is the detach() on the netG output needed?
From what I understand, fake will be overwritten on the next call anyway, so the tensor created in this piece of code will not keep the computational graph alive. Also, since the forward pass already runs inside torch.no_grad(), wouldn't autograd skip building a graph for fake in the first place?
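To illustrate what I mean, here is a minimal sketch, using a toy nn.Linear as a stand-in for netG (my own simplification, not the tutorial's model):

```python
import torch
import torch.nn as nn

netG_toy = nn.Linear(4, 4)          # stand-in for netG (toy model, not the DCGAN generator)
fixed_noise = torch.randn(2, 4)

# Forward pass under no_grad, as in the tutorial snippet
with torch.no_grad():
    out = netG_toy(fixed_noise)

print(out.requires_grad)  # False: autograd recorded nothing under no_grad
print(out.grad_fn)        # None: the tensor is not attached to any graph
```

So it looks to me like the output is already graph-free before detach() is ever called, which is why I'm not sure what the extra detach() buys here.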
Thanks!
Oran