A question about one of the uses of detach()

Hey,

In the DCGAN tutorial on the PyTorch website, there is this piece of code that basically saves some evaluation output from the generator every 500 iterations:

        # Check how the generator is doing by saving G's output on fixed_noise
        if (iters % 500 == 0) or ((epoch == num_epochs-1) and (i == len(dataloader)-1)):
            with torch.no_grad():
                fake = netG(fixed_noise).detach().cpu()
            img_list.append(vutils.make_grid(fake, padding=2, normalize=True))

        iters += 1

My question is: why is the detach() on the netG output needed?
From what I understand, fake will be overwritten in the next iteration anyway, so the fake created in this piece of code will not be kept in the computational graph.
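
For reference, here is a minimal sketch of the detach() behaviour I'm asking about. The nn.Linear is just a toy stand-in for netG, not the tutorial's actual generator:

    import torch
    import torch.nn as nn

    gen = nn.Linear(100, 10)      # toy stand-in for netG
    noise = torch.randn(4, 100)

    fake = gen(noise)             # normal forward pass: output is part of the graph
    print(fake.grad_fn)           # e.g. <AddmmBackward0 ...>
    print(fake.detach().grad_fn)  # None -> detach() cuts the tensor out of the graph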

Thanks!
Oran

It’s not really necessary, since the forward pass is wrapped in a torch.no_grad() block.
Inside that block the model output shouldn’t even have a grad_fn you could detach, so the detach() call is a no-op here.
At least I cannot think of a use case where it would be necessary.
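
You can verify this with a minimal sketch, using an nn.Linear as a toy stand-in for netG:

    import torch
    import torch.nn as nn

    gen = nn.Linear(100, 10)      # toy stand-in for netG
    fixed_noise = torch.randn(4, 100)

    with torch.no_grad():
        fake = gen(fixed_noise)

    print(fake.requires_grad)           # False
    print(fake.grad_fn)                 # None -> there is no graph to detach from
    print(fake.detach().requires_grad)  # False -> detach() changes nothing here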

Thanks so much!

Oran