Current implementation of volatile = True

Hi,

I am updating one of my older PyTorch scripts. I have a generative model with a noise input, and my previous implementation looked like this:

noise = Variable(noise, volatile=True)  # total freeze netG
y = Variable(netG(noise).data)
f_enc_Y_D, f_dec_Y_D = netD(y)

So, with the new torch.no_grad(), which one below is the correct implementation?

with torch.no_grad():
    noise = Variable(noise)
y = Variable(netG(noise))
f_enc_Y_D, f_dec_Y_D = netD(y)

or

with torch.no_grad():
    noise = Variable(noise)
    y = Variable(netG(noise))
    f_enc_Y_D, f_dec_Y_D = netD(y)

Or do both result in the same graph?

Thanks!!!


Hi,

You don’t need Variables anymore! So just:

with torch.no_grad():
    y = netG(noise)
    f_enc_Y_D, f_dec_Y_D = netD(y)

And this will result in no graph at all!
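You can check this yourself: tensors produced inside `torch.no_grad()` have `requires_grad=False` and no `grad_fn`. A minimal sketch, using a small `nn.Linear` as a hypothetical stand-in for `netG`:

```python
import torch
import torch.nn as nn

netG = nn.Linear(4, 4)  # hypothetical stand-in for the generator
noise = torch.randn(2, 4)

with torch.no_grad():
    y = netG(noise)

# No autograd graph was recorded for this forward pass:
print(y.requires_grad)  # False
print(y.grad_fn)        # None
```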


Thanks for the answer @albanD!

I realized that since this is the discriminator training step (netD), it should look like this to keep gradients through netD but not netG:

with torch.no_grad():
    y = netG(noise)
f_enc_Y_D, f_dec_Y_D = netD(y)

If you want gradients only in netD, yes, that is what you want!
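To confirm the gradient boundary, you can run a backward pass and inspect the parameter gradients: netD's parameters get gradients while netG's stay `None`. A quick sketch with small `nn.Linear` modules as hypothetical stand-ins for the two networks:

```python
import torch
import torch.nn as nn

netG = nn.Linear(4, 4)  # hypothetical stand-in for the generator
netD = nn.Linear(4, 1)  # hypothetical stand-in for the discriminator
noise = torch.randn(2, 4)

with torch.no_grad():
    y = netG(noise)       # no graph is built through netG

out = netD(y)             # the graph starts at netD's input
out.sum().backward()

print(netG.weight.grad)                 # None: netG received no gradients
print(netD.weight.grad is not None)     # True: netD did
```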