Reparameterization trick backprop

Will PyTorch backprop through logvar equivalently in the following two functions?

import torch
from torch.autograd import Variable

def vae_reparameterize1(mu, logvar):
    std = logvar.mul(0.5).exp_()
    eps = Variable(std.data.new(std.size()).normal_())
    return eps.mul(std).add_(mu)

def vae_reparametrize2(mu, logvar):
    std = logvar.mul(0.5).exp_()
    eps = torch.cuda.FloatTensor(std.size()).normal_()
    eps = Variable(eps)
    return eps.mul(std).add_(mu), std

If you’re trying to compute gradients for logvar, I don’t think PyTorch supports autograd with in-place operations. You’d have to change your in-place ops to their non-in-place variants (i.e., .exp_ to .exp and .add_ to .add).

Otherwise, the two look the same to me, other than the second manually specifying a torch.cuda.FloatTensor for eps.
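
Something like this is what I had in mind, just a sketch of the first function with the in-place ops swapped out (the function name is mine):

def vae_reparameterize_no_inplace(mu, logvar):
    # same computation, but with out-of-place exp/add
    std = logvar.mul(0.5).exp()
    eps = Variable(std.data.new(std.size()).normal_())
    return eps.mul(std).add(mu)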

@Soumith_Chintala @apaszke Could you verify that that’s correct? I don’t think what Richard said is correct, because the VAE example here uses in-place operations.

@johnwlambert, sorry, I was wrong. As long as you don’t modify the leaf node (logvar in this case) in-place, autograd works (I tried your examples above).
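
For anyone finding this later, a quick way to check is to run a dummy backward pass and look at logvar.grad. This is just a minimal sketch on CPU tensors (the sizes and the sum() loss are arbitrary):

import torch
from torch.autograd import Variable

def vae_reparameterize1(mu, logvar):
    # same as the first function in the question, in-place ops and all
    std = logvar.mul(0.5).exp_()
    eps = Variable(std.data.new(std.size()).normal_())
    return eps.mul(std).add_(mu)

mu = Variable(torch.zeros(4, 8), requires_grad=True)
logvar = Variable(torch.zeros(4, 8), requires_grad=True)

z = vae_reparameterize1(mu, logvar)
z.sum().backward()    # any scalar loss works for the check
print(logvar.grad)    # populated, so gradients do reach logvar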