If you’re trying to compute gradients for logvar, I don’t think PyTorch supports autograd with in-place operations. You’d have to change your in-place ops to their non-in-place variants (i.e., .exp_ to .exp, .add_ to .add).
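Something like this, as a minimal sketch of the non-in-place rewrite (mu, logvar, and eps are just placeholder tensors standing in for the ones in your model, and this assumes a recent PyTorch build where requires_grad can be passed to the factory functions):

```python
import torch

mu = torch.zeros(4, requires_grad=True)
logvar = torch.zeros(4, requires_grad=True)
eps = torch.randn(4)

# reparameterization step with out-of-place ops
std = logvar.mul(0.5).exp()      # instead of logvar.mul(0.5).exp_()
z = eps.mul(std).add(mu)         # instead of eps.mul(std).add_(mu)

z.sum().backward()
print(logvar.grad)               # gradients flow back to logvar
```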
Otherwise, the two look the same to me, other than how you’re manually specifying a torch.cuda.FloatTensor in the second.
@Soumith_Chintala @apaszke Could you verify that’s correct? I don’t think what Richard said is correct, because the VAE example here uses in-place operations.
@johnwlambert, sorry, I was wrong. As long as you don’t modify the leaf node (logvar in this case) in-place, autograd works (I tried your examples above).
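To illustrate the distinction, here is a small sketch (the variable names are placeholders, and it assumes a recent PyTorch; the exact error text may differ across versions):

```python
import torch

logvar = torch.zeros(4, requires_grad=True)   # leaf tensor

# In-place op directly on the leaf fails, e.g.:
# logvar.add_(1.0)
# RuntimeError: a leaf Variable that requires grad is being used
# in an in-place operation.

# In-place ops on intermediate (non-leaf) results are fine:
std = logvar.mul(0.5).exp_()                  # exp_ applied to a non-leaf
std.sum().backward()
print(logvar.grad)                            # tensor([0.5, 0.5, 0.5, 0.5])
```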