Must the output of reparameterize in a Variational Autoencoder be a tensor?

As written in the pytorch-example, the reparameterize function outputs a sample in the latent space, which then becomes part of the loss_function that is optimized. The final loss is about 105. I have two questions:
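For reference, the reparameterize in the pytorch/examples VAE looks roughly like this (a sketch with the method pulled out as a free function; names follow that example):

```python
import torch

def reparameterize(mu, logvar):
    # z = mu + sigma * eps with eps ~ N(0, I): the randomness lives in eps,
    # so gradients can still flow back through mu and logvar to the encoder.
    std = torch.exp(0.5 * logvar)
    eps = torch.randn_like(std)
    return mu + eps * std
```

Note that although the function has no parameters of its own, its output is a tensor that stays connected to the autograd graph through mu and logvar.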

  1. The reparameterize function doesn’t contain any tensor parameters to be updated. But when I change its output z into z.detach() before feeding it into decode, the loss stays around 200 and barely decreases. Why?
  2. The loss consists of two parts: BCE (the reconstruction loss between x and the reconstructed x) and KLD (the divergence between the posterior and the prior). So BCE updates the parameters of the decoder, while KLD updates the parameters of the encoder. In my opinion, z.detach() should not influence the update of the decoder's parameters. I don’t know if that is true.
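To make question 1 concrete, here is a minimal sketch of what z.detach() does to gradient flow. The tensors mu, logvar, and w are toy stand-ins for the encoder outputs and a decoder weight, not the actual model:

```python
import torch

mu = torch.zeros(4, requires_grad=True)      # stand-in for encoder output
logvar = torch.zeros(4, requires_grad=True)  # stand-in for encoder output
w = torch.ones(4, requires_grad=True)        # stand-in for a decoder weight

std = torch.exp(0.5 * logvar)
z = mu + torch.randn_like(std) * std

# Feed the decoder a detached z, as described in question 1.
recon_loss = (w * z.detach()).sum()
recon_loss.backward()

print(w.grad is not None)  # True: the decoder weight still gets a gradient
print(mu.grad)             # None: the reconstruction gradient never reaches the encoder
```

So detaching z does leave the decoder trainable, but it cuts the reconstruction signal to the encoder, which only receives the KLD gradient after that.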

Dear pytorchians, I hope you can write down your ideas~ thanks~