Hello,
I am building an architecture with two autoencoders. The input is passed to both autoencoders, but only the autoencoder with the best (lowest) reconstruction error is used in the loss function. This should result in a gradient update for that one autoencoder only, right? How can I check the gradient flow in PyTorch for a model like this? I also have other things I want to check (e.g. whether my gradients are vanishing). Is PyTorch set up in a different way, such that I will need to use autograd directly to achieve this? (I would rather not have to change anything with autograd.) A minimal sketch of what I mean is below.
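Here is a rough sketch of the setup, assuming two small placeholder MLP autoencoders and an MSE reconstruction loss (the sizes and names are just stand-ins for my real model):

```python
import torch
import torch.nn as nn

class AE(nn.Module):
    """Small placeholder autoencoder; my real models are larger."""
    def __init__(self, dim=784, hidden=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU())
        self.decoder = nn.Linear(hidden, dim)

    def forward(self, x):
        return self.decoder(self.encoder(x))

ae1, ae2 = AE(), AE()
opt = torch.optim.Adam(list(ae1.parameters()) + list(ae2.parameters()), lr=1e-3)
mse = nn.MSELoss()

x = torch.randn(16, 784)  # dummy batch

# Pass the input through both autoencoders,
# but keep only the best (lowest) reconstruction loss.
loss1 = mse(ae1(x), x)
loss2 = mse(ae2(x), x)
loss = torch.minimum(loss1, loss2)  # only the "winning" AE should receive gradients

opt.zero_grad()
loss.backward()
opt.step()
```

After `loss.backward()`, I would expect only the winning autoencoder's parameters to end up with non-zero `.grad` tensors, and that is exactly the behaviour I would like to be able to verify.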
I am new to PyTorch and have been looking around, but have not found anything on this.
Thank you!