Hello, I’m using a variational autoencoder in a project and I wanted to ask for some opinions on a few issues:
- My inputs are the concatenation of two sparse vectors, each of which (independently) sums to one, so each of them lies on some $D$-dimensional simplex.
- I’m not sure which loss function would be suitable for reconstructing this kind of data. I’ve read that `torch.nn.CosineEmbeddingLoss()` could be a good choice, but so far I’m not seeing noticeable results (a minimal sketch of how I’m applying it is after this list).
- I’ve tried to eliminate the biases from all my linear layers.
- Since my input consists of two separate vectors, could it make sense to give the architecture a stack of separate encoders/decoders (two in this case), so that the two vectors are processed separately? (A rough sketch of what I mean is at the end of this post.)
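
To make the second point concrete, here is a minimal sketch of how I’m applying the loss right now; the sizes and tensors are placeholders standing in for my real inputs and decoder outputs:

```python
import torch
import torch.nn as nn

batch_size, D = 32, 100  # placeholder sizes

# Stand-ins for my real data: two vectors that each sum to one, concatenated.
x1 = torch.rand(batch_size, D).softmax(dim=-1)
x2 = torch.rand(batch_size, D).softmax(dim=-1)
x = torch.cat([x1, x2], dim=-1)

# Stand-in for the decoder output (in the real model this comes from the VAE).
recon = torch.rand(batch_size, 2 * D, requires_grad=True)

# target = 1 tells the loss to pull each reconstruction towards its input
# by maximising their cosine similarity.
criterion = nn.CosineEmbeddingLoss()
target = torch.ones(batch_size)
loss = criterion(recon, x, target)
loss.backward()
```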
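
And this is roughly the split architecture I had in mind for the last point. It’s just a sketch; the layer sizes, the single shared latent, and the softmax output heads are all assumptions on my part:

```python
import torch
import torch.nn as nn

class TwoBranchVAE(nn.Module):
    """Rough sketch: one encoder/decoder branch per simplex vector,
    with a single shared latent space."""

    def __init__(self, D, hidden=64, latent=16):
        super().__init__()
        # One encoder per input vector (biases removed, as in my current setup).
        self.enc1 = nn.Linear(D, hidden, bias=False)
        self.enc2 = nn.Linear(D, hidden, bias=False)
        self.to_mu = nn.Linear(2 * hidden, latent, bias=False)
        self.to_logvar = nn.Linear(2 * hidden, latent, bias=False)
        # One decoder per output vector.
        self.dec1 = nn.Linear(latent, D, bias=False)
        self.dec2 = nn.Linear(latent, D, bias=False)

    def forward(self, x):
        # Split the concatenated input back into its two halves.
        x1, x2 = x.chunk(2, dim=-1)
        h = torch.cat([torch.relu(self.enc1(x1)),
                       torch.relu(self.enc2(x2))], dim=-1)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        # Reparameterisation trick.
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()
        # Softmax keeps each reconstructed vector on its simplex
        # (again, just my assumption about the output head).
        return (torch.softmax(self.dec1(z), dim=-1),
                torch.softmax(self.dec2(z), dim=-1),
                mu, logvar)
```

The idea is that each vector gets its own path through the network while sharing one latent, so I could still concatenate the two reconstructions and compare against the original input with whatever reconstruction loss ends up working.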