Let’s say I have data of dimension N, and an autoencoder that takes this data as input after normalization. The general steps are:
1- Data lies in the original domain
2- Data is normalized
3- Data passes through the Autoencoder
4- Reconstructed data lies in the normalized domain
5- Reconstructed data is “denormalized” in order to lie in the original domain.
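To make the pipeline concrete, here is a minimal sketch of those five steps (toy dimensions and per-feature standardization are my own choices, just for illustration):

```python
import torch
import torch.nn as nn

# Toy autoencoder for N-dimensional inputs (N = 8 chosen arbitrarily).
N = 8
autoencoder = nn.Sequential(
    nn.Linear(N, 4), nn.ReLU(),  # encoder
    nn.Linear(4, N),             # decoder
)

x = torch.randn(16, N) * 5.0 + 3.0   # 1) data in the original domain
mu, sigma = x.mean(0), x.std(0)
x_norm = (x - mu) / sigma            # 2) normalized data
x_rec_norm = autoencoder(x_norm)     # 3-4) reconstruction, normalized domain
x_rec = x_rec_norm * sigma + mu      # 5) "denormalized" back to the original domain
```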
The thing is, I need to stay in the normalized domain during training, so the loss is computed in the normalized domain: the autoencoder learns to reconstruct the data with equal weight on all N values. But some of those N values are more important than others once mapped back to the original domain, so the autoencoder should learn to reconstruct the data with different weights.
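To show what I mean by “different weights”, here is a sketch of a per-feature weighted MSE (the weights here are fixed and hand-picked; learning them is exactly my question):

```python
import torch

def weighted_mse(x_rec_norm, x_norm, weights):
    # weights has shape (N,): one weight per feature, larger = more important
    return (weights * (x_rec_norm - x_norm) ** 2).mean()

N = 8                 # toy dimension
weights = torch.ones(N)
weights[0] = 10.0     # pretend feature 0 matters 10x more than the rest
loss = weighted_mse(torch.randn(4, N), torch.randn(4, N), weights)
```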
So my question is: how can I learn those weights?
Is it possible to make a “double” learning that would go this way:
- initialize two neural networks: an Autoencoder and another one with learnable parameters (the weights)
- train the Autoencoder. The loss is computed with weights from the other network
- when validating (validation is done in the original domain), use the reconstruction error in the original domain to adjust the weight parameters
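Something like the following is what I have in mind; this is only a sketch of the idea, and the weight-update rule at the end (pushing the weights toward the features with the largest original-domain error) is just one heuristic I made up, not something I know to be correct:

```python
import torch
import torch.nn as nn

N = 8
autoencoder = nn.Sequential(nn.Linear(N, 4), nn.ReLU(), nn.Linear(4, N))
log_w = nn.Parameter(torch.zeros(N))  # second set of learnable parameters: per-feature log-weights
opt_ae = torch.optim.Adam(autoencoder.parameters(), lr=1e-3)

# Assumed fixed normalization statistics for the sketch.
mu, sigma = torch.full((N,), 3.0), torch.full((N,), 5.0)
x = torch.randn(32, N) * sigma + mu
x_norm = (x - mu) / sigma

# Training step: weighted loss in the normalized domain
# (weights detached so this step only updates the autoencoder).
x_rec_norm = autoencoder(x_norm)
train_loss = (log_w.exp().detach() * (x_rec_norm - x_norm) ** 2).mean()
opt_ae.zero_grad()
train_loss.backward()
opt_ae.step()

# Validation step: adjust the weights from the original-domain error
# (heuristic: features reconstructed badly after denormalization get more weight).
with torch.no_grad():
    x_rec = autoencoder(x_norm) * sigma + mu
    per_feature_err = ((x_rec - x) ** 2).mean(0)           # shape (N,)
    log_w.copy_(per_feature_err.log() - per_feature_err.log().mean())
```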
Or is there a simpler way of doing this?
I can’t find examples of code doing this kind of thing. Does anyone have some resources?
Thanks a lot for reading