How do you train an encoder-decoder pair such that they are exclusive? Specifically, I would like 2 things:
1. When you train two pairs of encoder-decoder networks, you cannot mix the encoder and decoder from different pairs.
2. If you have a pretrained encoder, you cannot retrain a decoder from scratch, and likewise the other way round; i.e., you can only train them together end-to-end.
I’m not really sure what you’re looking for, so my reply might be a bit off.
Regarding your first point, I would argue this is the default situation. Let's assume you use the exact same encoder-decoder architecture to train two models on the exact same dataset. If both models are initialized randomly and trained with randomized batches, their losses will likely converge to different minima. Thus, using the encoder of Model 1 with the decoder of Model 2 should yield subpar results. Of course, nothing stops you from combining them anyway.
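For illustration, here is a minimal sketch (PyTorch assumed; the architecture, data, and hyperparameters are placeholders, not anything from your setup) of training two identical autoencoders independently and then evaluating a mixed pair:

```python
# Sketch: train two identical autoencoders separately, then swap components.
import torch
import torch.nn as nn

def make_pair(dim=32, latent=8):
    enc = nn.Sequential(nn.Linear(dim, 64), nn.ReLU(), nn.Linear(64, latent))
    dec = nn.Sequential(nn.Linear(latent, 64), nn.ReLU(), nn.Linear(64, dim))
    return enc, dec

def train(enc, dec, data, epochs=100):
    opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-3)
    for _ in range(epochs):
        opt.zero_grad()
        loss = nn.functional.mse_loss(dec(enc(data)), data)
        loss.backward()
        opt.step()

data = torch.randn(256, 32)          # stand-in dataset
enc1, dec1 = make_pair()
enc2, dec2 = make_pair()
train(enc1, dec1, data)
train(enc2, dec2, data)

with torch.no_grad():
    matched = nn.functional.mse_loss(dec1(enc1(data)), data)
    mixed = nn.functional.mse_loss(dec2(enc1(data)), data)   # encoder 1 + decoder 2
print(f"matched pair: {matched.item():.4f}   mixed pair: {mixed.item():.4f}")
```

The mixed reconstruction error is usually far worse than the matched one, simply because the two models settled on incompatible latent spaces.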
Regarding your second point, this seems to contradict the idea of pretraining and transfer learning. If you have a pretrained encoder-decoder model and then use the pretrained encoder to train a new decoder, I cannot see why the new decoder, at least in principle, would not converge to the pretrained decoder (i.e., end up with more or less the same weights). Of course, the new decoder might also turn out very different yet yield equally good results.
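Again just a sketch (PyTorch assumed, everything below is a placeholder): freeze the pretrained encoder and fit a brand-new decoder against its latent codes. With a plain reconstruction objective, the new decoder typically recovers performance comparable to the original one.

```python
# Sketch: retrain a decoder from scratch against a frozen, pretrained encoder.
import torch
import torch.nn as nn

# pretrained_enc stands in for an encoder taken from an earlier end-to-end run
pretrained_enc = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 8))
for p in pretrained_enc.parameters():
    p.requires_grad = False           # keep the encoder fixed

new_dec = nn.Sequential(nn.Linear(8, 64), nn.ReLU(), nn.Linear(64, 32))
opt = torch.optim.Adam(new_dec.parameters(), lr=1e-3)

data = torch.randn(256, 32)           # stand-in dataset
for _ in range(100):
    opt.zero_grad()
    with torch.no_grad():
        z = pretrained_enc(data)       # latent codes from the frozen encoder
    loss = nn.functional.mse_loss(new_dec(z), data)
    loss.backward()
    opt.step()
```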
So let’s imagine you’re trying to build NN encryption. This is not my use case, but let’s entertain it for now. (I don’t think a neural network could ever provide a cryptographically secure algorithm…) Let’s imagine the encoder is the encryption key and the decoder is the decryption key. Because you can retrain the decoder from scratch against a frozen encoder, and likewise the other way round, anyone who intercepts the encryption key can rebuild the decryption key, and vice versa. This is not great, so you have to treat them as a single key, and you end up with something closer to symmetric encryption like AES. What I want is something like asymmetric encryption with a public and private key: you cannot “retrain” a private key from a public key. So I want to train an encoder and decoder such that, once trained, if someone intercepts the encoder, there is no way to retrain the decoder, and ideally the same the other way round.
The only thing I could think of was to introduce non-differentiable layers, which you could somehow train end-to-end the first time round, but not easily the second time round.
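For concreteness, the usual trick for pushing gradients through a hard, non-differentiable operation is a straight-through estimator; a rough PyTorch sketch is below (the layer and architecture are just illustrations, not a worked-out scheme). I realise this alone doesn’t give the asymmetry I’m after, since anyone holding the encoder could apply the same surrogate gradient when retraining a decoder.

```python
# Sketch: a non-differentiable bottleneck trained end-to-end via a
# straight-through estimator. Forward applies a hard threshold; backward
# pretends the op was the identity so gradients still reach the encoder.
import torch
import torch.nn as nn

class BinarizeSTE(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        return (x > 0).float()         # hard threshold: not differentiable

    @staticmethod
    def backward(ctx, grad_output):
        return grad_output             # straight-through: pass gradients unchanged

class Binarize(nn.Module):
    def forward(self, x):
        return BinarizeSTE.apply(x)

# encoder ends in a non-differentiable bottleneck; decoder reads the binary code
encoder = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 16), Binarize())
decoder = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 32))

data = torch.randn(256, 32)
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)
loss = nn.functional.mse_loss(decoder(encoder(data)), data)
loss.backward()                        # gradients reach the encoder thanks to the STE
opt.step()
```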
Does this make sense?