I have a pair of models, A1 and B1, that I train on some data on date 1. How can I train a new pair, A2 and B2, on date 2, such that the combinations (A1, B2) and (A2, B1) are guaranteed to fail?

Currently, if I have some loss L(A,B) which I minimize, I update it as follows:

L → 2*L(A2,B2) - L(A1,B2) - L(A2,B1)

So when training A2 and B2, I load A1 and B1, freeze their weights, and ensure that their parameters are not added to the optimizer. So I have 4 models loaded in total, but only 2 are training.
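A minimal PyTorch sketch of that setup, assuming a placeholder `make_model` and a placeholder pairwise loss `L` (both hypothetical; substitute your actual architectures and loss):

```python
import torch
import torch.nn as nn

def make_model():
    # Hypothetical stand-in for your real architecture.
    return nn.Linear(8, 4)

def L(a_out, b_out):
    # Placeholder pairwise loss for illustration (MSE between outputs).
    return nn.functional.mse_loss(a_out, b_out)

# Frozen pair from date 1: no gradients, not passed to the optimizer.
A1, B1 = make_model(), make_model()
for m in (A1, B1):
    for p in m.parameters():
        p.requires_grad_(False)
    m.eval()

# Trainable pair for date 2; only these parameters go into the optimizer.
A2, B2 = make_model(), make_model()
opt = torch.optim.Adam(list(A2.parameters()) + list(B2.parameters()), lr=1e-3)

x = torch.randn(16, 8)  # dummy batch
opt.zero_grad()
# 2*L(A2,B2) - L(A1,B2) - L(A2,B1): minimize the new pair's loss while
# pushing the cross-pair combinations toward failure.
loss = 2 * L(A2(x), B2(x)) - L(A1(x), B2(x)) - L(A2(x), B1(x))
loss.backward()
opt.step()
```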

That seems to work during training, but not during evaluation: training accuracy is fine, but evaluation accuracy goes down the toilet. Is there a known way to do this?

For example, if you are training a new auto-encoder, how can you ensure that, given the same input, the code, i.e. the output of the encoder, is maximally different from the code produced by the first trained auto-encoder?
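For the auto-encoder case, one way to sketch this is to add a dissimilarity penalty between the new code and the frozen old code for the same input. The encoder/decoder modules and the cosine-similarity penalty with weight 0.1 below are illustrative assumptions, not the only choice:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical encoders/decoder; substitute your own architectures.
old_encoder = nn.Linear(8, 3)   # frozen encoder from the first auto-encoder
new_encoder = nn.Linear(8, 3)
decoder = nn.Linear(3, 8)

for p in old_encoder.parameters():
    p.requires_grad_(False)

opt = torch.optim.Adam(
    list(new_encoder.parameters()) + list(decoder.parameters()), lr=1e-3
)

x = torch.randn(16, 8)  # dummy batch
opt.zero_grad()
z_new = new_encoder(x)
z_old = old_encoder(x)
recon = F.mse_loss(decoder(z_new), x)
# Push the new code away from the old one for the same input: penalize
# cosine similarity between the two codes (0.1 is a tunable weight).
dissim = F.cosine_similarity(z_new, z_old, dim=-1).mean()
loss = recon + 0.1 * dissim
loss.backward()
opt.step()
```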

OK, the whole thing about training accuracy being “good” and evaluation accuracy being “bad” comes down to labels. It’s like training a binary classifier where the first half of every batch is label 0 and the second half is label 1: training accuracy goes up, but evaluation accuracy is almost always close to 0. The same kind of behavior happens here, because the loss L → 2*L(A2,B2) - L(A1,B2) - L(A2,B1) passes all “good” labels in one batch and all “bad” labels in another.

So the workaround is to randomize your batch, and therefore your labels: use another loss function where you minimize something like L = L1(randperm(cat(A1,A2)), B2) + L2(A2, randperm(cat(B1,B2))). That is, shuffle the batch when concatenating the outputs of A1 and A2, or of B1 and B2, and define L1 and L2 appropriately for your problem.
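The randomized-batch idea above can be sketched as follows. Here `a1`/`a2` and `b1`/`b2` stand in for the outputs of the four models on one batch, and `paired_loss` is a hypothetical per-sample score (1 = the pair should work, 0 = it should fail); define your own for your actual problem:

```python
import torch
import torch.nn.functional as F

# Hypothetical model outputs for one batch of 16 samples; in practice these
# come from the forward passes of A1, A2, B1, B2. Only the new pair trains.
a1, a2 = torch.randn(16, 4), torch.randn(16, 4, requires_grad=True)
b1, b2 = torch.randn(16, 4), torch.randn(16, 4, requires_grad=True)

def paired_loss(a_out, b_out, target):
    # Placeholder: a per-sample compatibility score pushed toward target.
    score = torch.sigmoid((a_out * b_out).sum(-1))
    return F.binary_cross_entropy(score, target)

# Concatenate old+new outputs, label them, then shuffle so each batch mixes
# "good" and "bad" examples instead of presenting them in separate halves.
a_cat = torch.cat([a1, a2])
a_tgt = torch.cat([torch.zeros(16), torch.ones(16)])  # old A should fail with B2
perm = torch.randperm(a_cat.size(0))
L1 = paired_loss(a_cat[perm], b2.repeat(2, 1), a_tgt[perm])

b_cat = torch.cat([b1, b2])
b_tgt = torch.cat([torch.zeros(16), torch.ones(16)])  # old B should fail with A2
perm = torch.randperm(b_cat.size(0))
L2 = paired_loss(a2.repeat(2, 1), b_cat[perm], b_tgt[perm])

loss = L1 + L2
loss.backward()
```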

So having done all of that, you can train (A2,B2) to be maximally different from (A1,B1), and likewise train (A3,B3) to be maximally different from (A2,B2). But if you only ever load 4 models, 2 frozen and 2 training, there is no guarantee that (A3,B3) is maximally different from (A1,B1): your models could in theory oscillate between two local minima. And there is a limit to how many models you can load at once. You could load all of (A1,B1), (A2,B2) and (A3,B3) and make (A3,B3) different from both previous pairs, but not everyone has 24GB of GPU memory.

So, …, it would be nice if there were an easy and computationally friendly way to guarantee different models every time you trained them. What is it?