I have three separate datasets, say D1, D2, and D3.
I want to train the same model architecture on all three. But since #samples(D1) > #samples(D2) > #samples(D3),
I want to train a model M1 on D1 first. After that training is complete, I want to use the trained weights to initialize a model M2 and train it on D2, and similarly for M3 and D3.
I am currently doing it like this:
Make model 1
Train model 1
Make model 2
Train model 2
Make model 3
Train model 3
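In code, the pipeline above looks roughly like this (a minimal sketch: `make_model`, the tiny `Linear` stand-in, the file paths, and the elided training loops are placeholders for my actual setup):

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for the real architecture.
def make_model():
    return nn.Linear(4, 2)

# Make and train model 1 on D1, then save its weights.
model1 = make_model()
# ... train model1 on D1 ...
torch.save(model1.state_dict(), "model1.pt")

# Make model 2 as a fresh module instance, initialized
# from model1's saved weights, then train it on D2.
model2 = make_model()
model2.load_state_dict(torch.load("model1.pt"))
# ... train model2 on D2 ...
torch.save(model2.state_dict(), "model2.pt")

# Same again for model 3, initialized from model2's weights.
model3 = make_model()
model3.load_state_dict(torch.load("model2.pt"))
# ... train model3 on D3 ...
```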
But this degrades my results for Model 1 and Model 2. Does model2.load_state_dict(torch.load(path1)) change model1's parameters? If so, how can I solve my problem?