I have a bash script that trains and tests several networks/models in sequence, like this:
python model_code.py 1
python model_code.py 2
...
python model_code.py 10
The parameter value (1, 2, …) selects a different type of CNN. I start with CNNs with fewer layers (e.g. ResNet-18) and progressively increase the depth. I have noticed that training gets much slower for the models invoked later in the script. In other words, if I run model 6 on its own (not after the 5 previous networks have run), it trains faster. Has anyone experienced this problem? Is this procedure (calling multiple networks in sequence from a script) unsuitable?
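For reference, a minimal sketch of the same sequence driven from Python instead of bash; the function name `run_all` is hypothetical, and `model_code.py` is the training script from the question. Like the bash version, each model variant trains in its own freshly started interpreter process, so memory held by one run is released when that process exits:

```python
import subprocess
import sys


def run_all(n_models, script="model_code.py"):
    """Run `python <script> <i>` for i = 1..n_models, one process at a time."""
    for i in range(1, n_models + 1):
        # Each iteration launches a separate interpreter process and waits
        # for it to finish before starting the next model.
        subprocess.run([sys.executable, script, str(i)], check=True)
```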