Train and test a model in parallel

As we know, in PyTorch, model.train() and model.eval() are needed when training or testing. Can we train and test a model in parallel? For example, suppose a training batch (batch_size=128) and a test batch (batch_size=16) are concatenated, giving a combined batch of 128+16 samples. The training loss is then computed on the first 128 samples, and the test loss on the last 16. Is this a good way to do NAS or AutoML?
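
To make the idea concrete, here is a minimal sketch of what I mean (all names here — the toy MLP, `train_x`, `test_x`, and so on — are placeholders of mine, not from any library):

```python
import torch
import torch.nn as nn

# Toy model with no BatchNorm/Dropout, so train/eval mode makes no difference here.
model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

# Stand-ins for one training batch (128 samples) and one test batch (16 samples).
train_x, train_y = torch.randn(128, 32), torch.randint(0, 10, (128,))
test_x, test_y = torch.randn(16, 32), torch.randint(0, 10, (16,))

model.train()  # note: train-mode BatchNorm/Dropout would also hit the 16 test samples
combined = torch.cat([train_x, test_x], dim=0)  # single batch of 128 + 16
logits = model(combined)                        # one forward pass for both parts

train_loss = criterion(logits[:128], train_y)   # backprop only through this part
with torch.no_grad():
    test_loss = criterion(logits[128:], test_y)  # monitored only, no gradient

optimizer.zero_grad()
train_loss.backward()
optimizer.step()
print(f"train_loss={train_loss.item():.4f}  test_loss={test_loss.item():.4f}")
```

My worry is that in a single forward pass like this, layers such as BatchNorm and Dropout stay in train mode for the test samples too, which is exactly what model.eval() is supposed to prevent.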