Two Deep Models on a Single GPU - Train/Predict

I am relatively new to DL. I am thinking of implementing a pipeline like the one below:

- predict using a pretrained model whose output is an image; the inputs are frames fetched from videos
- use the predicted output to train another model
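Roughly something like this. This is just a minimal sketch of the idea; the two `Conv2d` layers, the tensor shapes, and the random frames are placeholders for my actual models and video frames:

```python
import torch
import torch.nn as nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Placeholders -- in the real code the first model is a pretrained
# image-to-image network and the second is the model I want to train.
pretrained_model = nn.Conv2d(3, 3, kernel_size=3, padding=1).to(device).eval()
trainable_model = nn.Conv2d(3, 3, kernel_size=3, padding=1).to(device)

optimizer = torch.optim.Adam(trainable_model.parameters(), lr=1e-4)
criterion = nn.MSELoss()

for step in range(100):  # stand-in for looping over video frames
    frame = torch.rand(1, 3, 256, 256, device=device)  # stand-in frame (B, C, H, W)

    # 1) predict with the frozen pretrained model
    with torch.no_grad():
        target = pretrained_model(frame)

    # 2) use that prediction as the training target for the second model
    output = trainable_model(frame)
    loss = criterion(output, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```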

I have only one 8 GB GPU. I am wondering whether there is a way to do data fetching and training together on the GPU without a bottleneck. Is there a way to optimize them?

However, I tried saving the individual frames and also a tensor .pt file. Both seem to take more time for preprocessing and training. I am using PyTorch, and any suggestion would be greatly helpful. Thanks

I suggest getting the results from the first model first, then using those results to train the second model. That way the pretrained model does not compete with the training loop for your 8 GB of GPU memory, and you pay its inference cost only once instead of on every epoch.
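A minimal sketch of that two-phase approach, assuming your setup; the models, tensor shapes, batch sizes, and the `cached_pairs.pt` file name are placeholder assumptions, not your actual code:

```python
import torch
import torch.nn as nn
from torch.utils.data import Dataset, DataLoader

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# --- Phase 1: run the pretrained model once and cache its outputs ---
pretrained = nn.Conv2d(3, 3, kernel_size=3, padding=1).to(device).eval()  # placeholder

frames = torch.rand(32, 3, 128, 128)  # placeholder for your extracted video frames
with torch.no_grad():
    targets = torch.cat([
        pretrained(frames[i:i + 8].to(device)).cpu()  # small batches to fit in 8 GB
        for i in range(0, len(frames), 8)
    ])
torch.save({"frames": frames, "targets": targets}, "cached_pairs.pt")

# --- Phase 2: train the second model on the cached (frame, target) pairs ---
class CachedPairs(Dataset):
    def __init__(self, path):
        data = torch.load(path)
        self.frames, self.targets = data["frames"], data["targets"]

    def __len__(self):
        return len(self.frames)

    def __getitem__(self, idx):
        return self.frames[idx], self.targets[idx]

# num_workers > 0 and pin_memory=True let the CPU prepare the next batch
# while the GPU is busy, which addresses the fetching bottleneck you mention.
loader = DataLoader(CachedPairs("cached_pairs.pt"), batch_size=8,
                    shuffle=True, num_workers=2, pin_memory=True)

model = nn.Conv2d(3, 3, kernel_size=3, padding=1).to(device)  # placeholder second model
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.MSELoss()

for frame, target in loader:
    frame, target = frame.to(device), target.to(device)
    loss = criterion(model(frame), target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

Since the second model never needs gradients through the first one, caching the predictions to disk also means each epoch only loads tensors instead of re-running inference.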
