I have a multi-output model which returns 3 outputs on the forward pass:
```python
for data in train_loader:
    input = Variable(data.type(torch.cuda.FloatTensor))
    out_a, out_b, out_c = multi_output_model(input)
```
I’d like to do some post-processing of the outputs, but because of limited GPU memory I designed the pipeline so that only the model inference happens on the GPU, with all other processing on the CPU.
Is there a way to cast all of the model’s outputs to the CPU at once (or vice versa)? I know you can cast a single output directly after the forward pass, such as:
```python
device = torch.device('cpu')
out = model(input).to(device)
### OR ###
out = model(input).cpu()
```
But this only works for a single-output model. I hope to find a way to do the same for a multi-output model.
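For context, here is a minimal sketch of what I mean. The model below (`MultiOutputModel`, a made-up example for illustration) returns a plain tuple, so the only way I currently know to move everything is to call `.cpu()` on each element individually, e.g. with a generator expression:

```python
import torch
import torch.nn as nn

# Hypothetical multi-output model, just for illustration
class MultiOutputModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.shared = nn.Linear(8, 16)
        self.head_a = nn.Linear(16, 4)
        self.head_b = nn.Linear(16, 4)
        self.head_c = nn.Linear(16, 4)

    def forward(self, x):
        h = self.shared(x)
        # forward returns a tuple of three tensors
        return self.head_a(h), self.head_b(h), self.head_c(h)

model = MultiOutputModel()
x = torch.randn(2, 8)

# Move every element of the output tuple to the CPU one by one
out_a, out_b, out_c = (o.cpu() for o in model(x))
```

Is there a cleaner way than casting each output tensor separately like this?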