Set torch.device for all outputs of a multi-output model at once


I have a multi-output model which returns 3 outputs when doing a forward pass

for data in train_loader:
    # Variable is deprecated; .to() replaces the torch.cuda.FloatTensor cast
    input = data.to('cuda', dtype=torch.float)
    out_a, out_b, out_c = multi_output_model(input)

I’d like to do some post-processing of the outputs, but because the GPU has limited memory, I want to design the pipeline so that only model inference happens on the GPU and all other processing runs on the CPU.

Is there a way to move all of the model’s outputs to the CPU at once (or vice versa)? I know you can move a single output right after the forward pass, e.g.

device = torch.device('cpu')
out = model(input).to(device)
### OR ###
out = model(input).cpu()

But this only works for a single-output model. I hope to find a way to do the same for a multi-output model.

You would have to call cpu() or to('cpu') on each tensor separately, either inside the model’s forward method or outside of the model, since you cannot call these operations on a tuple.
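A minimal sketch of the per-tensor approach, using a generator expression over the returned tuple (the MultiOutputModel below is a toy stand-in, not the original poster's model):

import torch
import torch.nn as nn

class MultiOutputModel(nn.Module):
    """Toy stand-in for a model whose forward returns three outputs."""
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(4, 8)

    def forward(self, x):
        h = self.fc(x)
        return h, h * 2, h.sum(dim=1)  # three outputs as a tuple

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model = MultiOutputModel().to(device)

x = torch.randn(2, 4, device=device)
outputs = model(x)  # a tuple of tensors; .cpu() cannot be called on it directly

# Apply .cpu() to each tensor; a generator expression keeps it compact
out_a, out_b, out_c = (t.cpu() for t in outputs)

The same pattern works in reverse with (t.to(device) for t in outputs) when moving tensors back to the GPU.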

I see, thank you very much for the suggestion