Automatic handling of data copying to/from GPU?

Once I specify that my model should run on the GPU (i.e. model.cuda()), I also have to move the inputs and targets to CUDA with .cuda() and bring the results back with .cpu(). Is there a way to just tell PyTorch to copy the data to the GPU when required, and bring the results back to main memory when needed?
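For reference, a minimal sketch of the manual copying described above. The model and tensor shapes are made up for illustration, and the transfer is guarded so the snippet also runs on a machine without a GPU:

```python
import torch
import torch.nn as nn

# Hypothetical toy model; any nn.Module behaves the same way.
model = nn.Linear(10, 2)
inputs = torch.randn(4, 10)

if torch.cuda.is_available():
    model.cuda()            # move parameters to the GPU
    inputs = inputs.cuda()  # copy inputs host -> device

outputs = model(inputs)     # runs wherever model and inputs live
result = outputs.cpu()      # bring results back to main memory
```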


There is no such built-in mechanism; tensors have to be moved between host and device explicitly.
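The usual workaround is to pick a device once and move everything with .to(device), so the same code runs unchanged on CPU and GPU. A sketch (the model, shapes, and variable names here are just for illustration):

```python
import torch
import torch.nn as nn

# Pick the device once; falls back to CPU when no GPU is present.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = nn.Linear(10, 2).to(device)       # toy model for illustration
inputs = torch.randn(4, 10).to(device)    # copied to the chosen device
targets = torch.randint(0, 2, (4,)).to(device)

outputs = model(inputs)                   # computed on `device`
loss = nn.functional.cross_entropy(outputs, targets)
predictions = outputs.argmax(dim=1).cpu() # back to main memory when needed
```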
