"hybrid" mode of pytorch for cuda

With the CUDA build of PyTorch, if the model and optimizer are moved to the GPU via `.to("cuda")`, the training runs entirely on the GPU and the CPU is left mostly unused.

It would be nice to have a "hybrid" option that uses the GPU first, but also utilizes the CPU as much as possible whenever it is free.

I am wondering whether this is possible.

You could manually push data to the CPU and let it perform some operations. Note, however, that the data transfers can be expensive and would most likely reduce the performance of your model. Also, the CPU still executes the actual Python code and launches the GPU kernels, so it's not completely idle.
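As a minimal sketch of this manual approach, the hypothetical model below splits its layers between devices: one stage runs on the GPU (when available) and another on the CPU, with an explicit `.to()` transfer in between. The `HybridModel` class, its stages, and the tensor sizes are all made up for illustration; the expensive step is the device-to-device copy mentioned above.

```python
import torch
import torch.nn as nn

class HybridModel(nn.Module):
    """Hypothetical model split across devices: stage1 on the GPU
    (if available), stage2 on the CPU."""

    def __init__(self, gpu_device):
        super().__init__()
        self.gpu_device = gpu_device
        self.stage1 = nn.Linear(64, 32).to(gpu_device)  # runs on the GPU
        self.stage2 = nn.Linear(32, 10)                 # stays on the CPU

    def forward(self, x):
        x = self.stage1(x.to(self.gpu_device))  # compute on the GPU
        x = x.to("cpu")                         # transfer: the costly step
        return self.stage2(x)                   # compute on the CPU

device = "cuda" if torch.cuda.is_available() else "cpu"
model = HybridModel(device)
out = model(torch.randn(8, 64))
print(out.shape)  # torch.Size([8, 10])
```

Whether this helps at all depends on the model: unless the CPU stage does substantial work, the transfer overhead usually outweighs any gain from keeping the CPU busy.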