With the CUDA build of PyTorch, if the model and optimizer are moved to the GPU with `.to("cuda")`, the entire training process runs on the GPU and the CPU sits idle.
It would be nice to have a "hybrid" option that primarily uses the GPU, but also utilizes the CPU as much as possible whenever it is free.
I am wondering if this is possible?
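As far as I know there is no built-in automatic hybrid mode, but PyTorch does let you place individual submodules on different devices by hand and move activations between them in `forward`. Below is a minimal sketch of that manual split; the module names (`body`, `head`) and layer sizes are made up for illustration, and it falls back to CPU-only when CUDA is unavailable:

```python
import torch
import torch.nn as nn

class HybridNet(nn.Module):
    """Toy model with submodules pinned to different devices."""
    def __init__(self):
        super().__init__()
        self.dev_gpu = torch.device("cuda" if torch.cuda.is_available() else "cpu")
        self.dev_cpu = torch.device("cpu")
        # Put the heavy part on the GPU, a lighter head on the CPU
        self.body = nn.Sequential(nn.Linear(128, 256), nn.ReLU()).to(self.dev_gpu)
        self.head = nn.Linear(256, 10).to(self.dev_cpu)

    def forward(self, x):
        x = self.body(x.to(self.dev_gpu))
        # Transfer activations to the CPU for the final layer
        return self.head(x.to(self.dev_cpu))

model = HybridNet()
out = model(torch.randn(4, 128))
print(out.shape)  # torch.Size([4, 10])
```

Note that this is static model parallelism rather than the dynamic load balancing asked about: the device transfers are synchronous, so unless the CPU-side work overlaps with GPU work (e.g. via pipelining), the copies can easily cost more than they save.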