Using both CPU and GPU during training to avoid CUDA out of memory

Is it possible to use both the CPU and the GPU during training to avoid the error below?

CUDA error: out of memory

I have 32 GB of CPU RAM and 8 GB of GPU RAM.
When I train a smaller network with batch size = 4, it works fine. But I cannot increase the batch size, because I run into CUDA out of memory. In that case training uses only about 20% of the CPU and all of the GPU memory.
Is it possible to keep the data on the CPU and the model on the GPU? If so, how?
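Keeping the dataset in CPU RAM and only the model (plus the current batch) on the GPU is actually the standard PyTorch pattern. A minimal sketch, with a dummy dataset and model for illustration:

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# The dataset lives in CPU RAM; only one batch at a time is copied to the GPU.
data = TensorDataset(torch.randn(64, 10), torch.randint(0, 2, (64,)))
loader = DataLoader(data, batch_size=4, pin_memory=True)

model = nn.Linear(10, 2).to(device)          # model parameters on the GPU
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

for x, y in loader:
    x, y = x.to(device), y.to(device)        # copy just this batch to the GPU
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()
```

Note this does not reduce GPU memory use by itself: the model, activations, and gradients still live on the GPU, and those are usually what causes the OOM, not the dataset.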

Training on a CPU would be very slow for deep learning workloads. You might want to try other approaches instead, such as mixed-precision training.
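Mixed precision runs most operations in float16 instead of float32, roughly halving the memory needed for activations, which often lets you use a larger batch size. A hedged sketch using PyTorch's automatic mixed precision (AMP); the model and data here are placeholders, and it falls back to plain float32 when no CUDA device is present:

```python
import torch
from torch import nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
use_amp = device.type == "cuda"                  # AMP needs a CUDA device

model = nn.Linear(10, 2).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()
scaler = torch.cuda.amp.GradScaler(enabled=use_amp)

x = torch.randn(8, 10, device=device)
y = torch.randint(0, 2, (8,), device=device)

optimizer.zero_grad()
with torch.cuda.amp.autocast(enabled=use_amp):   # ops run in float16 where safe
    loss = loss_fn(model(x), y)
scaler.scale(loss).backward()                    # scale loss to avoid fp16 underflow
scaler.step(optimizer)
scaler.update()
```

The `GradScaler` exists because float16 gradients can underflow to zero; scaling the loss keeps them representable, and the scaler unscales them before the optimizer step.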

I mean the model trains on the GPU and the loss is also computed on the GPU, but reading the data and building the batches happen on the CPU.
I don’t know about mixed-precision training. What is it used for?
I just want to fix the CUDA out of memory error so I can train with a batch size larger than 4!
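If the goal is only a larger *effective* batch size, gradient accumulation is a common workaround that needs no extra GPU memory: run several small batches, let the gradients add up, and take one optimizer step. A sketch with illustrative sizes (batch 4 accumulated 4 times ≈ batch 16):

```python
import torch
from torch import nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
accum_steps = 4                               # effective batch size = 4 * 4 = 16

model = nn.Linear(10, 2).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

optimizer.zero_grad()
for step in range(accum_steps):
    x = torch.randn(4, 10, device=device)     # each small batch still fits in memory
    y = torch.randint(0, 2, (4,), device=device)
    loss = loss_fn(model(x), y) / accum_steps # average over the accumulated batches
    loss.backward()                           # gradients accumulate across batches
optimizer.step()                              # one update for the whole large batch
optimizer.zero_grad()
```

This trades a little speed (more forward/backward passes per update) for memory, and it combines well with mixed precision.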