Using main RAM instead of VRAM


How can one use the machine's main RAM instead of the video card's (GPU) RAM?

Any decrease in speed is not a problem.

Is it a simple code change in PyTorch (or another similar framework), or is it simply not supported?

Thank you

If you don’t push your model parameters and data to the GPU, all operations will be executed on the CPU by default and will use the RAM.
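For example, a minimal sketch: a model and tensors created without an explicit `.to("cuda")` (or `.cuda()`) call are allocated in host RAM, and the forward pass runs on the CPU:

```python
import torch
import torch.nn as nn

# Parameters and data created without .to("cuda") live in main RAM,
# and all operations execute on the CPU by default.
model = nn.Linear(10, 2)   # parameters allocated in host RAM
x = torch.randn(4, 10)     # input tensor in host RAM
out = model(x)             # forward pass runs on the CPU

print(out.device)          # -> cpu
```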

Thank you. Is there any way to run the model on the GPU, but store the data in main memory instead of VRAM?

Since the calculations should be performed on the GPU, you would have to push the parameters and data to the device at some point.
Are you running out of memory on the GPU?
If so, you could try to lower your batch size or use torch.utils.checkpoint to trade compute for memory.
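A minimal sketch of the checkpointing idea, using a hypothetical two-stage model: `torch.utils.checkpoint.checkpoint` avoids storing the first stage's intermediate activations during the forward pass and recomputes them during the backward pass, trading extra compute for lower memory use (the test below runs on the CPU; the same code applies once everything is moved to the GPU):

```python
import torch
import torch.nn as nn
from torch.utils.checkpoint import checkpoint

# Hypothetical two-stage model for illustration.
stage1 = nn.Sequential(nn.Linear(128, 128), nn.ReLU())
stage2 = nn.Linear(128, 10)

x = torch.randn(16, 128, requires_grad=True)

# Activations inside stage1 are not stored; they are recomputed
# during backward, which reduces peak memory usage.
h = checkpoint(stage1, x, use_reentrant=False)
loss = stage2(h).sum()
loss.backward()            # stage1 forward is recomputed here

print(x.grad.shape)        # gradients still flow through the checkpoint
```

Lowering the batch size is the simpler first step; checkpointing helps when the activations themselves, rather than the batch, dominate memory.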