Use RAM as extra memory for training with GPU

I’ve seen that it’s possible to train with a hybrid GPU/CPU setup, but in the end the CPU becomes the bottleneck.
I would like to know whether it’s possible to use standard system RAM as additional memory available to the GPU. The main idea, of course, would be to do all the computation on the GPU.

If you are able to write your own autograd.Function, you can store things on the CPU:

class StoreThingsOnCPU(torch.autograd.Function):
    @staticmethod
    def forward(ctx, inp):
        # ...some calculation here...
        res = inp
        # park the saved tensor in CPU RAM instead of GPU memory
        ctx.save_for_backward(inp.to("cpu"))
        ctx._device = inp.device
        return res

    @staticmethod
    def backward(ctx, grad_out):
        inp_cpu, = ctx.saved_tensors
        # bring the saved tensor back to the original device
        inp = inp_cpu.to(ctx._device)
        # ...the backward computation using inp here...
        grad_in = grad_out
        return grad_in
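To show the pattern end to end, here is a self-contained sketch you can run as-is. The placeholder computation is the identity (so the placeholder gradient is just grad_out); the point is only that the saved tensor lives in CPU RAM between forward and backward, and that the Function is invoked through .apply:

```python
import torch


class StoreThingsOnCPU(torch.autograd.Function):
    @staticmethod
    def forward(ctx, inp):
        # placeholder computation: identity
        res = inp
        # park the saved tensor in CPU RAM instead of GPU memory
        ctx.save_for_backward(inp.to("cpu"))
        ctx._device = inp.device
        return res

    @staticmethod
    def backward(ctx, grad_out):
        inp_cpu, = ctx.saved_tensors
        # bring the saved tensor back to the original device
        inp = inp_cpu.to(ctx._device)
        # identity forward, so the gradient is passed through unchanged
        grad_in = grad_out
        return grad_in


x = torch.randn(4, requires_grad=True)
y = StoreThingsOnCPU.apply(x)
y.sum().backward()
```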

I don’t think there is a programmatic way to access the backward of arbitrary built-in functions, though. At worst - and at the expense of a lot of time - you could re-do the forward on inp.detach().requires_grad_(), do a backward, and return inp.grad.
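That recompute-in-backward fallback could look roughly like this - a sketch, using torch.tanh as a stand-in for an arbitrary built-in op. Only a CPU copy of the input is kept; in backward, the forward is redone under torch.enable_grad (gradients are disabled inside backward by default) and inp.grad is returned:

```python
import torch


class OffloadTanh(torch.autograd.Function):
    @staticmethod
    def forward(ctx, inp):
        # keep only a CPU copy of the input
        ctx.save_for_backward(inp.to("cpu"))
        ctx._device = inp.device
        return torch.tanh(inp)

    @staticmethod
    def backward(ctx, grad_out):
        inp_cpu, = ctx.saved_tensors
        # re-do the forward on a detached leaf that requires grad
        inp = inp_cpu.to(ctx._device).detach().requires_grad_()
        with torch.enable_grad():
            out = torch.tanh(inp)
        # backpropagate the incoming gradient through the recomputed op
        out.backward(grad_out)
        return inp.grad


x = torch.randn(3, requires_grad=True)
y = OffloadTanh.apply(x)
y.sum().backward()
```

The gradient matches the analytic one for tanh, 1 - tanh(x)^2, at the cost of recomputing the forward during the backward pass.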

Best regards