Use RAM as extra memory for training with GPU

Hi,
I’ve seen it’s possible to train with a hybrid GPU-CPU system, but in the end the CPU becomes the bottleneck.
I would like to know whether it’s possible to use the standard RAM as additional memory that the GPU can draw on. Of course, the main idea would be to compute everything on the GPU.

If you are able to write your own torch.autograd.Function, you can store things on the CPU:

import torch

class StorethingsOnCPU(torch.autograd.Function):
    @staticmethod
    def forward(ctx, inp):
        # ...some calculation here...
        res = inp
        # Store the activation in host RAM instead of GPU memory.
        ctx.save_for_backward(inp.cpu())
        ctx._device = inp.device
        return res

    @staticmethod
    def backward(ctx, grad_out):
        inp_cpu, = ctx.saved_tensors
        # Move the saved tensor back to the original device for the backward.
        inp = inp_cpu.to(ctx._device)
        # ...do the backward here...
        grad_in = grad_out
        return grad_in
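
For what it’s worth, a minimal usage sketch could look like this (the shape and the .sum() loss are just placeholders, and it assumes a CUDA device is available):

x = torch.randn(4, 4, device="cuda", requires_grad=True)
y = StorethingsOnCPU.apply(x)  # custom Functions are invoked via .apply
y.sum().backward()             # backward pulls the saved copy back from CPU RAM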

I don’t think there is a programmatic way to access the backward of arbitrary built-in functions, though. At worst, and at the expense of a lot of time, you could re-do the forward on inp.detach().requires_grad_(), do a backward, and return inp.grad; a sketch of that is below.
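
Here is a rough sketch of that fallback, using torch.tanh as a hypothetical stand-in for the built-in op whose backward you can’t reach directly:

class TanhOnCPU(torch.autograd.Function):
    @staticmethod
    def forward(ctx, inp):
        # Save only a CPU copy of the input.
        ctx.save_for_backward(inp.cpu())
        ctx._device = inp.device
        return torch.tanh(inp)

    @staticmethod
    def backward(ctx, grad_out):
        inp_cpu, = ctx.saved_tensors
        # Re-do the forward on the original device with grad enabled...
        inp = inp_cpu.to(ctx._device).detach().requires_grad_()
        with torch.enable_grad():
            res = torch.tanh(inp)
        # ...then let autograd compute the input gradient.
        grad_in, = torch.autograd.grad(res, inp, grad_out)
        return grad_in

The extra forward pass inside backward is where the time cost mentioned above comes from.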

Best regards

Thomas