How to deal with a GPU out-of-memory problem by running a model across different GPUs

I have 4 GPUs and I use torch.nn.DataParallel with a batch size of 4, meaning that each GPU gets a batch size of 1.
I want to make my model bigger, but then it will no longer fit on a single GPU, so I was thinking of going down to a batch size of 2.
However, that does not sound like it would help, because then only 2 of the GPUs would be used.
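
For reference, this is roughly my current setup (the Linear layer is just a placeholder standing in for my real model):

```python
import torch
import torch.nn as nn

# Placeholder model; my real model is much larger
model = nn.Linear(1024, 10).to('cuda:0')
model = nn.DataParallel(model, device_ids=[0, 1, 2, 3])

batch = torch.randn(4, 1024, device='cuda:0')  # batch size 4
out = model(batch)  # DataParallel scatters 1 sample to each of the 4 GPUs
```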

Is there a way to reduce my batch size from 4 to 2, split my model, and run part of it on one GPU and part of it on another?
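
Something like the following is what I have in mind (the layers and sizes are made up, just to sketch splitting one model across cuda:0 and cuda:1):

```python
import torch
import torch.nn as nn

class SplitModel(nn.Module):
    """Toy model split across two GPUs (model parallelism)."""
    def __init__(self):
        super().__init__()
        # First half of the network lives on GPU 0
        self.part1 = nn.Sequential(
            nn.Linear(1024, 4096),
            nn.ReLU(),
        ).to('cuda:0')
        # Second half lives on GPU 1
        self.part2 = nn.Linear(4096, 10).to('cuda:1')

    def forward(self, x):
        x = self.part1(x.to('cuda:0'))
        # Move the intermediate activation to the second GPU
        x = self.part2(x.to('cuda:1'))
        return x

model = SplitModel()
batch = torch.randn(2, 1024)  # batch size 2, as in the question
out = model(batch)            # output ends up on cuda:1
```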

I should have searched first.
This looks like it helps: link