Parallel Computation of Models in PyTorch

I am quite new to PyTorch, and I have searched the discussions but could not find anything about this issue. I have 4 models that require around 6 GB of GPU memory, but we only have a limited number of GPUs. So I want to run all 4 models in parallel and I wonder if that is possible. I have thought about wrapping all the modules inside a new module, but I don't know whether the computations would run in parallel or sequentially:

import torch.nn as nn

class combinedModel(nn.Module):
    def __init__(self):
        super().__init__()

        # Model1, Model2, and Model3 are my existing models, defined elsewhere
        self.model1 = Model1()
        self.model2 = Model2()
        self.model3 = Model3()

    def forward(self, x):
        # every sub-model receives the same input; the calls are issued one after another
        out1 = self.model1(x)
        out2 = self.model2(x)
        out3 = self.model3(x)
        return out1, out2, out3
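
If the forward above just runs the three calls one after another, I was also wondering whether launching each sub-model on its own CUDA stream would let them overlap on a single GPU. This is only a rough sketch of what I mean (assuming the same Model1, Model2, and Model3 as above; I have not verified that it actually gives concurrent execution):

import torch
import torch.nn as nn

class CombinedModelStreams(nn.Module):
    def __init__(self):
        super().__init__()
        self.model1 = Model1()
        self.model2 = Model2()
        self.model3 = Model3()
        # one CUDA stream per sub-model, hoping their kernels can overlap
        self.streams = [torch.cuda.Stream() for _ in range(3)]

    def forward(self, x):
        outs = []
        torch.cuda.synchronize()  # make sure x is ready before the side streams read it
        for model, stream in zip((self.model1, self.model2, self.model3), self.streams):
            with torch.cuda.stream(stream):
                outs.append(model(x))
        torch.cuda.synchronize()  # wait for all streams before using the outputs
        return tuple(outs)

Is something like this the right direction, or would the kernels still end up executing sequentially?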

The models don't share any parameters, and I am going to optimize each of them separately. How can I make better use of the memory of a single GPU instead of placing each model on a separate GPU?
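
For context, this is roughly how I intend to train them, each with its own optimizer (just a sketch; loader and criterion here are placeholders for my actual data loader and loss functions):

import torch

device = torch.device("cuda")
combined = combinedModel().to(device)

# one optimizer per sub-model, since no parameters are shared between them
optimizers = [
    torch.optim.Adam(combined.model1.parameters(), lr=1e-3),
    torch.optim.Adam(combined.model2.parameters(), lr=1e-3),
    torch.optim.Adam(combined.model3.parameters(), lr=1e-3),
]

for x, targets in loader:  # placeholder loader yielding an input and one target per model
    x = x.to(device)
    outs = combined(x)
    for out, target, opt in zip(outs, targets, optimizers):
        loss = criterion(out, target.to(device))  # placeholder loss function
        opt.zero_grad()
        loss.backward()
        opt.step()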