How to test the model on multiple GPUs? Not training, just testing on a single image

I have trained a model, and I'd like to test it on some datasets.
Maybe the model is too large (40 layers with many parameters), or maybe the test image is too large, or both.
Unfortunately, I cannot test the model on some images, since a single GPU does not have enough memory (11 GB).
But I have 4 GPUs, so I'm hopeful there is some way to test the model across multiple GPUs. Thanks a lot!

Did the model fit into your GPU during training?
If so, it's strange that it doesn't fit anymore during evaluation.

To save some memory you could set your Variables to volatile=True.
This way the intermediate results of the forward pass, which are only needed for the backward pass (weight updates), won't be stored anymore.
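As a side note, in PyTorch 0.4 and later `volatile=True` was replaced by the `torch.no_grad()` context manager, which has the same memory-saving effect. A minimal sketch (the small `nn.Sequential` model here is a stand-in for your own network):

```python
import torch
import torch.nn as nn

# Stand-in model; your trained network goes here.
model = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU())
model.eval()  # disable dropout / batchnorm updates for testing

x = torch.randn(1, 3, 64, 64)

# Inside no_grad, no computation graph is built, so the
# intermediate activations needed for backward are not kept.
with torch.no_grad():
    out = model(x)

print(out.requires_grad)  # False: nothing was recorded for backward
```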

Anyway, if this does not solve your problem, you could shard your model across multiple GPUs like this:

class MyModel(nn.Module):
    def __init__(self, split_gpus):
        super().__init__()
        self.large_submodule1 = ...
        self.large_submodule2 = ...

        self.split_gpus = split_gpus
        if split_gpus:
            self.large_submodule1.cuda(0)
            self.large_submodule2.cuda(1)

    def forward(self, x):
        x = self.large_submodule1(x)
        if self.split_gpus:
            x = x.cuda(1)  # P2P GPU transfer
        return self.large_submodule2(x)
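Filled in with concrete layers, a self-contained sketch might look like this. The two small `nn.Sequential` submodules and the device-count guard are assumptions for the example; substitute your own large blocks:

```python
import torch
import torch.nn as nn

class MyModel(nn.Module):
    def __init__(self, split_gpus):
        super().__init__()
        # Stand-in submodules; replace with your own large halves.
        self.large_submodule1 = nn.Sequential(
            nn.Conv2d(3, 8, 3, padding=1), nn.ReLU())
        self.large_submodule2 = nn.Sequential(
            nn.Conv2d(8, 1, 3, padding=1))

        self.split_gpus = split_gpus
        if split_gpus:
            # Each half lives on its own device.
            self.large_submodule1.cuda(0)
            self.large_submodule2.cuda(1)

    def forward(self, x):
        x = self.large_submodule1(x)
        if self.split_gpus:
            x = x.cuda(1)  # move activations to the second GPU
        return self.large_submodule2(x)

# Only shard if at least two GPUs are actually available.
split = torch.cuda.device_count() >= 2
model = MyModel(split_gpus=split).eval()

inp = torch.randn(1, 3, 32, 32)
if split:
    inp = inp.cuda(0)

with torch.no_grad():
    out = model(inp)
print(out.shape)  # torch.Size([1, 1, 32, 32])
```

Note that the input must start on the same device as the first submodule, and the transfer between GPUs happens once per forward pass on the activations, not the weights.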

Yes! As @ptrblck said, make sure you are not tracking unnecessary variables for backward. I assume you are testing on larger images/data than you trained on. As long as each single slice of your network's output fits on one GPU, this is certainly doable.
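The slicing idea above can be sketched as tiled inference: run the network on one tile of the image at a time and stitch the outputs back together. This is not code from this thread, just a common pattern, and it assumes a fully convolutional model whose output has the same spatial size as its input:

```python
import torch
import torch.nn as nn

# Fully convolutional stand-in model (same spatial size in and out).
model = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                      nn.Conv2d(8, 3, 3, padding=1))
model.eval()

def tiled_forward(model, image, tile=64):
    """Run the model tile by tile so only one tile is in memory at a time."""
    _, _, h, w = image.shape
    out = torch.zeros_like(image)
    with torch.no_grad():
        for top in range(0, h, tile):
            for left in range(0, w, tile):
                patch = image[:, :, top:top + tile, left:left + tile]
                out[:, :, top:top + tile, left:left + tile] = model(patch)
    return out

img = torch.randn(1, 3, 128, 128)
res = tiled_forward(model, img)
print(res.shape)  # torch.Size([1, 3, 128, 128])
```

One caveat: results near tile borders will differ slightly from a full-image pass because of padding at the tile edges; overlapping the tiles and keeping only each tile's center mitigates this.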


Thanks for your kind and quick help!
It worked!

Thanks again!