CUDA out of memory for StyleGAN pretrained generator

Hi,

I am trying to use the StyleGAN pretrained generator for inference from the psp.py model here: GitHub - eladrich/pixel2style2pixel: Official Implementation for "Encoding in Style: a StyleGAN Encoder for Image-to-Image Translation" (CVPR 2021) presenting the pixel2style2pixel (pSp) framework. The setup is as follows:

  1. psp.py in the models folder has two modules - an encoder and a decoder. The decoder is a pretrained StyleGAN generator.

  2. I initialize the model as

opts = Namespace(**opts)
net = pSp(opts, ngpu)

where ngpu = 2 in my code.

  3. I then perform:
decoder = net.decoder.to(device)

if (device.type == 'cuda') and (ngpu > 1):
    decoder = nn.DataParallel(decoder, list(range(ngpu)))
  4. I put the decoder in eval mode (the full setup is consolidated in the sketch below).
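
Putting those steps together, the relevant part of my script looks roughly like this (simplified; the import path and the opts dict come from my setup):

import torch
import torch.nn as nn
from argparse import Namespace
from models.psp import pSp  # psp.py from the pixel2style2pixel repo

ngpu = 2
device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')

opts = Namespace(**opts)  # opts dict loaded from the checkpoint
net = pSp(opts, ngpu)

# use only the pretrained StyleGAN generator
decoder = net.decoder.to(device)
if (device.type == 'cuda') and (ngpu > 1):
    decoder = nn.DataParallel(decoder, list(range(ngpu)))
decoder.eval()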

While actually running inference, though, I keep getting CUDA out-of-memory errors that look like this:

RuntimeError: CUDA out of memory. Tried to allocate 2.25 GiB (GPU 0; 7.79 GiB total capacity; 
3.57 GiB already allocated; 1.52 GiB free; 4.67 GiB reserved in total by PyTorch) If 
reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation.  
See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
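
As far as I understand, the max_split_size_mb hint from the message would be set before the first CUDA allocation, e.g. like this (128 is just an example value), but it doesn't explain why GPU 1 stays unused:

import os
# must run before the first CUDA allocation in the process
os.environ['PYTORCH_CUDA_ALLOC_CONF'] = 'max_split_size_mb:128'
import torch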

What I understand from this error is that the decoder is being loaded onto GPU 0 and running out of memory during inference.

Is there a workaround for this? I'm not sure why my code is not using GPU 1 as well.

Note: When I run inference on the whole psp.py model and not just the decoder, I don't have any CUDA memory errors. I'm not sure why the whole model works but the decoder on its own doesn't.

nn.DataParallel will add a memory overhead on the default device, as described in e.g. this blog post. The recommended approach is thus to use DistributedDataParallel with a single process per GPU.
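
A minimal sketch of that pattern for inference (build_decoder and get_shard are placeholders for your model loading and data sharding):

import os
import torch
import torch.distributed as dist
import torch.multiprocessing as mp
from torch.nn.parallel import DistributedDataParallel as DDP

def run_inference(rank, world_size):
    os.environ.setdefault('MASTER_ADDR', 'localhost')
    os.environ.setdefault('MASTER_PORT', '29500')
    dist.init_process_group('nccl', rank=rank, world_size=world_size)
    torch.cuda.set_device(rank)

    decoder = build_decoder().to(rank).eval()  # placeholder: load net.decoder here
    decoder = DDP(decoder, device_ids=[rank])

    with torch.no_grad():
        for batch in get_shard(rank, world_size):  # placeholder: each rank reads its own data
            out = decoder(batch.to(rank))          # replace with the actual decoder call/arguments

    dist.destroy_process_group()

if __name__ == '__main__':
    world_size = torch.cuda.device_count()
    mp.spawn(run_inference, args=(world_size,), nprocs=world_size)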

Thanks, will look into that! Although I cannot understand why this error only happens when I call net.decoder() separately and not when I run net() as a whole. Would you happen to have some insight?
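
In case it helps narrow this down, per-GPU usage in the two cases can be compared with something like:

import torch

# print allocated and reserved memory for every visible GPU
for i in range(torch.cuda.device_count()):
    print(f'GPU {i}: '
          f'allocated={torch.cuda.memory_allocated(i) / 1024**3:.2f} GiB, '
          f'reserved={torch.cuda.memory_reserved(i) / 1024**3:.2f} GiB')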