Hi,
I am trying to use the pretrained StyleGAN generator for inference from the psp.py model here: https://github.com/eladrich/pixel2style2pixel (official implementation of "Encoding in Style: a StyleGAN Encoder for Image-to-Image Translation", CVPR 2021, presenting the pixel2style2pixel (pSp) framework). The setup is as follows:
- psp.py in the models folder has two modules, an encoder and a decoder. The decoder is a pretrained StyleGAN generator.
- I initialize the model in my code with ngpu = 2:
  opts = Namespace(**opts)
  net = pSp(opts, ngpu)
- I then move the decoder to the device and, since ngpu > 1, wrap it in DataParallel:
  decoder = net.decoder.to(device)
  if (device.type == 'cuda') and (ngpu > 1):
      decoder = nn.DataParallel(decoder, list(range(ngpu)))
- Finally, I put the decoder in eval mode. Putting these steps together, a simplified sketch of the setup is shown just below this list.
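For reference, the decoder-only path looks roughly like this. It is a simplified sketch, not my exact script: model_path is a placeholder, the latent shape is only an example, the checkpoint-loading lines follow the pattern in the repo's inference script from memory, and the decoder call mirrors the way psp.py itself invokes the decoder.

```python
import torch
import torch.nn as nn
from argparse import Namespace

from models.psp import pSp  # from the pixel2style2pixel repo

device = torch.device('cuda:0')
ngpu = 2
model_path = 'path/to/psp_checkpoint.pt'  # placeholder path

# Recover the training opts stored in the pretrained checkpoint
ckpt = torch.load(model_path, map_location='cpu')
opts = ckpt['opts']
opts['checkpoint_path'] = model_path
opts = Namespace(**opts)
net = pSp(opts, ngpu)  # ngpu argument as in my setup above

# Move only the decoder to the GPU and wrap it for multi-GPU inference
decoder = net.decoder.to(device)
if (device.type == 'cuda') and (ngpu > 1):
    decoder = nn.DataParallel(decoder, list(range(ngpu)))
decoder.eval()

with torch.no_grad():
    # Example W+ latents: (batch, 18, 512) for a 1024x1024 StyleGAN2 generator
    codes = torch.randn(4, 18, 512, device=device)
    images, _ = decoder([codes],
                        input_is_latent=True,
                        randomize_noise=False,
                        return_latents=False)
```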
While actually running inference, though, I keep getting CUDA out-of-memory errors that look like this:
RuntimeError: CUDA out of memory. Tried to allocate 2.25 GiB (GPU 0; 7.79 GiB total capacity;
3.57 GiB already allocated; 1.52 GiB free; 4.67 GiB reserved in total by PyTorch) If
reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation.
See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
What I understand from this error is that the decoder is being loaded onto GPU 0 and running out of memory during inference. Is there a workaround for this? I'm not sure why my code is not using GPU 1 as well.
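Also, the error message itself suggests setting max_split_size_mb. Would something like this (set before any CUDA allocation happens) be a reasonable mitigation, or does it just paper over the real problem?

```python
import os
# Must be set before the first CUDA allocation; 128 is just an example value
os.environ['PYTORCH_CUDA_ALLOC_CONF'] = 'max_split_size_mb:128'
```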
Note: When I run inference on the whole psp.py model, and not just the decoder, I don't get any CUDA memory errors. I'm not sure why the whole model works but the decoder on its own doesn't.
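For comparison, the full-model run that works is roughly the following (again simplified; the call matches how the repo's inference script drives the network, as far as I can tell):

```python
net = net.to(device)
net.eval()

with torch.no_grad():
    # x: a batch of input images, e.g. (batch, 3, 256, 256)
    x = torch.randn(4, 3, 256, 256, device=device)
    images = net(x, randomize_noise=False, resize=False)
```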