PYTORCH_CUDA_ALLOC_CONF

I understand the meaning of this setting (PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:516), but where do you actually write it?
In jupyter notebook? In command prompt?

Export it as an environment variable in your terminal and it should work.
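For example, in a bash-like shell (a minimal sketch; your_script.py is just a placeholder for whatever you run):

export PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:516
python your_script.py

Note that the variable only affects processes launched from that same shell session.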

Thank you for the answer, but now I am even more confused about where I should write what. :)

You can set environment variables directly from Python:

import os

os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:516"

This must be executed at the beginning of your script/notebook, before PyTorch initializes CUDA (i.e., before the first CUDA tensor or model is created), because the allocator reads the variable when CUDA is first initialized.
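Putting it together, a minimal sketch of the ordering (the tensor line is only there to illustrate that CUDA gets initialized after the variable is set):

import os

# Configure the allocator before any CUDA work; PyTorch reads this
# variable when CUDA is first initialized.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:516"

import torch

x = torch.zeros(1, device="cuda")  # the setting is already in effect here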

Many thanks! :slight_smile:

Didn’t work. I still get:

CUDA out of memory. Tried to allocate 20.00 MiB (GPU 0; 6.00 GiB total capacity; 5.33 GiB already allocated; 0 bytes free; 5.34 GiB reserved in total by PyTorch)

First, use the method mentioned above.
In a Linux terminal, you can run:

export PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:512

Second, you can add the --tile option to your command and decrease the tile size, such as --tile 800 or smaller than 800. Tiling processes the image in smaller pieces, so each piece needs less GPU memory.

For example:

python inference_realesrgan.py --tile 120
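For reference, a fuller invocation might look like the following (the -n/-i/-o flags are taken from the Real-ESRGAN README; the model name and paths here are just examples to adapt):

python inference_realesrgan.py -n RealESRGAN_x4plus -i inputs -o results --tile 120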

I’ve tried these two steps, and they worked for me.
My Chinese blog: