PYTORCH_CUDA_ALLOC_CONF

I understand the meaning of this setting (PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:516), but where do you actually write it?
In a Jupyter notebook? In the command prompt?

Export it as an environment variable in your terminal before launching Python, and it should work.
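For example (the variable only applies to processes started afterwards from that same terminal):

Linux/macOS (bash/zsh):
export PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:516

Windows Command Prompt:
set PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:516

Windows PowerShell:
$env:PYTORCH_CUDA_ALLOC_CONF="max_split_size_mb:516"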

Thank you for the answer, but now I am even more confused about where I should write what. :slight_smile:


You can set environment variables directly from Python:

import os

os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:516"

This must be executed at the very beginning of your script/notebook, before anything initializes CUDA, because the allocator reads the variable only once.
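For instance, in a fresh script or a freshly restarted notebook kernel, the ordering would look like this (a minimal sketch; the tensor at the end is just a placeholder allocation):

import os

# Must be set before PyTorch initializes CUDA; the caching allocator
# reads this variable once, at its first GPU allocation.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:516"

import torch

x = torch.randn(1024, 1024, device="cuda")  # allocated under the new config

In a notebook this means restarting the kernel first: if any CUDA code has already run in the process, the setting is ignored.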


Many thanks! :slight_smile:

It didn't work. I still get the error: CUDA out of memory. Tried to allocate 20.00 MiB (GPU 0; 6.00 GiB total capacity; 5.33 GiB already allocated; 0 bytes free; 5.34 GiB reserved in total by PyTorch)