I get an "Unrecognized CachingAllocator option: 1024" error

Every time I use an AI app such as Stable Diffusion, a language model, or a voice changer, the following error occurs: Unrecognized CachingAllocator option: 1024

I have tried reinstalling the apps and changing their configs, but it doesn't help at all. I also have the newest NVIDIA drivers.

I have an NVIDIA GeForce RTX 3060 with 12 GB of VRAM. My platform is Windows, and all the programs run on Python.
I also installed NVIDIA CUDA Toolkit 12.3, as some people recommended.

For example, here are the Fooocus logs:

C:\Users\Roman\Downloads\Fooocus_win64_2-1-791>.\python_embeded\python.exe -s Fooocus\entry_with_update.py
Already up-to-date
Update succeeded.
[System ARGV] ['Fooocus\\entry_with_update.py']
Python 3.10.9 (tags/v3.10.9:1dd9be6, Dec  6 2022, 20:01:21) [MSC v.1934 64 bit (AMD64)]
Fooocus version: 2.1.822
Running on local URL:  http://127.0.0.1:7865

To create a public link, set `share=True` in `launch()`.
Exception in thread Thread-2 (worker):
Traceback (most recent call last):
  File "threading.py", line 1016, in _bootstrap_inner
  File "threading.py", line 953, in run
  File "C:\Users\Roman\Downloads\Fooocus_win64_2-1-791\Fooocus\modules\async_worker.py", line 25, in worker
    import modules.default_pipeline as pipeline
  File "C:\Users\Roman\Downloads\Fooocus_win64_2-1-791\Fooocus\modules\default_pipeline.py", line 1, in <module>
    import modules.core as core
  File "C:\Users\Roman\Downloads\Fooocus_win64_2-1-791\Fooocus\modules\core.py", line 1, in <module>
    from modules.patch import patch_all
  File "C:\Users\Roman\Downloads\Fooocus_win64_2-1-791\Fooocus\modules\patch.py", line 6, in <module>
    import fcbh.model_base
  File "C:\Users\Roman\Downloads\Fooocus_win64_2-1-791\Fooocus\backend\headless\fcbh\model_base.py", line 2, in <module>
    from fcbh.ldm.modules.diffusionmodules.openaimodel import UNetModel
  File "C:\Users\Roman\Downloads\Fooocus_win64_2-1-791\Fooocus\backend\headless\fcbh\ldm\modules\diffusionmodules\openaimodel.py", line 16, in <module>
    from ..attention import SpatialTransformer
  File "C:\Users\Roman\Downloads\Fooocus_win64_2-1-791\Fooocus\backend\headless\fcbh\ldm\modules\attention.py", line 10, in <module>
    from .sub_quadratic_attention import efficient_dot_product_attention
  File "C:\Users\Roman\Downloads\Fooocus_win64_2-1-791\Fooocus\backend\headless\fcbh\ldm\modules\sub_quadratic_attention.py", line 27, in <module>
    from fcbh import model_management
  File "C:\Users\Roman\Downloads\Fooocus_win64_2-1-791\Fooocus\backend\headless\fcbh\model_management.py", line 114, in <module>
    total_vram = get_total_memory(get_torch_device()) / (1024 * 1024)
  File "C:\Users\Roman\Downloads\Fooocus_win64_2-1-791\Fooocus\backend\headless\fcbh\model_management.py", line 83, in get_torch_device
    return torch.device(torch.cuda.current_device())
  File "C:\Users\Roman\Downloads\Fooocus_win64_2-1-791\python_embeded\lib\site-packages\torch\cuda\__init__.py", line 769, in current_device
    _lazy_init()
  File "C:\Users\Roman\Downloads\Fooocus_win64_2-1-791\python_embeded\lib\site-packages\torch\cuda\__init__.py", line 298, in _lazy_init
    torch._C._cuda_init()
RuntimeError: Unrecognized CachingAllocator option: 1024
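
From what I can tell, the message comes from PyTorch parsing the PYTORCH_CUDA_ALLOC_CONF environment variable inside torch._C._cuda_init(), so a stray value such as a bare 1024 set somewhere system-wide would explain why every app fails the same way. A minimal check, run before importing torch (I'm assuming this is the variable the allocator reads):

import os

# Print the allocator config torch will try to parse. A valid value looks
# like "max_split_size_mb:1024"; a bare "1024" has no recognized option name.
print(os.environ.get("PYTORCH_CUDA_ALLOC_CONF"))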

I am also getting this error. It's weird, because the same environment I have right now (Ubuntu 20.04, CUDA 12.1, torch 2.1.1, and Python 3.10.13) works on another PC with an RTX 2080 Super, but somehow it doesn't work with my RTX 3060 Ti with 8 GB of VRAM. I uninstalled the previous CUDA version by purging it, and I installed the new one following the official NVIDIA docs (I followed the deb local installation): https://developer.nvidia.com/cuda-12-1-0-download-archive

print(torch._C._cuda_getDeviceCount())  # outputs 1
nvidia-smi reports CUDA version 12.1 with driver version 530.30.02.
Does anyone know what I can do?
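
If it helps anyone reproduce this outside a specific app, the failure can be forced through the same lazy init the traceback above goes through (a minimal sketch; torch.cuda.init() is the public wrapper for that init, as far as I know):

import torch

print(torch._C._cuda_getDeviceCount())  # prints 1, so the driver sees the GPU
torch.cuda.init()  # forces the lazy CUDA init; this is the step where
                   # "Unrecognized CachingAllocator option: 1024" is raised for me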

@Kureshik I don't know if CUDA 12.3 is compatible with any torch versions yet, since this CUDA release is new. I struggled to get it to work with torch and couldn't. Maybe try downgrading to CUDA 12.1 from the link I provided, reboot your PC, and see if that resolves the issue.

Your locally installed CUDA toolkit won’t be used unless you build PyTorch from source or custom CUDA extensions, since the PyTorch binaries ship with their own CUDA runtime dependencies.
You would only need a properly installed NVIDIA driver.
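
For example, torch.version.cuda reports the CUDA runtime the wheel was built with, and a small GPU op runs with nothing but the driver installed (a minimal sketch):

import torch

print(torch.version.cuda)         # CUDA runtime shipped inside the binary wheel
print(torch.cuda.is_available())  # True requires only a working NVIDIA driver

# Tiny GPU op to confirm; no locally installed CUDA toolkit is involved.
x = torch.ones(3, device="cuda")
print(x * 2)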