I get an "Unrecognized CachingAllocator option: 1024" error

Every time I run an AI app such as Stable Diffusion, a language model, or a voice changer, the following error occurs: Unrecognized CachingAllocator option: 1024

I have tried reinstalling the apps and changing their configs, but it doesn't help at all. I also have the latest NVIDIA drivers.

I have an NVIDIA GeForce RTX 3060 with 12 GB of VRAM. My platform is Windows, and all of the programs run on Python.
I also installed the NVIDIA CUDA Toolkit 12.3, as some people recommended to me.
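
From what I can tell, the message comes from PyTorch itself: its CUDA caching allocator parses the PYTORCH_CUDA_ALLOC_CONF environment variable as comma-separated key:value pairs (for example max_split_size_mb:1024), so a bare value like 1024 would be an unrecognized option. As a sketch of what I checked, this just prints whatever is set on my machine (plain Python, not tied to any app):

    import os

    # PyTorch reads allocator settings from this environment variable;
    # a valid entry looks like "max_split_size_mb:1024", not a bare "1024".
    print(os.environ.get("PYTORCH_CUDA_ALLOC_CONF"))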

For example, here is the Fooocus log:

C:\Users\Roman\Downloads\Fooocus_win64_2-1-791>.\python_embeded\python.exe -s Fooocus\entry_with_update.py
Already up-to-date
Update succeeded.
[System ARGV] ['Fooocus\\entry_with_update.py']
Python 3.10.9 (tags/v3.10.9:1dd9be6, Dec  6 2022, 20:01:21) [MSC v.1934 64 bit (AMD64)]
Fooocus version: 2.1.822
Running on local URL:  http://127.0.0.1:7865

To create a public link, set `share=True` in `launch()`.
Exception in thread Thread-2 (worker):
Traceback (most recent call last):
  File "threading.py", line 1016, in _bootstrap_inner
  File "threading.py", line 953, in run
  File "C:\Users\Roman\Downloads\Fooocus_win64_2-1-791\Fooocus\modules\async_worker.py", line 25, in worker
    import modules.default_pipeline as pipeline
  File "C:\Users\Roman\Downloads\Fooocus_win64_2-1-791\Fooocus\modules\default_pipeline.py", line 1, in <module>
    import modules.core as core
  File "C:\Users\Roman\Downloads\Fooocus_win64_2-1-791\Fooocus\modules\core.py", line 1, in <module>
    from modules.patch import patch_all
  File "C:\Users\Roman\Downloads\Fooocus_win64_2-1-791\Fooocus\modules\patch.py", line 6, in <module>
    import fcbh.model_base
  File "C:\Users\Roman\Downloads\Fooocus_win64_2-1-791\Fooocus\backend\headless\fcbh\model_base.py", line 2, in <module>
    from fcbh.ldm.modules.diffusionmodules.openaimodel import UNetModel
  File "C:\Users\Roman\Downloads\Fooocus_win64_2-1-791\Fooocus\backend\headless\fcbh\ldm\modules\diffusionmodules\openaimodel.py", line 16, in <module>
    from ..attention import SpatialTransformer
  File "C:\Users\Roman\Downloads\Fooocus_win64_2-1-791\Fooocus\backend\headless\fcbh\ldm\modules\attention.py", line 10, in <module>
    from .sub_quadratic_attention import efficient_dot_product_attention
  File "C:\Users\Roman\Downloads\Fooocus_win64_2-1-791\Fooocus\backend\headless\fcbh\ldm\modules\sub_quadratic_attention.py", line 27, in <module>
    from fcbh import model_management
  File "C:\Users\Roman\Downloads\Fooocus_win64_2-1-791\Fooocus\backend\headless\fcbh\model_management.py", line 114, in <module>
    total_vram = get_total_memory(get_torch_device()) / (1024 * 1024)
  File "C:\Users\Roman\Downloads\Fooocus_win64_2-1-791\Fooocus\backend\headless\fcbh\model_management.py", line 83, in get_torch_device
    return torch.device(torch.cuda.current_device())
  File "C:\Users\Roman\Downloads\Fooocus_win64_2-1-791\python_embeded\lib\site-packages\torch\cuda\__init__.py", line 769, in current_device
    _lazy_init()
  File "C:\Users\Roman\Downloads\Fooocus_win64_2-1-791\python_embeded\lib\site-packages\torch\cuda\__init__.py", line 298, in _lazy_init
    torch._C._cuda_init()
RuntimeError: Unrecognized CachingAllocator option: 1024
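
Since the traceback ends in torch._C._cuda_init(), I would expect the same error from a bare CUDA init, outside of Fooocus entirely. A minimal check I put together (assuming the python_embeded layout from the log; save it as, say, check_cuda.py and run .\python_embeded\python.exe check_cuda.py):

    import torch

    # Forcing CUDA initialization directly should raise the same
    # RuntimeError: Unrecognized CachingAllocator option: 1024
    torch.cuda.init()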