OutOfMemoryError Help please

Hi guys,
I am getting this error when running Stable Diffusion. Any fixes would be great. I am not very tech savvy so please be gentle :wink:

OutOfMemoryError: CUDA out of memory. Tried to allocate 480.00 MiB (GPU 0; 6.00 GiB total capacity; 4.54 GiB already allocated; 0 bytes free; 4.60 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF

Running GeForce GTX 1660 SUPER on Windows 11

Try to reduce the batch size, which would also reduce the GPU memory required for the execution.
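As a minimal sketch of what that looks like (assuming you are feeding the model through a standard DataLoader; the dataset and sizes here are made up for illustration):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Dummy dataset standing in for your real data
dataset = TensorDataset(torch.randn(64, 3, 8, 8))

# A smaller batch_size directly reduces the peak activation memory per step
loader = DataLoader(dataset, batch_size=2)  # e.g. reduced from 16

batch, = next(iter(loader))  # each batch now holds only 2 samples
```

If you are using a Stable Diffusion UI instead of raw PyTorch, the equivalent knob is usually called something like "batch size" or "number of images" in its settings.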

How would I go about that, mate?

Batch size is only one.

If the batch size is already 1, you could check whether torch.utils.checkpoint can be applied to the model to trade compute for memory (assuming you want to train the model).
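A minimal sketch of activation checkpointing, using a made-up toy model (your real model and shapes will differ):

```python
import torch
from torch.utils.checkpoint import checkpoint

# Hypothetical small model for illustration
model = torch.nn.Sequential(
    torch.nn.Linear(16, 32),
    torch.nn.ReLU(),
    torch.nn.Linear(32, 16),
)

x = torch.randn(1, 16, requires_grad=True)

# checkpoint() discards the intermediate activations in the forward pass
# and recomputes them during backward, trading compute for memory
out = checkpoint(model, x)
out.sum().backward()
```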
If you are not interested in training, you should make sure the model runs in a with torch.inference_mode() context.
Also, you might want to check if running the model in a with torch.cuda.amp.autocast() context would help.
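These two contexts can be combined; here is a minimal sketch with a made-up model (on your 1660 SUPER the autocast part only takes effect for CUDA ops):

```python
import torch

# Hypothetical model standing in for your real one
model = torch.nn.Linear(16, 4)
x = torch.randn(2, 16)

# inference_mode() skips autograd bookkeeping entirely, saving memory;
# autocast() runs eligible CUDA ops in float16, roughly halving
# activation memory for those ops
with torch.inference_mode():
    with torch.cuda.amp.autocast():
        out = model(x)
```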
However, I’m not familiar with your use case, as you haven’t shared much information, so I don’t know if any of these utils are already applied.
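Also, since your error message mentions max_split_size_mb: you can try the allocator setting it suggests via the PYTORCH_CUDA_ALLOC_CONF environment variable. A minimal sketch (the value 128 is just an example to experiment with):

```python
import os

# Must be set before the first CUDA allocation happens
# (alternatively, export it in the shell before launching the script)
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"
```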