RuntimeError: CUDA out of memory. Tried to allocate 15.94 GiB (GPU 0; 15.78 GiB total capacity; 641.30 MiB already allocated; 13.86 GiB free; 642.00 MiB reserved in total by PyTorch)
I got this error while training my model.
The error indicates that your GPU is running out of memory. You could reduce the batch size of the input, lower memory usage by using a smaller model, or use torch.utils.checkpoint
to trade compute for memory.
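A minimal sketch of the torch.utils.checkpoint approach, using a hypothetical toy model (the `Block`/`Model` names and sizes are illustrative, not from your code): each checkpointed block discards its intermediate activations in the forward pass and recomputes them during backward, so peak memory scales with one block instead of the whole model.

```python
import torch
from torch.utils.checkpoint import checkpoint

class Block(torch.nn.Module):
    """Hypothetical sub-module whose activations we avoid caching."""
    def __init__(self, dim):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(dim, dim),
            torch.nn.ReLU(),
            torch.nn.Linear(dim, dim),
        )

    def forward(self, x):
        return self.net(x)

class Model(torch.nn.Module):
    def __init__(self, dim=64, depth=4):
        super().__init__()
        self.blocks = torch.nn.ModuleList(Block(dim) for _ in range(depth))

    def forward(self, x):
        for block in self.blocks:
            # Trade compute for memory: activations inside each block are
            # recomputed in the backward pass instead of stored.
            x = checkpoint(block, x, use_reentrant=False)
        return x

model = Model()
x = torch.randn(8, 64, requires_grad=True)
loss = model(x).sum()
loss.backward()  # gradients flow through the checkpointed blocks
```

Note that checkpointing slows training down (roughly one extra forward pass per checkpointed segment), so it is usually combined with a smaller batch size rather than used alone.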
The problem is solved when I reduce the size of the data (images), but information in the images is lost.
The actual image size is 1024,
and I resized them to 224.