CUDA out of memory, but the error message shows enough memory available

Dear all,
I am training a CNN on the GPU and get the error shown in the figure below. Only 30.00 MiB is to be allocated and 967.43 MiB is reported as free, but a CUDA out of memory error is raised:


I have no idea what is causing it.

Could you check the memory usage via nvidia-smi, please?
Are other processes using memory on your device?
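If it helps, the same numbers can also be printed from inside the script with PyTorch's memory utilities. A minimal sketch, assuming a single GPU at cuda:0 (on older PyTorch versions `memory_reserved` may be named `memory_cached`):

```python
import torch

device = torch.device("cuda:0")

# Total device memory vs. what this process currently holds.
total = torch.cuda.get_device_properties(device).total_memory
allocated = torch.cuda.memory_allocated(device)  # memory occupied by live tensors
reserved = torch.cuda.memory_reserved(device)    # memory held by the caching allocator

print(f"total:     {total / 1024**2:.1f} MiB")
print(f"allocated: {allocated / 1024**2:.1f} MiB")
print(f"reserved:  {reserved / 1024**2:.1f} MiB")
```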

I set batch_size to 1 and it works.
When I set batch_size to 2, the same error is reported again:
“RuntimeError: CUDA out of memory. Tried to allocate 30.00 MiB (GPU 0; 2.00 GiB total capacity; 784.67 MiB already allocated; 551.43 MiB free; 31.33 MiB cached)”.
I checked with nvidia-smi and only this process is using the GPU.
Do you have other suggestions?
Thanks!

In that case the error message might be misleading.
How much memory is your training code using with batch_size=1?
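One way to answer that is to record the peak allocation of a single forward/backward pass with `torch.cuda.max_memory_allocated`. A rough sketch below; the tiny model and input shapes are placeholders, not your network:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Placeholder model and input, only to illustrate the measurement.
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(16, 10),
).cuda()

x = torch.randn(1, 3, 224, 224, device="cuda")   # batch_size = 1
target = torch.randint(0, 10, (1,), device="cuda")

torch.cuda.reset_max_memory_allocated()           # reset the peak counter
loss = F.cross_entropy(model(x), target)
loss.backward()

peak = torch.cuda.max_memory_allocated()
print(f"peak allocated: {peak / 1024**2:.1f} MiB")
```

Running the same measurement with batch_size = 2 shows how much extra headroom the larger batch actually needs.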

It shows around 881 MiB of 2048 MiB with batch_size = 1.

It occupies more than 1500 MiB with batch_size = 1, so I will modify my net and data.
Thank you!
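For anyone hitting the same limit on a small card, two generic patterns often cut memory besides shrinking the net: accumulate the loss with `.item()` so the graph is not kept alive, and run evaluation under `torch.no_grad()`. A sketch with a placeholder model and random data, not the original code:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Placeholder model and random data, only to illustrate the patterns.
model = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                      nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 10)).cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

running_loss = 0.0
for _ in range(3):
    x = torch.randn(2, 3, 224, 224, device="cuda")
    y = torch.randint(0, 10, (2,), device="cuda")

    optimizer.zero_grad()
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    optimizer.step()

    # .item() detaches the value; accumulating the tensor itself would keep
    # every iteration's graph (and its activations) alive.
    running_loss += loss.item()

# Evaluation without autograd: activations needed for backward are not stored.
with torch.no_grad():
    preds = model(torch.randn(2, 3, 224, 224, device="cuda")).argmax(dim=1)
```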