CUDA out of memory with enough GPU memory

When training FCN8s on ADE20K, OOM occurs on my RTX 2080 Ti GPU (11 GB of GPU memory) with a 512×512 image size and 1 image per batch. It seems that only about 3 GB is consumed. However, when the same code runs on a GTX 1080, OOM doesn't occur. So what's wrong with my GPU?
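For reference, this is roughly how the usage can be checked from inside the training loop (a minimal sketch using the torch.cuda memory stats from PyTorch 1.0-era releases; memory_cached was renamed memory_reserved in later versions):

import torch

# What the caching allocator has actually handed out to live tensors:
print(torch.cuda.memory_allocated() / 1024**2, "MB allocated")
# Peak usage since the start of the program:
print(torch.cuda.max_memory_allocated() / 1024**2, "MB peak")
# What the allocator holds from the driver, including free cached blocks:
print(torch.cuda.memory_cached() / 1024**2, "MB cached")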

When I insert IPython.embed() at the breaking point and then directly run the next computation step, it works without OOM. Why does this strange bug occur?

To avoid using IPython.embed, I have to add code like this:

try:
    score_fr = self.score_fr(x5)
except RuntimeError:
    # the first call raises a spurious CUDA OOM; running it again succeeds
    score_fr = self.score_fr(x5)
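If the workaround is needed in more than one place, it can be factored out (just a sketch; retry_on_oom is an illustrative helper name, not a PyTorch API, and the empty_cache call is an extra safeguard):

import torch

def retry_on_oom(fn, *args, **kwargs):
    """Run fn once; if it raises a CUDA out-of-memory RuntimeError, clear the cache and retry."""
    try:
        return fn(*args, **kwargs)
    except RuntimeError as e:
        if "out of memory" not in str(e):
            raise  # only retry genuine OOM errors
        torch.cuda.empty_cache()  # drop cached blocks before the second attempt
        return fn(*args, **kwargs)

# usage in forward():
# score_fr = retry_on_oom(self.score_fr, x5)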

Replying to your original question: there was an issue with convolutions using more memory than needed on RTX cards. It is fixed on master and in the nightlies now. The RTX cards and CUDA 10 were released after our latest release, so we couldn't have known at the time :stuck_out_tongue:
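To check whether you're on a build that includes the fix, you can inspect the installed build (a quick sketch using standard torch introspection; the exact nightly version string will differ on your machine):

import torch

print(torch.__version__)              # nightly builds look like "1.0.0.dev20190107"
print(torch.version.cuda)             # CUDA version PyTorch was compiled against
print(torch.cuda.get_device_name(0))  # should report the RTX 2080 Ti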