Memory estimates for GPU computation?

Hi all,

My model works, though it runs slowly, on my AWS instance’s CPU. When I move it onto the GPU, however, I run out of memory. I’d like to switch to an instance with more GPU memory, but I’m not sure how much I’ll need.

Any way to estimate this?

It’s difficult to estimate. But did you install cuDNN? It saves a lot of memory, especially for convolutions.
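If you can get even a small batch through on the GPU, you can also measure peak usage directly and extrapolate from there, since activation memory scales roughly with batch size. A rough sketch of that idea (here `model`, `criterion`, `x`, and `y` are placeholders for your own objects):

import torch

# Run one forward/backward pass with a reduced batch, then read the peak
# memory PyTorch actually allocated on the device.
device = torch.device("cuda")
model = model.to(device)
x, y = x.to(device), y.to(device)

torch.cuda.reset_peak_memory_stats(device)
out = model(x)
loss = criterion(out, y)
loss.backward()

peak_bytes = torch.cuda.max_memory_allocated(device)
print(f"Peak allocated: {peak_bytes / 1024**2:.1f} MiB")

This only gives a lower bound (the allocator caches blocks, and other processes may share the GPU), but it’s usually a decent starting point for sizing an instance.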

I’m having trouble figuring out whether cuDNN is on my machine. I do have torch, and I thought it was included in that installation?

You could try reading the version of your cuDNN installation with:

import torch
print(torch.backends.cudnn.version())

I suppose it should return None, or throw an error, if no cuDNN is detected.
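For what it’s worth, there are also explicit flags you can check (a small sketch, assuming a reasonably recent PyTorch):

import torch

# Quick checks for whether PyTorch was built with cuDNN and will use it
print(torch.backends.cudnn.is_available())  # True if a usable cuDNN was found
print(torch.backends.cudnn.enabled)         # whether PyTorch will actually use it
print(torch.backends.cudnn.version())       # an integer version, or None if missing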