Memory usage for UNET


Before building my own model, I was trying to use mateuszbuda's brain-segmentation U-Net to see how it performs on my multiple sclerosis dataset. I have an RTX 3070 with 8 GB of RAM, and I am getting the following error: RuntimeError: CUDA out of memory. Tried to allocate 12.00 MiB (GPU 0; 7.77 GiB total capacity; 6.12 GiB already allocated; 35.75 MiB free; 6.22 GiB reserved in total by PyTorch)

Does anyone have any advice for me?

The link to the model is below:

You would have to reduce the memory usage of the script, e.g. by lowering the batch size, by using torch.utils.checkpoint to trade compute for memory, or by using a smaller model or smaller input data.
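As a rough sketch of the checkpointing option: wrap expensive stages of the network in torch.utils.checkpoint.checkpoint so their intermediate activations are recomputed during the backward pass instead of being stored. The two-stage model below is a made-up toy, not the U-Net from the link:

```python
import torch
import torch.nn as nn
from torch.utils.checkpoint import checkpoint

# Hypothetical two-stage network just to illustrate the API;
# activations inside each checkpointed stage are freed after forward
# and recomputed during backward, trading compute for memory.
stage1 = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU())
stage2 = nn.Sequential(nn.Conv2d(8, 1, 3, padding=1), nn.ReLU())

x = torch.randn(2, 1, 32, 32, requires_grad=True)

# use_reentrant=False is the recommended mode in recent PyTorch versions
h = checkpoint(stage1, x, use_reentrant=False)
out = checkpoint(stage2, h, use_reentrant=False)

out.sum().backward()  # gradients still flow through the recomputed stages
```

In a real U-Net you would typically checkpoint each encoder/decoder block rather than individual layers, so the recomputation overhead stays modest.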

Is there a way to see how much memory the model uses without any input data?

That’s not easily doable, since you would need to infer the output shapes of the intermediate activations for all operations. In case you are able to do so, you would also have to stick to all native kernels, since e.g. cudnn uses different workspace sizes for different algorithms, which might be selected if cudnn.benchmark is active.
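To make the distinction concrete: the parameter and buffer memory of a model *can* be computed without any input, since it does not depend on input shapes; it is the activation (and workspace) memory that cannot. A minimal sketch, with a toy model standing in for the U-Net:

```python
import torch
import torch.nn as nn

def param_bytes(model: nn.Module) -> int:
    # Static memory: parameters plus buffers (e.g. BatchNorm running stats).
    # This does NOT include activations, gradients, or cudnn workspaces,
    # which depend on the input shape and selected algorithms.
    total = sum(p.numel() * p.element_size() for p in model.parameters())
    total += sum(b.numel() * b.element_size() for b in model.buffers())
    return total

# Hypothetical small model for illustration only
model = nn.Sequential(nn.Conv2d(3, 64, 3), nn.BatchNorm2d(64))
print(f"static model memory: {param_bytes(model) / 1024**2:.2f} MiB")
```

Note that during training you also need roughly the same amount again for gradients, plus optimizer state (e.g. two extra copies per parameter for Adam), on top of the input-dependent activation memory.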

A friend of mine managed to fit the U-Net and train it on a GTX 1070, but I get the following error:

RuntimeError: CUDA out of memory. Tried to allocate 12.00 MiB (GPU 0; 7.77 GiB total capacity; 6.12 GiB already allocated; 35.75 MiB free; 6.22 GiB reserved in total by PyTorch)

So I'm a bit confused. This happens as soon as it enters the training phase.