Hi,
I have a RAM issue when training many models on the same machine. The models themselves are really small, but I see a big increase in RAM usage when I call model.to(device), where device is a cuda:x device. I managed to measure this with memory_profiler. Here is the trace:
  Mem usage      Increment    Line Contents
=============================================
  ...
  233.062 MiB     1.496 MiB   model = Network()
 2260.094 MiB  1947.422 MiB   model = model.to(device)
  ...
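
For reference, this is roughly the script I profiled, run with python -m memory_profiler script.py. Network here is just a tiny placeholder module standing in for my real model, which is similarly small:

import torch
import torch.nn as nn
from memory_profiler import profile

class Network(nn.Module):
    # Placeholder for my actual model, which is similarly tiny
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(10, 2)

    def forward(self, x):
        return self.fc(x)

@profile
def main():
    device = torch.device("cuda:0")
    model = Network()           # ~1.5 MiB increment
    model = model.to(device)    # ~1.9 GiB increment in host RAM
    return model

if __name__ == "__main__":
    main()
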
Does anyone have a suggestion?