I’m not sure whether this is a bug or whether I’m misunderstanding how PyTorch works, hence why I’m posting here first. I’m using PyTorch 1.5.0.
My model is on the GPU; at the start of each iteration I move the batch to the device, then forward it through the model. When I then move the output to the CPU, however, the GPU memory doesn’t seem to be freed. On the next iteration the memory still hasn’t been released, and after a few loops I hit an out-of-memory error.
```python
mem_a = torch.cuda.memory_summary(device, True)    # before moving the batch
b = b.to(device)
mem_a2 = torch.cuda.memory_summary(device, True)   # after moving the batch to the GPU
forward = model.forward(b)
mem_a21 = torch.cuda.memory_summary(device, True)  # after the forward pass
forward = forward.to("cpu")
mem_a3 = torch.cuda.memory_summary(device, True)   # after moving the output to the CPU
```
I’ve checked in the debugger, and `forward` is reported to be on the CPU after the `.to()` call. However, if you look at the GPU memory at each of these points:
|        | mem_a   | mem_a2  | mem_a21       | mem_a3  |
|--------|---------|---------|---------------|---------|
| Loop 1 | 238 MB  | 778 MB  | 3471 MB       | 3471 MB |
| Loop 2 | 3471 MB | 3781 MB | 5382 MB       | 5382 MB |
| Loop 3 | 5382 MB | 5693 MB | out of memory |         |
To me, it very much looks like PyTorch isn’t releasing the GPU memory after the move to the CPU. I’ve tried this a number of ways: even assigning the CPU copy to a separate variable and deleting the GPU one doesn’t work, as if a reference is kept somewhere. If I delete the tensor while it’s still on the GPU, the memory usage does disappear, so it definitely seems to be something around this `.to()` call that is keeping a reference to the GPU.
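For reference, the separate-variable attempt looked roughly like this (`forward_cpu` is just an illustrative name):

```python
forward_cpu = forward.to("cpu")  # keep a separate CPU copy of the output
del forward                      # delete the GPU tensor
# the GPU memory reported by torch.cuda.memory_summary() is unchanged
```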
I need to store all of these outputs before I can do the next stage, and I have more than enough system RAM for them if I can properly get them off the GPU, but I can’t seem to do that.
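In case it helps, here is a minimal sketch of the overall loop (`loader` and `device` are placeholder names for whatever produces my batches and for the CUDA device):

```python
outputs = []
for b in loader:                       # loader: assumed batch iterator
    b = b.to(device)                   # move the batch to the GPU
    forward = model.forward(b)         # forward pass on the GPU
    outputs.append(forward.to("cpu"))  # keep a CPU copy for the next stage
    # GPU memory grows every iteration even though the stored output is on the CPU
```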
Am I doing something wrong here?