Why does the GPU run out of memory at the second iteration?

I used a UNet model for an image segmentation task. After building the model, I tried to pass a dummy tensor through it to test the forward() process. The code script is shown below:

However, after the first iteration, right after the console printed the size of the output tensor, I hit a GPU OOM error.

Is this caused by creating too many tensor variables inside the forward() method of UNet?

Have a look at this post, which explains how this issue is caused by Python’s function scoping.
You might want to wrap your training and evaluation in separate methods so that the tensors can be freed when you return from these methods.
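The scoping behavior can be illustrated with plain Python, independent of PyTorch. This is a minimal sketch using a dummy `BigBuffer` class (a hypothetical stand-in for a large GPU tensor): a local variable is released as soon as its enclosing function returns, while a variable assigned at script or loop scope keeps its object (and, for a tensor, its GPU memory) alive until it is reassigned or the scope ends.

```python
import gc
import weakref

class BigBuffer:
    """Hypothetical stand-in for a large GPU tensor."""
    pass

def run_step():
    # 'out' is a local; CPython drops it when the function returns,
    # so the underlying buffer can be freed immediately.
    out = BigBuffer()
    return weakref.ref(out)

ref = run_step()
gc.collect()
print(ref() is None)   # True: freed once the function returned

# By contrast, a variable at script/loop scope holds its object
# until it is reassigned or the scope itself ends:
out = BigBuffer()
ref2 = weakref.ref(out)
gc.collect()
print(ref2() is None)  # False: still alive, still holding memory
```

This is why moving the train and test loops into their own functions helps: intermediate tensors become unreachable at each `return`, instead of lingering in the top-level scope across iterations.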


Thank you so much! Wrapping the train and test steps in separate methods does help!