CUDA out of memory after error

Hi, I’ve been training a U-Net on a medical imaging dataset. I used one of two RTX 2080 Ti GPUs, allocating my model only on the first one. After training through all epochs, my code crashed because of an error in the Matplotlib plotting code. After fixing that error, I couldn’t run my code again due to this error:

RuntimeError: CUDA out of memory. Tried to allocate 98.00 MiB (GPU 0; 10.76 GiB total capacity; 9.58 GiB already allocated; 61.75 MiB free; 9.74 GiB reserved in total by PyTorch)

But when I run nvidia-smi, there is no process related to PyTorch.


I tried adding torch.cuda.empty_cache(), but it didn’t seem to help. Is there a way to solve this problem?

Thank you :slight_smile:

Can you post the code after fixing the Matplotlib error?
In general, using a larger batch size can cause an OOM. Also, the reason you can’t see any process related to PyTorch is that the process was actually killed after the OOM.

torch.cuda.empty_cache() can release memory that PyTorch has cached but is no longer using, but here it looks like your process never really started. It would be better if you posted your full code for more clarity :slight_smile:
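In case it helps, here is a minimal sketch (assuming PyTorch is installed) of how to inspect what PyTorch actually holds on a GPU. Note that empty_cache() only returns cached-but-unused blocks to the driver; memory held by live tensors (e.g. model parameters) is not freed by it:

```python
import torch

def report_cuda_memory(device: int = 0) -> None:
    """Print how much GPU memory PyTorch currently holds."""
    if not torch.cuda.is_available():
        print("No CUDA device visible")
        return
    # Memory occupied by live tensors.
    allocated = torch.cuda.memory_allocated(device) / 1024**2
    # Memory the caching allocator has reserved from the driver.
    reserved = torch.cuda.memory_reserved(device) / 1024**2
    print(f"allocated: {allocated:.1f} MiB, reserved: {reserved:.1f} MiB")

report_cuda_memory()

# Return unused cached blocks to the driver; tensors that are still
# referenced are NOT freed by this call.
if torch.cuda.is_available():
    torch.cuda.empty_cache()
```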

Sorry, I can’t show you the full code… but I can show you the Matplotlib part. It just saves plots, and I had to fix it because the output image size didn’t match.

    x_temp, y = ds_test.__getitem__(10)
    x = np.expand_dims(x_temp, axis=0)  # add a batch dimension
    x = torch.from_numpy(x)
    x = x.cuda()  # move the input to the GPU the model lives on
    y = cv2.cvtColor(y, cv2.COLOR_BGR2RGB)
    y_pred = model(x)
    print(f'x shape: {x.shape}, y_pred shape: {y_pred.shape}')
    fig = plt.figure()
    rows = 1
    cols = 2

    x = x.view(3, 400, 400)  # drop the batch dimension
    x = x.cpu().detach()
    x = transforms.ToPILImage()(x)
    x = np.array(x.convert('RGB'))

    y_pred = y_pred.view(400, 400)
    y_pred = y_pred.cpu().detach()
    y_pred = transforms.ToPILImage()(y_pred)
    y_pred = np.array(y_pred.convert('L'))

Can you try the following?

with torch.no_grad():
    y_pred = model(x)

By default, autograd builds a computation graph during the forward pass and keeps the intermediate activations around so gradients can be computed later. Wrapping inference in torch.no_grad() skips that bookkeeping, which can save a lot of memory.
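To see the difference, here’s a small CPU-only sketch: a forward pass under torch.no_grad() produces tensors that are detached from the autograd graph, so no graph (and none of its intermediate activations) is retained:

```python
import torch

x = torch.randn(1, 3, requires_grad=True)

# Normal forward: the result is attached to a graph so backward() can run.
y = (x * 2).sum()
print(y.requires_grad)  # True

# Inference-only forward: no graph is built, nothing extra is retained.
with torch.no_grad():
    z = (x * 2).sum()
print(z.requires_grad)  # False
```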

Yeah, I used it and it went very well!!! Thanks bro, helped a lot :sweat_smile: