Unable to overfit and converge when using maskrcnn_resnet50_fpn with one image for training

Hello,

I am a beginner in PyTorch and deep learning in general. I am trying to train a maskrcnn_resnet50_fpn (https://pytorch.org/docs/stable/torchvision/models.html#torchvision.models.detection.maskrcnn_resnet50_fpn) but I cannot make the model converge even when training on a single image for 10 epochs. I am basically trying to overfit the model on one training example as a sanity check, since there is no point in training on gigabytes of data with a GPU when I can't even overfit a single image.

I created a toy example which can be found at Capstone/toyexample/toylearner.py at master · fsafe/Capstone · GitHub

The above script uses only one image, which I uploaded to this post ( Capstone/toyexample/004408_01_02_088.png at master · fsafe/Capstone · GitHub )

In summary, I pass the model an image of a CT scan, labels, a bounding box and a segmentation mask. In the example in the above GitHub repo you can use either the original image or the cropped version; all you need to do is change the inputs to the model from (inputs, targets) to (inputs_crop, targets_crop). The model returns 5 losses, which I sum up in each epoch to obtain the total training loss. When I train using the uncropped version I get the following:

Epoch 0/9

loss_classifier:tensor(0.6876, grad_fn=)
loss_box_reg:tensor(1.3961e-06, grad_fn=)
loss_mask:tensor(1.2238, grad_fn=)
loss_objectness:tensor(0.6945, grad_fn=)
loss_rpn_box_reg:tensor(0.0133, grad_fn=)
Total Train Loss: 2.6192

Epoch 1/9

loss_classifier:tensor(0.6812, grad_fn=)
loss_box_reg:tensor(0.0225, grad_fn=)
loss_mask:tensor(0.7030, grad_fn=)
loss_objectness:tensor(0.6937, grad_fn=)
loss_rpn_box_reg:tensor(0.0104, grad_fn=)
Total Train Loss: 2.1108

Epoch 2/9

loss_classifier:tensor(0.6727, grad_fn=)
loss_box_reg:tensor(5.6627e-07, grad_fn=)
loss_mask:tensor(0.6804, grad_fn=)
loss_objectness:tensor(0.6932, grad_fn=)
loss_rpn_box_reg:tensor(0.0035, grad_fn=)
Total Train Loss: 2.0498

Epoch 3/9

loss_classifier:tensor(0.6730, grad_fn=)
loss_box_reg:tensor(1.4735e-06, grad_fn=)
loss_mask:tensor(0.6938, grad_fn=)
loss_objectness:tensor(0.6935, grad_fn=)
loss_rpn_box_reg:tensor(41.7319, grad_fn=)
Total Train Loss: 43.7921

Epoch 4/9

loss_classifier:tensor(0.6734, grad_fn=)
loss_box_reg:tensor(1.4735e-06, grad_fn=)
loss_mask:tensor(0.6936, grad_fn=)
loss_objectness:tensor(0.6932, grad_fn=)
loss_rpn_box_reg:tensor(79.3949, grad_fn=)
Total Train Loss: 81.4552

Epoch 5/9

loss_classifier:tensor(0.6725, grad_fn=)
loss_box_reg:tensor(1.4735e-06, grad_fn=)
loss_mask:tensor(0.6934, grad_fn=)
loss_objectness:tensor(0.6927, grad_fn=)
loss_rpn_box_reg:tensor(140.8336, grad_fn=)
Total Train Loss: 142.8922

Epoch 6/9

loss_classifier:tensor(0.6727, grad_fn=)
loss_box_reg:tensor(1.4735e-06, grad_fn=)
loss_mask:tensor(0.6933, grad_fn=)
loss_objectness:tensor(0.6936, grad_fn=)
loss_rpn_box_reg:tensor(232.2627, grad_fn=)
Total Train Loss: 234.3224

Epoch 7/9

loss_classifier:tensor(0.6723, grad_fn=)
loss_box_reg:tensor(1.4735e-06, grad_fn=)
loss_mask:tensor(0.6932, grad_fn=)
loss_objectness:tensor(0.6929, grad_fn=)
loss_rpn_box_reg:tensor(362.7281, grad_fn=)
Total Train Loss: 364.7866

Epoch 8/9

loss_classifier:tensor(0.6729, grad_fn=)
loss_box_reg:tensor(1.4735e-06, grad_fn=)
loss_mask:tensor(0.6932, grad_fn=)
loss_objectness:tensor(0.6926, grad_fn=)
loss_rpn_box_reg:tensor(577.8825, grad_fn=)
Total Train Loss: 579.9412

Epoch 9/9

loss_classifier:tensor(0.6734, grad_fn=)
loss_box_reg:tensor(1.4735e-06, grad_fn=)
loss_mask:tensor(0.6932, grad_fn=)
loss_objectness:tensor(0.6937, grad_fn=)
loss_rpn_box_reg:tensor(932.3105, grad_fn=)
Total Train Loss: 934.3708
Training complete in 4m 50s

Process finished with exit code 0

As you can see, the RPN loss increases significantly after the third epoch.

The above run used a model that was NOT pretrained, but in the code you can simply set pretrained=True; that does not improve matters. Keep in mind that some model surgery needs to be done when using a pretrained model (pretrained on the COCO dataset), and I'm not sure if I am doing that part right, but it appears to be.

The above training (for a single image) was done on a CPU.
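To summarize the data format, here is a minimal sketch of what I end up feeding the model (shapes and values below are made up for illustration, they just mirror my real inputs):

import torch
from torchvision.models.detection import maskrcnn_resnet50_fpn

model = maskrcnn_resnet50_fpn(num_classes=2)
model.train()

image = torch.rand(3, 512, 512)  # float image tensor in [0, 1], shape (C, H, W)
target = {
    'boxes': torch.tensor([[188.0, 159.0, 223.0, 183.0]]),  # (N, 4) boxes in (x1, y1, x2, y2)
    'labels': torch.ones(1, dtype=torch.int64),              # (N,) class indices, 0 is background
    'masks': torch.zeros(1, 512, 512, dtype=torch.uint8),    # (N, H, W) binary masks
}

loss_dict = model([image], [target])                    # dict with the 5 training losses
total_loss = sum(loss for loss in loss_dict.values())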

Here is the code:

import cv2
import time
import numpy as np
import torch
from torchvision.transforms import functional as TF
from torch.optim import lr_scheduler, SGD
from torchvision.models.detection import maskrcnn_resnet50_fpn
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor

# load image from disk
img = cv2.imread('004408_01_02_088.png', -1)

# subtract 32768 from the pixel intensity to obtain the original Hounsfield unit (HU) values
# https://nihcc.app.box.com/v/DeepLesion/file/306056134060
img = img.astype(np.float32, copy=False) - 32768

# intensity windowing with window [-1024, 3071] HU covers the intensity ranges of the lung, soft tissue, and bone.
# (https://arxiv.org/pdf/1806.09648.pdf)
# convert the intensities in a certain range (“window”) to 0-255 for viewing.
img -= -1024
img /= 3071 + 1024
img[img > 1] = 1
img[img < 0] = 0
img *= 255
img = img.astype('uint8')

# convert image to tensor. The output tensor will have range [0,1]
img_T = TF.to_tensor(img)

# create a numpy array version of the img_T tensor and draw a blue pseudo_mask on it
# by combining 4 quarter-sized ellipses. Also add a green bounding box.
# the mask and bounding box drawn here do not affect the original image tensor (img_T)
# https://arxiv.org/pdf/1901.06359.pdf
img_copy = [img_T.squeeze().numpy()] * 3
img_copy = cv2.merge(img_copy)
bbox = np.array([188.354, 159.003, 223.22, 183.271])
bbox = np.int16(bbox)
cen = np.array([212.17824058, 171.81745919])
semi_axes = np.array([7, 7, 17, 6])
angles = np.array([0.94002174, -179.05997826])
cv2.ellipse(img_copy, tuple(cen.astype(int)), tuple(semi_axes[0:2]), angles[0], 0, 90, 255, -1)
cv2.ellipse(img_copy, tuple(cen.astype(int)), tuple(semi_axes[2:0:-1]), angles[1], -90, 0, 255, -1)
cv2.ellipse(img_copy, tuple(cen.astype(int)), tuple(semi_axes[2:4]), angles[1], 0, 90, 255, -1)
cv2.ellipse(img_copy, tuple(cen.astype(int)), tuple([semi_axes[0], semi_axes[3]]), angles[0], -90, 0, 255, -1)
cv2.rectangle(img_copy, (bbox[0], bbox[1]), (bbox[2], bbox[3]), (0, 255, 0), 1)

# extract pseudo_mask by identifying pixels which are colored blue
# (chain the comparisons with &; np.logical_and takes only two arrays and treats a third as `out`)
pseudo_mask = ((img_copy[:, :, 0] == 255) & (img_copy[:, :, 1] == 0) & (img_copy[:, :, 2] == 0)).astype('uint8')
pseudo_mask_T = torch.from_numpy(pseudo_mask)

# construct inputs to model
inputs = [img_T]
bbox_T = torch.from_numpy(bbox).float()
bboxes = [bbox_T]
bboxes = torch.stack(bboxes)
masks = [pseudo_mask_T]
masks = torch.stack(masks)
label = torch.ones(len(bboxes), dtype=torch.int64)
elem = {'boxes': bboxes, 'masks': masks, 'labels': label}
targets = [elem]

# # uncomment this block to check if inputs to model can be displayed correctly
# for (image, target) in zip(inputs, targets):
#     img_display = image.squeeze().numpy()
#     images_disp = [img_display] * 3
#     images_disp = [im.astype(float) for im in images_disp]
#     img_display = cv2.merge(images_disp)
#     for (bbox_disp, pseudo_mask_disp) in zip(target["boxes"], target["masks"]):
#         bbox_disp = bbox_disp.squeeze().numpy()
#         bbox_disp = np.int16(bbox_disp)
#         mask_disp = pseudo_mask_disp.squeeze().numpy()
#         cv2.rectangle(img_display, (bbox_disp[0], bbox_disp[1]), (bbox_disp[2], bbox_disp[3]), (0, 255, 0), 1)
#         msk_idx = np.where(mask_disp == 1)
#         img_display[msk_idx[0], msk_idx[1], 0] = 255
#     cv2.imshow('original', img_display)
# cv2.waitKey(0)
# cv2.destroyAllWindows()

# crop image by clipping black borders
img_crop = img_T.squeeze().numpy()
u, d, l, r = (115, 430, 0, 511)
img_crop = img_crop[u:d + 1, l:r + 1]
bbox_crop = np.array([0, 0, 0, 0])
bbox_crop[0] = bbox[0] - l
bbox_crop[1] = bbox[1] - u
bbox_crop[2] = bbox[2] - l
bbox_crop[3] = bbox[3] - u
bbox_crop = np.int16(bbox_crop)
pseudo_mask_crop = pseudo_mask[u:d + 1, l:r + 1]

# construct inputs to model
img_crop_T = TF.to_tensor(img_crop)
inputs_crop = [img_crop_T]
bbox_crop_T = torch.from_numpy(bbox_crop).float()
bboxes_crop = [bbox_crop_T]
bboxes_crop = torch.stack(bboxes_crop)
pseudo_mask_crop_T = torch.from_numpy(pseudo_mask_crop)
masks_crop = [pseudo_mask_crop_T]
masks_crop = torch.stack(masks_crop)
label_crop = torch.ones(len(bboxes_crop), dtype=torch.int64)
elem_crop = {'boxes': bboxes_crop, 'masks': masks_crop, 'labels': label_crop}
targets_crop = [elem_crop]

# # uncomment this block to check if inputs to model can be displayed correctly
# for (image, target) in zip(inputs_crop, targets_crop):
#     img_display = image.squeeze().numpy()
#     images_disp = [img_display] * 3
#     images_disp = [im.astype(float) for im in images_disp]
#     img_display = cv2.merge(images_disp)
#     for (bbox_disp, pseudo_mask_disp) in zip(target["boxes"], target["masks"]):
#         bbox_disp = bbox_disp.squeeze().numpy()
#         bbox_disp = np.int16(bbox_disp)
#         mask_disp = pseudo_mask_disp.squeeze().numpy()
#         cv2.rectangle(img_display, (bbox_disp[0], bbox_disp[1]), (bbox_disp[2], bbox_disp[3]), (0, 255, 0), 1)
#         msk_idx = np.where(mask_disp == 1)
#         img_display[msk_idx[0], msk_idx[1], 0] = 255
#     cv2.imshow('cropped', img_display)
# cv2.waitKey(0)
# cv2.destroyAllWindows()

pretrained = False

if pretrained:
    model = maskrcnn_resnet50_fpn(pretrained=True)
    for param in model.parameters():
        param.requires_grad = False

    num_classes = 2  # 1 class (lesion) + 0 (background)

    # get number of input features for the classifier
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    # replace the pre-trained head with a new one
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)

    # now get the number of input features for the mask classifier
    in_features_mask = model.roi_heads.mask_predictor.conv5_mask.in_channels
    hidden_layer = 64
    # and replace the mask predictor with a new one
    model.roi_heads.mask_predictor = MaskRCNNPredictor(in_features_mask,
                                                       hidden_layer,
                                                       num_classes)

    params = [p for p in model.parameters() if p.requires_grad]

    # Observe that not all parameters are being optimized
    optimizer_ft = SGD(params, lr=0.001, momentum=0.9, weight_decay=0.0001)

else:
    # Observe that all parameters are being optimized
    model = maskrcnn_resnet50_fpn(num_classes=2)
    optimizer_ft = SGD(model.parameters(), lr=0.001, momentum=0.9, weight_decay=0.0001)

# don't know how to initialize the weights of the model
# torch.nn.init.kaiming_normal_(model.parameters(), mode='fan_out')
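# a possible (untested) sketch: kaiming_normal_ expects a single weight tensor rather than a
# parameter generator, so it would have to be applied per module, e.g.:
# for m in model.modules():
#     if isinstance(m, torch.nn.Conv2d):
#         torch.nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu')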

# Decay LR by a factor of 0.1 every 2 epochs
exp_lr_scheduler = lr_scheduler.StepLR(optimizer_ft, step_size=2, gamma=0.1)

num_epochs = 100
since = time.time()
model.train()

print('Pretrained:' + str(pretrained))
print('momentum:' + str(optimizer_ft.state_dict()['param_groups'][0]['momentum']))
print('weight_decay:' + str(optimizer_ft.state_dict()['param_groups'][0]['weight_decay']))
print('LR decay gamma:' + str(exp_lr_scheduler.state_dict()['gamma']))
print('LR decay step size:' + str(exp_lr_scheduler.state_dict()['step_size']))

for epoch in range(num_epochs):
    print('\nEpoch {}/{}'.format(epoch, num_epochs - 1))
    print('-' * 10)
    print('lr:' + str(optimizer_ft.state_dict()['param_groups'][0]['lr']))
    # valid inputs to model are (inputs, targets) or (inputs_crop, targets_crop)
    loss_dict = model(inputs, targets)
    for (k, i) in loss_dict.items():
        print(str(k) + ':' + str(i))
    losses = sum(loss for loss in loss_dict.values())
    del loss_dict
    print('Total Train Loss: {:.4f}'.format(losses.item()))

    # zero the parameter gradients
    optimizer_ft.zero_grad()
    # perform backward propagation, optimization and update model parameters
    losses.backward()
    optimizer_ft.step()
    exp_lr_scheduler.step()
    del losses

time_elapsed = time.time() - since
print('Training complete in {:.0f}m {:.0f}s'.format(time_elapsed // 60, time_elapsed % 60))


Which experiments did you already try? Your idea of a sanity check is reasonable and I cannot find an obvious bug by skimming through your code.
Did you already change the learning rate and check whether the loss behavior changes?
Also, are the gradients blowing up?
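For instance, one quick way to check (just a sketch to drop into the training loop right after losses.backward(); the max_norm value is only a placeholder) would be:

total_norm = 0.0
for p in model.parameters():
    if p.grad is not None:
        total_norm += p.grad.detach().norm(2).item() ** 2
total_norm = total_norm ** 0.5
print('grad norm:', total_norm)

# if the norm keeps growing, clipping before optimizer_ft.step() is a common remedy
torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=10.0)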

I experimented with a few learning rates, but starting from the fourth epoch (3/9, where 0/9 is the first epoch) the loss diverges and gets bigger after each iteration. When I exceed 10 epochs the program terminates because it runs out of memory (my laptop runs on a CPU and has 8 GB of RAM). I thought that deleting the model output "loss_dict" (5 tensors, each presumably with its own computational graph) after assigning the sum of its losses to "losses" would solve the issue, but it did not. I also delete "losses" after the optimization and parameter updates are done, but I'm still running out of memory. I modified the code above to show where I am inserting the del statements.

Here are a few sample experiments I did:

EXP1:

Pretrained:True
momentum:0.9
weight_decay:0.0001
LR decay gamma:0.1
LR decay step size:2

Epoch 0/99

lr:0.001
loss_classifier:tensor(0.9254, grad_fn=)
loss_box_reg:tensor(0.0304, grad_fn=)
loss_mask:tensor(2.1443, grad_fn=)
loss_objectness:tensor(0.1386)
loss_rpn_box_reg:tensor(0.0056)
Total Train Loss: 3.2443

Epoch 1/99

lr:0.001
loss_classifier:tensor(0.7347, grad_fn=)
loss_box_reg:tensor(0.0005, grad_fn=)
loss_mask:tensor(2.7981, grad_fn=)
loss_objectness:tensor(0.3345)
loss_rpn_box_reg:tensor(0.0202)
Total Train Loss: 3.8879

Epoch 2/99

lr:0.0001
loss_classifier:tensor(0.4770, grad_fn=)
loss_box_reg:tensor(0.0003, grad_fn=)
loss_mask:tensor(1.5643, grad_fn=)
loss_objectness:tensor(0.0863)
loss_rpn_box_reg:tensor(0.0060)
Total Train Loss: 2.1340

Epoch 3/99

lr:0.0001
loss_classifier:tensor(0.4392, grad_fn=)
loss_box_reg:tensor(3.1149e-06, grad_fn=)
loss_mask:tensor(0.8555, grad_fn=)
loss_objectness:tensor(16.7120)
loss_rpn_box_reg:tensor(43.9244)
Total Train Loss: 61.9311

Epoch 4/99

lr:1.0000000000000003e-05
loss_classifier:tensor(0.4223, grad_fn=)
loss_box_reg:tensor(3.1140e-06, grad_fn=)
loss_mask:tensor(0.8396, grad_fn=)
loss_objectness:tensor(17.1041)
loss_rpn_box_reg:tensor(81.0183)
Total Train Loss: 99.3843

Epoch 5/99

lr:1.0000000000000003e-05
loss_classifier:tensor(0.4154, grad_fn=)
loss_box_reg:tensor(3.1139e-06, grad_fn=)
loss_mask:tensor(0.8382, grad_fn=)
loss_objectness:tensor(17.0591)
loss_rpn_box_reg:tensor(132.7647)
Total Train Loss: 151.0775

Epoch 6/99

lr:1.0000000000000002e-06
loss_classifier:tensor(0.4108, grad_fn=)
loss_box_reg:tensor(3.1138e-06, grad_fn=)
loss_mask:tensor(0.8367, grad_fn=)
loss_objectness:tensor(17.5224)
loss_rpn_box_reg:tensor(223.1389)
Total Train Loss: 241.9087

Epoch 7/99

lr:1.0000000000000002e-06
loss_classifier:tensor(0.4144, grad_fn=)
loss_box_reg:tensor(3.1138e-06, grad_fn=)
loss_mask:tensor(0.8365, grad_fn=)
loss_objectness:tensor(18.0877)
loss_rpn_box_reg:tensor(382.8543)
Total Train Loss: 402.1930

Epoch 8/99

lr:1.0000000000000002e-07
loss_classifier:tensor(0.4104, grad_fn=)
loss_box_reg:tensor(3.1138e-06, grad_fn=)
loss_mask:tensor(0.8364, grad_fn=)
loss_objectness:tensor(17.2382)
loss_rpn_box_reg:tensor(582.1258)
Total Train Loss: 600.6108

Epoch 9/99

lr:1.0000000000000002e-07
loss_classifier:tensor(0.4049, grad_fn=)
loss_box_reg:tensor(3.1138e-06, grad_fn=)
loss_mask:tensor(0.8364, grad_fn=)
loss_objectness:tensor(17.4174)
loss_rpn_box_reg:tensor(895.7090)
Total Train Loss: 914.3677

Epoch 10/99

lr:1.0000000000000004e-08
Traceback (most recent call last):
File “C:/Users/farbo/Documents/GitHub/Springboard/Capstone/toyexample/toylearner.py”, line 175, in
loss_dict = model(inputs, targets)
File “C:\Users\farbo\Anaconda3\lib\site-packages\torch\nn\modules\module.py”, line 547, in call
result = self.forward(*input, **kwargs)
File “C:\Users\farbo\Anaconda3\lib\site-packages\torchvision\models\detection\generalized_rcnn.py”, line 47, in forward
images, targets = self.transform(images, targets)
File “C:\Users\farbo\Anaconda3\lib\site-packages\torch\nn\modules\module.py”, line 547, in call
result = self.forward(*input, **kwargs)
File “C:\Users\farbo\Anaconda3\lib\site-packages\torchvision\models\detection\transform.py”, line 41, in forward
image, target = self.resize(image, target)
File “C:\Users\farbo\Anaconda3\lib\site-packages\torchvision\models\detection\transform.py”, line 81, in resize
mask = misc_nn_ops.interpolate(mask[None].float(), scale_factor=scale_factor)[0].byte()
File “C:\Users\farbo\Anaconda3\lib\site-packages\torchvision\ops\misc.py”, line 101, in interpolate
input, size, scale_factor, mode, align_corners
File “C:\Users\farbo\Anaconda3\lib\site-packages\torch\nn\functional.py”, line 2485, in interpolate
return torch._C._nn.upsample_nearest2d(input, _output_size(2))
RuntimeError: [enforce fail at …\c10\core\CPUAllocator.cpp:72] data. DefaultCPUAllocator: not enough memory: you tried to allocate 19241573796 bytes. Buy new RAM!

Process finished with exit code 1

EXP 2 :

Pretrained:False
momentum:0.9
weight_decay:0.0001
LR decay gamma:0.1
LR decay step size:2

Epoch 0/99

lr:0.001
loss_classifier:tensor(0.6455, grad_fn=)
loss_box_reg:tensor(1.5951e-05, grad_fn=)
loss_mask:tensor(0.9080, grad_fn=)
loss_objectness:tensor(0.6955, grad_fn=)
loss_rpn_box_reg:tensor(0.0125, grad_fn=)
Total Train Loss: 2.2615

Epoch 1/99

lr:0.001
loss_classifier:tensor(0.6415, grad_fn=)
loss_box_reg:tensor(9.1854e-06, grad_fn=)
loss_mask:tensor(0.7481, grad_fn=)
loss_objectness:tensor(0.6944, grad_fn=)
loss_rpn_box_reg:tensor(0.0104, grad_fn=)
Total Train Loss: 2.0944

Epoch 2/99

lr:0.0001
loss_classifier:tensor(0.6334, grad_fn=)
loss_box_reg:tensor(1.4011e-06, grad_fn=)
loss_mask:tensor(0.6792, grad_fn=)
loss_objectness:tensor(0.6945, grad_fn=)
loss_rpn_box_reg:tensor(0.0034, grad_fn=)
Total Train Loss: 2.0105

Epoch 3/99

lr:0.0001
loss_classifier:tensor(0.6323, grad_fn=)
loss_box_reg:tensor(2.0635e-06, grad_fn=)
loss_mask:tensor(0.6925, grad_fn=)
loss_objectness:tensor(0.6919, grad_fn=)
loss_rpn_box_reg:tensor(44.4427, grad_fn=)
Total Train Loss: 46.4594

Epoch 4/99

lr:1.0000000000000003e-05
loss_classifier:tensor(0.6304, grad_fn=)
loss_box_reg:tensor(2.0636e-06, grad_fn=)
loss_mask:tensor(0.6914, grad_fn=)
loss_objectness:tensor(0.6912, grad_fn=)
loss_rpn_box_reg:tensor(77.8039, grad_fn=)
Total Train Loss: 79.8169

Epoch 5/99

lr:1.0000000000000003e-05
loss_classifier:tensor(0.6314, grad_fn=)
loss_box_reg:tensor(2.0636e-06, grad_fn=)
loss_mask:tensor(0.6912, grad_fn=)
loss_objectness:tensor(0.6912, grad_fn=)
loss_rpn_box_reg:tensor(138.1149, grad_fn=)
Total Train Loss: 140.1288

Epoch 6/99

lr:1.0000000000000002e-06
loss_classifier:tensor(0.6308, grad_fn=)
loss_box_reg:tensor(2.0636e-06, grad_fn=)
loss_mask:tensor(0.6911, grad_fn=)
loss_objectness:tensor(0.6919, grad_fn=)
loss_rpn_box_reg:tensor(225.4656, grad_fn=)
Total Train Loss: 227.4794

Epoch 7/99

lr:1.0000000000000002e-06
loss_classifier:tensor(0.6300, grad_fn=)
loss_box_reg:tensor(2.0636e-06, grad_fn=)
loss_mask:tensor(0.6911, grad_fn=)
loss_objectness:tensor(0.6910, grad_fn=)
loss_rpn_box_reg:tensor(360.3577, grad_fn=)
Total Train Loss: 362.3698

Epoch 8/99

lr:1.0000000000000002e-07
loss_classifier:tensor(0.6301, grad_fn=)
loss_box_reg:tensor(2.0636e-06, grad_fn=)
loss_mask:tensor(0.6911, grad_fn=)
loss_objectness:tensor(0.6912, grad_fn=)
loss_rpn_box_reg:tensor(544.0331, grad_fn=)
Total Train Loss: 546.0455

Epoch 9/99

lr:1.0000000000000002e-07
loss_classifier:tensor(0.6296, grad_fn=)
loss_box_reg:tensor(2.0636e-06, grad_fn=)
loss_mask:tensor(0.6911, grad_fn=)
loss_objectness:tensor(0.6913, grad_fn=)
loss_rpn_box_reg:tensor(903.4316, grad_fn=)
Total Train Loss: 905.4436

Epoch 10/99

lr:1.0000000000000004e-08
Traceback (most recent call last):
File “C:/Users/farbo/Documents/GitHub/Springboard/Capstone/toyexample/toylearner.py”, line 175, in
loss_dict = model(inputs, targets)
File “C:\Users\farbo\Anaconda3\lib\site-packages\torch\nn\modules\module.py”, line 547, in call
result = self.forward(*input, **kwargs)
File “C:\Users\farbo\Anaconda3\lib\site-packages\torchvision\models\detection\generalized_rcnn.py”, line 47, in forward
images, targets = self.transform(images, targets)
File “C:\Users\farbo\Anaconda3\lib\site-packages\torch\nn\modules\module.py”, line 547, in call
result = self.forward(*input, **kwargs)
File “C:\Users\farbo\Anaconda3\lib\site-packages\torchvision\models\detection\transform.py”, line 41, in forward
image, target = self.resize(image, target)
File “C:\Users\farbo\Anaconda3\lib\site-packages\torchvision\models\detection\transform.py”, line 81, in resize
mask = misc_nn_ops.interpolate(mask[None].float(), scale_factor=scale_factor)[0].byte()
File “C:\Users\farbo\Anaconda3\lib\site-packages\torchvision\ops\misc.py”, line 101, in interpolate
input, size, scale_factor, mode, align_corners
File “C:\Users\farbo\Anaconda3\lib\site-packages\torch\nn\functional.py”, line 2485, in interpolate
return torch._C._nn.upsample_nearest2d(input, _output_size(2))
RuntimeError: [enforce fail at …\c10\core\CPUAllocator.cpp:72] data. DefaultCPUAllocator: not enough memory: you tried to allocate 19241573796 bytes. Buy new RAM!

Process finished with exit code 1

EXP 3:

Pretrained:True
momentum:0.9
weight_decay:0.0001
LR decay gamma:0.1
LR decay step size:2

Epoch 0/99

lr:1e-06
loss_classifier:tensor(1.0575, grad_fn=)
loss_box_reg:tensor(0.0339, grad_fn=)
loss_mask:tensor(1.5133, grad_fn=)
loss_objectness:tensor(0.1397)
loss_rpn_box_reg:tensor(0.0056)
Total Train Loss: 2.7500

Epoch 1/99

lr:1e-06
loss_classifier:tensor(1.0740, grad_fn=)
loss_box_reg:tensor(0.0004, grad_fn=)
loss_mask:tensor(1.8356, grad_fn=)
loss_objectness:tensor(0.3324)
loss_rpn_box_reg:tensor(0.0202)
Total Train Loss: 3.2626

Epoch 2/99

lr:1e-07
loss_classifier:tensor(1.0606, grad_fn=)
loss_box_reg:tensor(0.0001, grad_fn=)
loss_mask:tensor(1.3172, grad_fn=)
loss_objectness:tensor(0.1014)
loss_rpn_box_reg:tensor(0.0060)
Total Train Loss: 2.4854

Epoch 3/99

lr:1e-07
loss_classifier:tensor(1.0685, grad_fn=)
loss_box_reg:tensor(2.1282e-06, grad_fn=)
loss_mask:tensor(0.8204, grad_fn=)
loss_objectness:tensor(18.2750)
loss_rpn_box_reg:tensor(44.1334)
Total Train Loss: 64.2974

Epoch 4/99

lr:1.0000000000000002e-08
loss_classifier:tensor(1.0588, grad_fn=)
loss_box_reg:tensor(2.1282e-06, grad_fn=)
loss_mask:tensor(0.8204, grad_fn=)
loss_objectness:tensor(17.2795)
loss_rpn_box_reg:tensor(78.6653)
Total Train Loss: 97.8240

Epoch 5/99

lr:1.0000000000000002e-08
loss_classifier:tensor(1.0760, grad_fn=)
loss_box_reg:tensor(2.1282e-06, grad_fn=)
loss_mask:tensor(0.8208, grad_fn=)
loss_objectness:tensor(17.1733)
loss_rpn_box_reg:tensor(140.2065)
Total Train Loss: 159.2766

Epoch 6/99

lr:1.0000000000000003e-09
loss_classifier:tensor(1.0520, grad_fn=)
loss_box_reg:tensor(2.1282e-06, grad_fn=)
loss_mask:tensor(0.8211, grad_fn=)
loss_objectness:tensor(17.3521)
loss_rpn_box_reg:tensor(219.8562)
Total Train Loss: 239.0814

Epoch 7/99

lr:1.0000000000000003e-09
loss_classifier:tensor(1.0626, grad_fn=)
loss_box_reg:tensor(2.1282e-06, grad_fn=)
loss_mask:tensor(0.8212, grad_fn=)
loss_objectness:tensor(16.9220)
loss_rpn_box_reg:tensor(360.1990)
Total Train Loss: 379.0047

Epoch 8/99

lr:1.0000000000000002e-10
loss_classifier:tensor(1.0660, grad_fn=)
loss_box_reg:tensor(2.1282e-06, grad_fn=)
loss_mask:tensor(0.8213, grad_fn=)
loss_objectness:tensor(17.2279)
loss_rpn_box_reg:tensor(582.5305)
Total Train Loss: 601.6457

Epoch 9/99

lr:1.0000000000000002e-10
loss_classifier:tensor(1.0705, grad_fn=)
loss_box_reg:tensor(2.1282e-06, grad_fn=)
loss_mask:tensor(0.8214, grad_fn=)
loss_objectness:tensor(16.8634)
loss_rpn_box_reg:tensor(893.6630)
Total Train Loss: 912.4183

Epoch 10/99

lr:1.0000000000000003e-11
Traceback (most recent call last):
File “C:/Users/farbo/Documents/GitHub/Springboard/Capstone/toyexample/toylearner.py”, line 175, in
loss_dict = model(inputs, targets)
File “C:\Users\farbo\Anaconda3\lib\site-packages\torch\nn\modules\module.py”, line 547, in call
result = self.forward(*input, **kwargs)
File “C:\Users\farbo\Anaconda3\lib\site-packages\torchvision\models\detection\generalized_rcnn.py”, line 47, in forward
images, targets = self.transform(images, targets)
File “C:\Users\farbo\Anaconda3\lib\site-packages\torch\nn\modules\module.py”, line 547, in call
result = self.forward(*input, **kwargs)
File “C:\Users\farbo\Anaconda3\lib\site-packages\torchvision\models\detection\transform.py”, line 41, in forward
image, target = self.resize(image, target)
File “C:\Users\farbo\Anaconda3\lib\site-packages\torchvision\models\detection\transform.py”, line 81, in resize
mask = misc_nn_ops.interpolate(mask[None].float(), scale_factor=scale_factor)[0].byte()
File “C:\Users\farbo\Anaconda3\lib\site-packages\torchvision\ops\misc.py”, line 101, in interpolate
input, size, scale_factor, mode, align_corners
File “C:\Users\farbo\Anaconda3\lib\site-packages\torch\nn\functional.py”, line 2485, in interpolate
return torch._C._nn.upsample_nearest2d(input, _output_size(2))
RuntimeError: [enforce fail at …\c10\core\CPUAllocator.cpp:72] data. DefaultCPUAllocator: not enough memory: you tried to allocate 19241573796 bytes. Buy new RAM!

Process finished with exit code 1


EXP 4:

Pretrained:False
momentum:0.9
weight_decay:0.0001
LR decay gamma:0.1
LR decay step size:2

Epoch 0/99

lr:1e-06
loss_classifier:tensor(0.6587, grad_fn=)
loss_box_reg:tensor(0.0081, grad_fn=)
loss_mask:tensor(1.0298, grad_fn=)
loss_objectness:tensor(0.6957, grad_fn=)
loss_rpn_box_reg:tensor(0.0133, grad_fn=)
Total Train Loss: 2.4057

Epoch 1/99

lr:1e-06
loss_classifier:tensor(0.6585, grad_fn=)
loss_box_reg:tensor(5.9168e-07, grad_fn=)
loss_mask:tensor(0.9116, grad_fn=)
loss_objectness:tensor(0.6954, grad_fn=)
loss_rpn_box_reg:tensor(0.0105, grad_fn=)
Total Train Loss: 2.2759

Epoch 2/99

lr:1e-07
loss_classifier:tensor(0.6585, grad_fn=)
loss_box_reg:tensor(1.3808e-06, grad_fn=)
loss_mask:tensor(0.8032, grad_fn=)
loss_objectness:tensor(0.6952, grad_fn=)
loss_rpn_box_reg:tensor(0.0035, grad_fn=)
Total Train Loss: 2.1605

Epoch 3/99

lr:1e-07
loss_classifier:tensor(0.6584, grad_fn=)
loss_box_reg:tensor(1.5408e-06, grad_fn=)
loss_mask:tensor(0.7098, grad_fn=)
loss_objectness:tensor(0.6902, grad_fn=)
loss_rpn_box_reg:tensor(42.8156, grad_fn=)
Total Train Loss: 44.8740

Epoch 4/99

lr:1.0000000000000002e-08
loss_classifier:tensor(0.6580, grad_fn=)
loss_box_reg:tensor(1.5408e-06, grad_fn=)
loss_mask:tensor(0.7098, grad_fn=)
loss_objectness:tensor(0.6905, grad_fn=)
loss_rpn_box_reg:tensor(83.0204, grad_fn=)
Total Train Loss: 85.0788

Epoch 5/99

lr:1.0000000000000002e-08
loss_classifier:tensor(0.6591, grad_fn=)
loss_box_reg:tensor(1.5408e-06, grad_fn=)
loss_mask:tensor(0.7099, grad_fn=)
loss_objectness:tensor(0.6908, grad_fn=)
loss_rpn_box_reg:tensor(138.3920, grad_fn=)
Total Train Loss: 140.4518

Epoch 6/99

lr:1.0000000000000003e-09
loss_classifier:tensor(0.6594, grad_fn=)
loss_box_reg:tensor(1.5408e-06, grad_fn=)
loss_mask:tensor(0.7099, grad_fn=)
loss_objectness:tensor(0.6906, grad_fn=)
loss_rpn_box_reg:tensor(214.6539, grad_fn=)
Total Train Loss: 216.7137

Epoch 7/99

lr:1.0000000000000003e-09
loss_classifier:tensor(0.6589, grad_fn=)
loss_box_reg:tensor(1.5408e-06, grad_fn=)
loss_mask:tensor(0.7099, grad_fn=)
loss_objectness:tensor(0.6908, grad_fn=)
loss_rpn_box_reg:tensor(371.0978, grad_fn=)
Total Train Loss: 373.1573

Epoch 8/99

lr:1.0000000000000002e-10
loss_classifier:tensor(0.6588, grad_fn=)
loss_box_reg:tensor(1.5408e-06, grad_fn=)
loss_mask:tensor(0.7099, grad_fn=)
loss_objectness:tensor(0.6902, grad_fn=)
loss_rpn_box_reg:tensor(565.0930, grad_fn=)
Total Train Loss: 567.1518

Epoch 9/99

lr:1.0000000000000002e-10
loss_classifier:tensor(0.6583, grad_fn=)
loss_box_reg:tensor(1.5408e-06, grad_fn=)
loss_mask:tensor(0.7099, grad_fn=)
loss_objectness:tensor(0.6898, grad_fn=)
loss_rpn_box_reg:tensor(877.6884, grad_fn=)
Total Train Loss: 879.7464

Epoch 10/99

lr:1.0000000000000003e-11
Traceback (most recent call last):
File “C:/Users/farbo/Documents/GitHub/Springboard/Capstone/toyexample/toylearner.py”, line 175, in
loss_dict = model(inputs, targets)
File “C:\Users\farbo\Anaconda3\lib\site-packages\torch\nn\modules\module.py”, line 547, in call
result = self.forward(*input, **kwargs)
File “C:\Users\farbo\Anaconda3\lib\site-packages\torchvision\models\detection\generalized_rcnn.py”, line 47, in forward
images, targets = self.transform(images, targets)
File “C:\Users\farbo\Anaconda3\lib\site-packages\torch\nn\modules\module.py”, line 547, in call
result = self.forward(*input, **kwargs)
File “C:\Users\farbo\Anaconda3\lib\site-packages\torchvision\models\detection\transform.py”, line 41, in forward
image, target = self.resize(image, target)
File “C:\Users\farbo\Anaconda3\lib\site-packages\torchvision\models\detection\transform.py”, line 81, in resize
mask = misc_nn_ops.interpolate(mask[None].float(), scale_factor=scale_factor)[0].byte()
File “C:\Users\farbo\Anaconda3\lib\site-packages\torchvision\ops\misc.py”, line 101, in interpolate
input, size, scale_factor, mode, align_corners
File “C:\Users\farbo\Anaconda3\lib\site-packages\torch\nn\functional.py”, line 2485, in interpolate
return torch._C._nn.upsample_nearest2d(input, _output_size(2))
RuntimeError: [enforce fail at …\c10\core\CPUAllocator.cpp:72] data. DefaultCPUAllocator: not enough memory: you tried to allocate 19241573796 bytes. Buy new RAM!

Process finished with exit code 1

If I don’t crop the image and change

model = maskrcnn_resnet50_fpn(pretrained=True)

to

model = maskrcnn_resnet50_fpn(pretrained=True, max_size=512)

then the model losses no longer explode and I also don't get an out-of-memory error. The model still cannot predict correctly when given the exact same image during inference, but maybe I need to experiment more with the parameters.
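For context, those keyword arguments get forwarded to the model's internal GeneralizedRCNNTransform, which by default rescales every input so that the shorter side is 800 px (min_size=800) and the longer side is at most 1333 px (max_size=1333). A sketch of pinning both to the native CT resolution (note that only max_size=512 is what I actually tested above):

from torchvision.models.detection import maskrcnn_resnet50_fpn

model = maskrcnn_resnet50_fpn(pretrained=True, min_size=512, max_size=512)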

When I pass the cropped image I don't run into the out-of-memory issue, but the results are not as good as when I pass the uncropped image.

Hey,

I took a look at your code but can’t see any obvious issues.

You might consider starting with the known-to-work torchvision reference (https://github.com/pytorch/vision/tree/master/references/detection) and adjusting it for your use case. I had good luck with this approach.

If you do have more than one image annotated, try using more than one image and a proper Dataset/DataLoader. You shouldn’t have any problems making the model overfit.
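If it helps, here is a minimal sketch of that setup (the dataset class and names are hypothetical), using the collate_fn trick from the detection references so that images of different sizes are kept in a tuple instead of being stacked:

from torch.utils.data import Dataset, DataLoader
from torchvision.models.detection import maskrcnn_resnet50_fpn

class LesionDataset(Dataset):
    # wraps a list of (image_tensor, target_dict) pairs
    def __init__(self, samples):
        self.samples = samples

    def __len__(self):
        return len(self.samples)

    def __getitem__(self, idx):
        return self.samples[idx]

model = maskrcnn_resnet50_fpn(num_classes=2)
model.train()

# e.g. reuse the (img_T, elem) pairs you already build, one per annotated image
dataset = LesionDataset([(img_T, elem)])
loader = DataLoader(dataset, batch_size=2, shuffle=True,
                    collate_fn=lambda batch: tuple(zip(*batch)))

for images, targets in loader:
    loss_dict = model(list(images), list(targets))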

Hey there, a quick question, since I'm also having problems with Mask R-CNN not converging: could this be due to the fact that our batch size is 1 and ResNet uses batch normalization in its layers? My batch size is 1 due to hardware limitations, and you're basically feeding the network the same picture over and over, which also makes your batch size 1. Batch normalization progressively deteriorates as the batch size goes down and doesn't really make sense with a batch size of 1 at all.

Not an expert on this topic by any means so I’m asking here for someone with more knowledge.
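One way to check whether this even applies here would be to inspect which normalization layers the backbone actually uses (just an inspection snippet, not asserting what it will print):

from collections import Counter
from torchvision.models.detection import maskrcnn_resnet50_fpn

model = maskrcnn_resnet50_fpn(num_classes=2)
# count the normalization layer types in the ResNet-50 FPN backbone
print(Counter(type(m).__name__ for m in model.backbone.modules() if 'Norm' in type(m).__name__))

If these turn out to be FrozenBatchNorm2d layers (with fixed running statistics), the batch-size-1 concern about batch norm would mostly not apply.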