Losses become NaN during training: how can I debug and fix them?

Could you please help me figure out why I am getting NaN loss values, and how to debug and fix them?

P.S.: Why are my losses so large, and how can I fix that?

After running this cell of code:

network = Network()
network.cuda()    

criterion = nn.MSELoss()
optimizer = optim.Adam(network.parameters(), lr=0.0001)

loss_min = np.inf
num_epochs = 10

start_time = time.time()
for epoch in range(1,num_epochs+1):
    
    loss_train = 0
    loss_test = 0
    running_loss = 0
    
    
    network.train()
    print('size of train loader is: ', len(train_loader))

    # iterate the loader directly; calling next(iter(train_loader)) inside the
    # loop would rebuild the iterator and re-draw from the start every step
    for step, batch in enumerate(train_loader, 1):

        images, landmarks = batch['image'], batch['landmarks']
        images = images.permute(0, 3, 1, 2)  # NHWC -> NCHW, as conv layers expect

        images = images.cuda()
        
        #RuntimeError: Given groups=1, weight of size [64, 3, 7, 7], expected input[64, 600, 800, 3] to have 3 channels, but got 600 channels instead
    
        
    
        landmarks = landmarks.view(landmarks.size(0),-1).cuda() 
        
        
        ##images = torchvision.transforms.Normalize(images)
        ##landmarks = torchvision.transforms.Normalize(landmarks)
        
        predictions = network(images)
        
        # clear all the gradients before calculating them
        optimizer.zero_grad()
        
        # find the loss for the current step
        loss_train_step = criterion(predictions.float(), landmarks.float())
        
        
        ##loss_train_step = loss_train_step.to(torch.float32)
        
        # calculate the gradients
        loss_train_step.backward()
        
        # update the parameters
        optimizer.step()
        
        loss_train += loss_train_step.item()
        running_loss = loss_train/step
        
        print_overwrite(step, len(train_loader), running_loss, 'train')
        
    network.eval() 
    with torch.no_grad():
        
        for step, batch in enumerate(test_loader, 1):

            images, landmarks = batch['image'], batch['landmarks']
            images = images.permute(0, 3, 1, 2)
            images = images.cuda()
            landmarks = landmarks.view(landmarks.size(0), -1).cuda()
        
            predictions = network(images)

            # find the loss for the current step
            loss_test_step = criterion(predictions, landmarks)

            loss_test += loss_test_step.item()
            running_loss = loss_test/step

            print_overwrite(step, len(test_loader), running_loss, 'Testing')
    
    loss_train /= len(train_loader)
    loss_test /= len(test_loader)
    
    print('\n--------------------------------------------------')
    print('Epoch: {}  Train Loss: {:.4f}  Test Loss: {:.4f}'.format(epoch, loss_train, loss_test))
    print('--------------------------------------------------')
    
    if loss_test < loss_min:
        loss_min = loss_test
        torch.save(network.state_dict(), '../moth_landmarks.pth') 
        print("\nMinimum Test Loss of {:.4f} at epoch {}/{}".format(loss_min, epoch, num_epochs))
        print('Model Saved\n')
     
print('Training Complete')
print("Total Elapsed Time : {} s".format(time.time()-start_time))

I get the following NaN losses:

size of train loader is:  90
Valid Steps: 10/10  Loss: nan 
--------------------------------------------------
Epoch: 1  Train Loss: nan  Test Loss: nan
--------------------------------------------------
(epochs 2 through 10 print exactly the same output: Train Loss: nan, Test Loss: nan)
Training Complete
Total Elapsed Time : 934.4894697666168 s

Here’s the network:

num_classes = 4 * 2  # 4 keypoints, each with an (x, y) coordinate, flattened to 8 values

class Network(nn.Module):
    def __init__(self,num_classes=8):
        super().__init__()
        self.model_name = 'resnet18'
        self.model = models.resnet18()
        self.model.fc = nn.Linear(self.model.fc.in_features, num_classes)
        
    def forward(self, x):
        x = x.float()
        out = self.model(x)
        return out

If I comment out the part related to 'normalize', I still get the NaN loss:

transformed_dataset = MothLandmarksDataset(csv_file='moth_gt.csv',
                                           root_dir='.',
                                           transform=transforms.Compose([
                                               Rescale(256),
                                               RandomCrop(224),
                                               ToTensor()
                                               ## transforms.Normalize(mean=[0.485, 0.456, 0.406],
                                               ##                      std=[0.229, 0.224, 0.225])
                                           ]))

This is the result after commenting out transforms.Normalize and changing the number of epochs to 1:

size of train loader is:  90
Valid Steps: 10/10  Loss: nan 
--------------------------------------------------
Epoch: 1  Train Loss: nan  Test Loss: nan
--------------------------------------------------
Training Complete
Total Elapsed Time : 93.34211421012878 s

Here’s the log of what I see for one epoch, also with transforms.Normalize commented out:

size of train loader is:  90
loss_train_step before backward:  tensor(157314.2188, device='cuda:0', grad_fn=<MseLossBackward>)
loss_train_step after backward:  tensor(157314.2188, device='cuda:0', grad_fn=<MseLossBackward>)
loss_train:  157314.21875
step:  1
running loss:  157314.21875
Train Steps: 1/90  Loss: 157314.2188 loss_train_step before backward:  tensor(172433.0312, device='cuda:0', grad_fn=<MseLossBackward>)
loss_train_step after backward:  tensor(172433.0312, device='cuda:0', grad_fn=<MseLossBackward>)
loss_train:  329747.25
step:  2
running loss:  164873.625
Train Steps: 2/90  Loss: 164873.6250 loss_train_step before backward:  tensor(161687.8438, device='cuda:0', grad_fn=<MseLossBackward>)
loss_train_step after backward:  tensor(161687.8438, device='cuda:0', grad_fn=<MseLossBackward>)
loss_train:  491435.09375
step:  3
running loss:  163811.69791666666
Train Steps: 3/90  Loss: 163811.6979 loss_train_step before backward:  tensor(172857.6250, device='cuda:0', grad_fn=<MseLossBackward>)
loss_train_step after backward:  tensor(172857.6250, device='cuda:0', grad_fn=<MseLossBackward>)
loss_train:  664292.71875
step:  4
running loss:  166073.1796875
Train Steps: 4/90  Loss: 166073.1797 loss_train_step before backward:  tensor(167570.2188, device='cuda:0', grad_fn=<MseLossBackward>)
loss_train_step after backward:  tensor(167570.2188, device='cuda:0', grad_fn=<MseLossBackward>)
loss_train:  831862.9375
step:  5
running loss:  166372.5875
Train Steps: 5/90  Loss: 166372.5875 loss_train_step before backward:  tensor(164119.4062, device='cuda:0', grad_fn=<MseLossBackward>)
loss_train_step after backward:  tensor(164119.4062, device='cuda:0', grad_fn=<MseLossBackward>)
loss_train:  995982.34375
step:  6
running loss:  165997.05729166666
Train Steps: 6/90  Loss: 165997.0573 loss_train_step before backward:  tensor(174509.2500, device='cuda:0', grad_fn=<MseLossBackward>)
loss_train_step after backward:  tensor(174509.2500, device='cuda:0', grad_fn=<MseLossBackward>)
loss_train:  1170491.59375
step:  7
running loss:  167213.08482142858
Train Steps: 7/90  Loss: 167213.0848 loss_train_step before backward:  tensor(167285.3906, device='cuda:0', grad_fn=<MseLossBackward>)
loss_train_step after backward:  tensor(167285.3906, device='cuda:0', grad_fn=<MseLossBackward>)
loss_train:  1337776.984375
step:  8
running loss:  167222.123046875
Train Steps: 8/90  Loss: 167222.1230 loss_train_step before backward:  tensor(176070.6094, device='cuda:0', grad_fn=<MseLossBackward>)
loss_train_step after backward:  tensor(176070.6094, device='cuda:0', grad_fn=<MseLossBackward>)
loss_train:  1513847.59375
step:  9
running loss:  168205.28819444444
Train Steps: 9/90  Loss: 168205.2882 loss_train_step before backward:  tensor(167046.6875, device='cuda:0', grad_fn=<MseLossBackward>)
loss_train_step after backward:  tensor(167046.6875, device='cuda:0', grad_fn=<MseLossBackward>)
loss_train:  1680894.28125
step:  10
running loss:  168089.428125
Train Steps: 10/90  Loss: 168089.4281 loss_train_step before backward:  tensor(159272.8438, device='cuda:0', grad_fn=<MseLossBackward>)
loss_train_step after backward:  tensor(159272.8438, device='cuda:0', grad_fn=<MseLossBackward>)
loss_train:  1840167.125
step:  11
running loss:  167287.92045454544
Train Steps: 11/90  Loss: 167287.9205 loss_train_step before backward:  tensor(169830.3125, device='cuda:0', grad_fn=<MseLossBackward>)
loss_train_step after backward:  tensor(169830.3125, device='cuda:0', grad_fn=<MseLossBackward>)
loss_train:  2009997.4375
step:  12
running loss:  167499.78645833334
Train Steps: 12/90  Loss: 167499.7865 loss_train_step before backward:  tensor(159050.5156, device='cuda:0', grad_fn=<MseLossBackward>)
loss_train_step after backward:  tensor(159050.5156, device='cuda:0', grad_fn=<MseLossBackward>)
loss_train:  2169047.953125
step:  13
running loss:  166849.84254807694
Train Steps: 13/90  Loss: 166849.8425 loss_train_step before backward:  tensor(166620.4375, device='cuda:0', grad_fn=<MseLossBackward>)
loss_train_step after backward:  tensor(166620.4375, device='cuda:0', grad_fn=<MseLossBackward>)
loss_train:  2335668.390625
step:  14
running loss:  166833.4564732143
Train Steps: 14/90  Loss: 166833.4565 loss_train_step before backward:  tensor(157660.1094, device='cuda:0', grad_fn=<MseLossBackward>)
loss_train_step after backward:  tensor(157660.1094, device='cuda:0', grad_fn=<MseLossBackward>)
loss_train:  2493328.5
step:  15
running loss:  166221.9
Train Steps: 15/90  Loss: 166221.9000 loss_train_step before backward:  tensor(157721.7969, device='cuda:0', grad_fn=<MseLossBackward>)
loss_train_step after backward:  tensor(157721.7969, device='cuda:0', grad_fn=<MseLossBackward>)
loss_train:  2651050.296875
step:  16
running loss:  165690.6435546875
Train Steps: 16/90  Loss: 165690.6436 loss_train_step before backward:  tensor(nan, device='cuda:0', grad_fn=<MseLossBackward>)
loss_train_step after backward:  tensor(nan, device='cuda:0', grad_fn=<MseLossBackward>)
loss_train:  nan
step:  17
running loss:  nan
Train Steps: 17/90  Loss: nan 
(loss_train_step, loss_train, and the running loss stay nan for every remaining step, 17 through 90)
Valid Steps: 10/10  Loss: nan 
--------------------------------------------------
Epoch: 1  Train Loss: nan Valid Loss: nan
--------------------------------------------------
Training Complete
Total Elapsed Time : 95.49646234512329 s

So the predictions are very far off, but how can I fix them?

What should I change in my Network?

Here’s a snippet for step 1 in epoch 1:

size of train loader is:  90
predictions are:  tensor([[-0.0380, -0.1871,  0.0729, -0.3570, -0.2153,  0.3066,  1.1273, -0.0558],
        [-0.0316, -0.1876,  0.0317, -0.3613, -0.2333,  0.3023,  1.0940, -0.0665],
        [-0.0700, -0.1882,  0.0068, -0.3201, -0.1884,  0.2953,  1.0516, -0.0567],
        [-0.0844, -0.2009,  0.0573, -0.3166, -0.2597,  0.3127,  1.0343, -0.0573],
        [-0.0486, -0.2333,  0.0535, -0.3245, -0.2310,  0.2818,  1.0590, -0.0716],
        [-0.0240, -0.1989,  0.0572, -0.3135, -0.2435,  0.2912,  1.0612, -0.0560],
        [-0.0942, -0.2439,  0.0277, -0.3147, -0.2368,  0.2978,  1.0110, -0.0874],
        [-0.0356, -0.2285,  0.0064, -0.3179, -0.2432,  0.3083,  1.0300, -0.0756]],
       device='cuda:0', grad_fn=<AddmmBackward>)
landmarks are:  tensor([[501.9200, 240.1600, 691.0000, 358.0000, 295.0000, 294.0000, 488.6482,
         279.6466],
        [495.6300, 246.0600, 692.0000, 235.0000, 286.0000, 242.0000, 464.0000,
         339.0000],
        [488.7100, 240.8900, 613.4007, 218.3425, 281.0000, 220.0000, 415.9966,
         338.4796],
        [502.5721, 245.4983, 640.0000, 131.0000, 360.0000, 143.0000, 542.9840,
         321.8463],
        [505.1393, 246.4364, 700.0000, 306.0000, 303.0000, 294.0000, 569.6925,
         351.8367],
        [501.0900, 244.0100, 724.0000, 251.0000, 302.0000, 276.0000, 504.6415,
         291.7443],
        [495.9500, 244.2800, 608.0000, 127.0000, 323.0000, 166.0000, 491.0000,
         333.0000],
        [490.2500, 241.3400, 699.0000, 304.0000, 398.6197, 313.8339, 429.1374,
         303.8483]], device='cuda:0')
loss_train_step before backward:  tensor(166475.6875, device='cuda:0', grad_fn=<MseLossBackward>)
loss_train_step after backward:  tensor(166475.6875, device='cuda:0', grad_fn=<MseLossBackward>)
loss_train:  166475.6875
step:  1
running loss:  166475.6875
Train Steps: 1/90  Loss: 166475.6875

Here’s the full log: https://pastebin.com/raw/Jt98BvTx

Hi,

The usual things to check here are that your inputs are properly normalized, so that they have a scale similar to the weights and to each other.
You also need to make sure that your learning rate is appropriate for your task, to avoid steps so large that they make your network diverge.
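
One way to pin down where the first non-finite value appears is to enable PyTorch's anomaly detection and to check every batch before the forward pass. A minimal sketch (the `check_finite` helper is illustrative, not from the code above):

```python
import torch

def check_finite(name, t):
    # Fail fast with a useful message instead of letting nan propagate
    if not torch.isfinite(t).all():
        raise ValueError(f"{name} contains nan/inf values")

# Makes backward() raise with a traceback pointing at the op that produced nan
torch.autograd.set_detect_anomaly(True)

bad = torch.tensor([1.0, float('nan'), 3.0])
try:
    check_finite('images', bad)
except ValueError as e:
    print(e)  # images contains nan/inf values
```

In the training loop you would call `check_finite('images', images)` and `check_finite('landmarks', landmarks)` right after loading each batch.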


Besides that, you could also try the approach already suggested in this thread: normalize the target for training and “unnormalize” it for predictions.
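
A minimal sketch of that idea; the mean/std values below are placeholders, and the real statistics should be computed from your training targets:

```python
import torch

lm_mean, lm_std = 400.0, 150.0  # placeholders: compute from the training landmarks

def normalize_landmarks(lm):
    # Bring the targets to roughly zero mean / unit variance for training
    return (lm - lm_mean) / lm_std

def unnormalize_landmarks(lm):
    # Map network outputs back to pixel coordinates for evaluation
    return lm * lm_std + lm_mean

targets = torch.tensor([[501.92, 240.16], [495.63, 246.06]])
round_trip = unnormalize_landmarks(normalize_landmarks(targets))
print(torch.allclose(round_trip, targets))  # True
```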


I am now using (apparently) correct values for Normalize, but the predictions are still very off.

network = Network()
network.cuda()    

criterion = nn.MSELoss()
optimizer = optim.Adam(network.parameters(), lr=0.0001)

loss_min = np.inf
num_epochs = 1

start_time = time.time()
for epoch in range(1,num_epochs+1):
    
    loss_train = 0
    loss_test = 0
    running_loss = 0
    
    
    network.train()
    print('size of train loader is: ', len(train_loader))

    for step in range(1,len(train_loader)+1):

        
        batch = next(iter(train_loader))
        images, landmarks = batch['image'], batch['landmarks']
        #RuntimeError: Given groups=1, weight of size [64, 3, 7, 7], expected input[64, 600, 800, 3] to have 3 channels, but got 600 channels instead
        #using permute below to fix the above error
        images = images.permute(0,3,1,2)
        
        images = images.cuda()
    
        landmarks = landmarks.view(landmarks.size(0),-1).cuda() 
    
        norm = transforms.Normalize([0.3809, 0.3810, 0.3810], [0.1127, 0.1129, 0.1130]) 
        for image in images:
            image = image.float()
            ##image = to_tensor(image) #TypeError: pic should be PIL Image or ndarray. Got <class 'torch.Tensor'>
            image = norm(image)
        
        print(type(images))
        ##landmarks = torchvision.transforms.Normalize(landmarks) #Do I need to normalize the target?
        
        predictions = network(images)
        
        # clear all the gradients before calculating them
        optimizer.zero_grad()
        
        print('predictions are: ', predictions.float())
        print('landmarks are: ', landmarks.float())
        # find the loss for the current step
        loss_train_step = criterion(predictions.float(), landmarks.float())
        
        
        loss_train_step = loss_train_step.to(torch.float32)
        print("loss_train_step before backward: ", loss_train_step)
        
        # calculate the gradients
        loss_train_step.backward()
        
        # update the parameters
        optimizer.step()
        
        print("loss_train_step after backward: ", loss_train_step)

        
        loss_train += loss_train_step.item()
        
        print("loss_train: ", loss_train)
        running_loss = loss_train/step
        print('step: ', step)
        print('running loss: ', running_loss)
        
        print_overwrite(step, len(train_loader), running_loss, 'train')
        
    network.eval() 
    with torch.no_grad():
        
        for step in range(1,len(test_loader)+1):
            
            batch = next(iter(train_loader))
            images, landmarks = batch['image'], batch['landmarks']
            images = images.permute(0,3,1,2)
            images = images.cuda()
            landmarks = landmarks.view(landmarks.size(0),-1).cuda()
        
            predictions = network(images)

            # find the loss for the current step
            loss_test_step = criterion(predictions, landmarks)

            loss_test += loss_test_step.item()
            running_loss = loss_test/step

            print_overwrite(step, len(test_loader), running_loss, 'Validation')
    
    loss_train /= len(train_loader)
    loss_test /= len(test_loader)
    
    print('\n--------------------------------------------------')
    print('Epoch: {}  Train Loss: {:.4f} Valid Loss: {:.4f}'.format(epoch, loss_train, loss_test))
    print('--------------------------------------------------')
    
    if loss_test < loss_min:
        loss_min = loss_test
        torch.save(network.state_dict(), '../moth_landmarks.pth') 
        print("\nMinimum Valid Loss of {:.4f} at epoch {}/{}".format(loss_min, epoch, num_epochs))
        print('Model Saved\n')
     
print('Training Complete')
print("Total Elapsed Time : {} s".format(time.time()-start_time))

Here’s the run results for the first step:

size of train loader is:  90
<class 'torch.Tensor'>
predictions are:  tensor([[ 0.5567,  0.1901,  0.1691, -0.5159,  0.7102,  0.1237,  0.1823, -0.0213],
        [ 0.5388,  0.1358,  0.1788, -0.5070,  0.7536,  0.1243,  0.1586, -0.0193],
        [ 0.5594,  0.1783,  0.1923, -0.5533,  0.7156,  0.1126,  0.1507, -0.0277],
        [ 0.5570,  0.1751,  0.1592, -0.5033,  0.7256,  0.1115,  0.1789,  0.0019],
        [ 0.5629,  0.1907,  0.1947, -0.4955,  0.6791,  0.1034,  0.1385,  0.0105],
        [ 0.5977,  0.1855,  0.1474, -0.5349,  0.7066,  0.0978,  0.1709,  0.0122],
        [ 0.5796,  0.1818,  0.2099, -0.5082,  0.7344,  0.1145,  0.1653, -0.0095],
        [ 0.5708,  0.1742,  0.1952, -0.5718,  0.7394,  0.1266,  0.1485, -0.0274]],
       device='cuda:0', grad_fn=<AddmmBackward>)
landmarks are:  tensor([[496.1400, 238.9700, 684.3768, 325.7215, 307.3357, 262.1889, 469.2918,
         323.8934],
        [489.7400, 240.3700, 708.0000, 253.0000, 327.0000, 331.0000, 485.0000,
         331.0000],
        [491.0200, 243.2600, 700.0000, 285.0000, 349.0000, 301.0000, 406.9148,
         349.3146],
        [496.0200, 244.2800, 587.0000, 115.0000, 336.0000, 147.0000, 492.0000,
         331.0000],
        [492.0800, 243.5100, 565.4799, 160.8031, 272.0000, 245.0000, 462.0000,
         344.0000],
        [501.0800, 241.7700, 720.0000, 286.0000, 304.0000, 310.0000, 513.1892,
         286.2780],
        [496.5700, 246.5700, 699.0000, 300.0000, 384.0000, 338.0000, 504.0000,
         326.0000],
        [494.0100, 239.8300, 539.0000, 150.0000, 345.0000, 116.0000, 441.0000,
         345.0000]], device='cuda:0')
loss_train_step before backward:  tensor(162903.9375, device='cuda:0', grad_fn=<MseLossBackward>)
loss_train_step after backward:  tensor(162903.9375, device='cuda:0', grad_fn=<MseLossBackward>)
loss_train:  162903.9375
step:  1
running loss:  162903.9375

I am not sure where I should divide by 255 in my code. Also, it seems the landmarks should also be passed through to_tensor (ToTensor), since they are shown as raw values.

Since landmarks are basically (x, y) pairs on the image, can you try normalizing your landmarks to have values between 0 and 1? You can do that by dividing the (x, y) coordinates by the image width and height respectively.

So a landmark with x value 100 on a 200x200 image will be normalized to a value of 0.5
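
In code, taking the first coordinate as x (along the width) and the second as y (along the height), this is just (a hypothetical helper for illustration):

```python
def normalize_landmark(x, y, width, height):
    # Map pixel coordinates into [0, 1] relative to the image size
    return x / width, y / height

def unnormalize_landmark(nx, ny, width, height):
    # Map back to pixel coordinates, e.g. for plotting predictions
    return nx * width, ny * height

print(normalize_landmark(100, 100, 200, 200))  # (0.5, 0.5)
```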


@albanD @ptrblck and @fadetoblack

I have the following updated code:

How should I fix it now?

network = Network()
network.cuda()    

criterion = nn.MSELoss()
optimizer = optim.Adam(network.parameters(), lr=0.0001)

loss_min = np.inf
num_epochs = 1

start_time = time.time()
for epoch in range(1,num_epochs+1):
    
    loss_train = 0
    loss_test = 0
    running_loss = 0
    
    
    network.train()
    print('size of train loader is: ', len(train_loader))

    for step in range(1,len(train_loader)+1):

        
        batch = next(iter(train_loader))
        images, landmarks = batch['image'], batch['landmarks']
        #RuntimeError: Given groups=1, weight of size [64, 3, 7, 7], expected input[64, 600, 800, 3] to have 3 channels, but got 600 channels instead
        #using permute below to fix the above error
        images = images.permute(0,3,1,2)
        
        images = images.cuda()
    
        landmarks = landmarks.view(landmarks.size(0),-1).cuda() 
    
        norm_image = transforms.Normalize([0.3809, 0.3810, 0.3810], [0.1127, 0.1129, 0.1130]) 
        for image in images:
            image = image.float()
            ##image = to_tensor(image) #TypeError: pic should be PIL Image or ndarray. Got <class 'torch.Tensor'>
            image = norm_image(image)
        
        ###removing landmarks normalize because of the following error
        ###ValueError: Expected tensor to be a tensor image of size (C, H, W). Got tensor.size() = torch.Size([8, 8])
        ###norm_landmarks = transforms.Normalize(0.4949, 0.2165)
        ###landmarks = norm_landmarks(landmarks)
         
        #743 is max value in raw landmarks values
        landmarks = torch.div(landmarks, 743.)
        
        predictions = network(images)
        
        # clear all the gradients before calculating them
        optimizer.zero_grad()
        
        print('predictions are: ', predictions.float())
        print('landmarks are: ', landmarks.float())
        # find the loss for the current step
        loss_train_step = criterion(predictions.float(), landmarks.float())
        
        
        loss_train_step = loss_train_step.to(torch.float32)
        print("loss_train_step before backward: ", loss_train_step)
        
        # calculate the gradients
        loss_train_step.backward()
        
        # update the parameters
        optimizer.step()
        
        print("loss_train_step after backward: ", loss_train_step)

        
        loss_train += loss_train_step.item()
        
        print("loss_train: ", loss_train)
        running_loss = loss_train/step
        print('step: ', step)
        print('running loss: ', running_loss)
        
        print_overwrite(step, len(train_loader), running_loss, 'train')
        
    network.eval() 
    with torch.no_grad():
        
        for step in range(1,len(test_loader)+1):
            
            batch = next(iter(train_loader))
            images, landmarks = batch['image'], batch['landmarks']
            images = images.permute(0,3,1,2)
            images = images.cuda()
            landmarks = landmarks.view(landmarks.size(0),-1).cuda()
        
            predictions = network(images)

            # find the loss for the current step
            loss_test_step = criterion(predictions, landmarks)

            loss_test += loss_test_step.item()
            running_loss = loss_test/step

            print_overwrite(step, len(test_loader), running_loss, 'Validation')
    
    loss_train /= len(train_loader)
    loss_test /= len(test_loader)
    
    print('\n--------------------------------------------------')
    print('Epoch: {}  Train Loss: {:.4f} Valid Loss: {:.4f}'.format(epoch, loss_train, loss_test))
    print('--------------------------------------------------')
    
    if loss_test < loss_min:
        loss_min = loss_test
        torch.save(network.state_dict(), '../moth_landmarks.pth') 
        print("\nMinimum Valid Loss of {:.4f} at epoch {}/{}".format(loss_min, epoch, num_epochs))
        print('Model Saved\n')
     
print('Training Complete')
print("Total Elapsed Time : {} s".format(time.time()-start_time))

and I get the following result for one epoch:

size of train loader is:  90
predictions are:  tensor([[-0.1240, -0.1808, -0.2515, -0.1952, -0.2210, -0.0983, -0.2281,  0.0049],
        [-0.1408, -0.2126, -0.2820, -0.2684, -0.1838, -0.1007, -0.2736, -0.0101],
        [-0.1157, -0.2006, -0.2649, -0.2322, -0.2235, -0.1550, -0.2466,  0.0029],
        [-0.1365, -0.2064, -0.2617, -0.2820, -0.2069, -0.0940, -0.2331, -0.0061],
        [-0.0789, -0.2084, -0.2746, -0.2532, -0.2057, -0.0729, -0.2304,  0.0059],
        [-0.1729, -0.2225, -0.2642, -0.2592, -0.1818, -0.1400, -0.2861, -0.0068],
        [-0.1711, -0.2341, -0.2459, -0.2593, -0.2033, -0.0645, -0.2249, -0.0179],
        [-0.1397, -0.2168, -0.2554, -0.2688, -0.1928, -0.0323, -0.2625,  0.0265]],
       device='cuda:0', grad_fn=<AddmmBackward>)
landmarks are:  tensor([[0.6597, 0.3325, 0.9314, 0.4105, 0.4401, 0.4334, 0.5707, 0.4406],
        [0.6662, 0.3314, 0.7672, 0.1670, 0.4253, 0.2032, 0.6366, 0.4590],
        [0.6693, 0.3365, 0.9246, 0.4509, 0.4280, 0.4172, 0.6218, 0.4576],
        [0.6617, 0.3236, 0.9421, 0.4347, 0.4280, 0.3755, 0.5989, 0.4468],
        [0.6549, 0.3249, 0.9408, 0.3244, 0.3970, 0.3190, 0.5713, 0.4147],
        [0.6649, 0.3304, 0.8466, 0.2261, 0.3917, 0.2894, 0.6662, 0.4388],
        [0.6677, 0.3244, 0.9551, 0.3732, 0.3786, 0.3775, 0.6460, 0.4124],
        [0.6725, 0.3332, 0.9529, 0.3903, 0.4522, 0.4320, 0.6581, 0.4240]],
       device='cuda:0')
loss_train_step before backward:  tensor(0.5270, device='cuda:0', grad_fn=<MseLossBackward>)
loss_train_step after backward:  tensor(0.5270, device='cuda:0', grad_fn=<MseLossBackward>)
loss_train:  0.5269910097122192
step:  1
running loss:  0.5269910097122192
Train Steps: 1/90  Loss: 0.5270 predictions are:  tensor([[ 0.0726, -0.0202,  0.0980, -0.0746,  0.0344,  0.0454, -0.0194,  0.1157],
        [ 0.0655, -0.0531,  0.0659, -0.1021,  0.0001,  0.0213, -0.0158,  0.1084],
        [ 0.0638, -0.0443,  0.0428, -0.1006, -0.0364,  0.0365, -0.0607,  0.1060],
        [ 0.0188, -0.0712,  0.0311, -0.1251, -0.0448,  0.0120, -0.0597,  0.0611],
        [ 0.1286, -0.0592,  0.0563, -0.0872,  0.0017,  0.0563, -0.0240,  0.1095],
        [ 0.0882, -0.1077,  0.0509, -0.1139,  0.0134,  0.0415, -0.0563,  0.1066],
        [ 0.0850, -0.0224,  0.0958, -0.0640,  0.0305,  0.0387, -0.0557,  0.1140],
        [ 0.1260, -0.0629,  0.0767, -0.0713,  0.0251,  0.0520, -0.0157,  0.0794]],
       device='cuda:0', grad_fn=<AddmmBackward>)
landmarks are:  tensor([[0.6599, 0.3325, 0.9300, 0.4307, 0.4980, 0.4253, 0.5592, 0.4422],
        [0.6730, 0.3241, 0.9179, 0.4374, 0.4347, 0.4132, 0.6810, 0.4051],
        [   nan,    nan, 0.7744, 0.1895, 0.4347, 0.1655, 0.5532, 0.4563],
        [0.6756, 0.3249, 0.8314, 0.1680, 0.4729, 0.1804, 0.6929, 0.4272],
        [0.6549, 0.3249, 0.9408, 0.3244, 0.3970, 0.3190, 0.5713, 0.4147],
        [0.6748, 0.3306, 0.9381, 0.2490, 0.4738, 0.1830, 0.6729, 0.4199],
        [0.6570, 0.3229, 0.9421, 0.4145, 0.4118, 0.3836, 0.6070, 0.4105],
        [0.6572, 0.3253, 0.9408, 0.3957, 0.4401, 0.3661, 0.5459, 0.4449]],
       device='cuda:0')
loss_train_step before backward:  tensor(nan, device='cuda:0', grad_fn=<MseLossBackward>)
loss_train_step after backward:  tensor(nan, device='cuda:0', grad_fn=<MseLossBackward>)
loss_train:  nan
step:  2
running loss:  nan
Train Steps: 2/90  Loss: nan predictions are:  tensor([[nan, nan, nan, nan, nan, nan, nan, nan],
        [nan, nan, nan, nan, nan, nan, nan, nan],
        [nan, nan, nan, nan, nan, nan, nan, nan],
        [nan, nan, nan, nan, nan, nan, nan, nan],
        [nan, nan, nan, nan, nan, nan, nan, nan],
        [nan, nan, nan, nan, nan, nan, nan, nan],
        [nan, nan, nan, nan, nan, nan, nan, nan],
        [nan, nan, nan, nan, nan, nan, nan, nan]], device='cuda:0',
       grad_fn=<AddmmBackward>)
landmarks are:  tensor([[0.6678, 0.3282, 0.8484, 0.1760, 0.4917, 0.1393, 0.6411, 0.4424],
        [0.6597, 0.3325, 0.9314, 0.4105, 0.4401, 0.4334, 0.5707, 0.4406],
        [0.6652, 0.3240, 0.9583, 0.3630, 0.4190, 0.4536, 0.6431, 0.3940],
        [0.6718, 0.3371, 0.9071, 0.4630, 0.5195, 0.3634, 0.6057, 0.4791],
        [0.6675, 0.3225, 0.9173, 0.3975, 0.4266, 0.3809, 0.6474, 0.4388],
        [0.6822, 0.3355, 0.9583, 0.2234, 0.5370, 0.2315, 0.7991, 0.4474],
        [0.6614, 0.3215, 0.9314, 0.3943, 0.4213, 0.3486, 0.5723, 0.4321],
        [0.6597, 0.3283, 0.9206, 0.4495, 0.5020, 0.4152, 0.5459, 0.4370]],
       device='cuda:0')
loss_train_step before backward:  tensor(nan, device='cuda:0', grad_fn=<MseLossBackward>)
loss_train_step after backward:  tensor(nan, device='cuda:0', grad_fn=<MseLossBackward>)
loss_train:  nan
step:  3
running loss:  nan
Train Steps: 3/90  Loss: nan predictions are:  tensor([[nan, nan, nan, nan, nan, nan, nan, nan],
        [nan, nan, nan, nan, nan, nan, nan, nan],
        [nan, nan, nan, nan, nan, nan, nan, nan],
        [nan, nan, nan, nan, nan, nan, nan, nan],
        [nan, nan, nan, nan, nan, nan, nan, nan],
        [nan, nan, nan, nan, nan, nan, nan, nan],
        [nan, nan, nan, nan, nan, nan, nan, nan],
        [nan, nan, nan, nan, nan, nan, nan, nan]], device='cuda:0',
       grad_fn=<AddmmBackward>)
landmarks are:  tensor([[0.6702, 0.3308, 0.8008, 0.1830, 0.4145, 0.2301, 0.6447, 0.4240],
        [0.6672, 0.3303, 0.8425, 0.2019, 0.4522, 0.2005, 0.6447, 0.4576],
        [0.6579, 0.3217, 0.9421, 0.3984, 0.4980, 0.3997, 0.6006, 0.4583],
        [0.6600, 0.3281, 0.9408, 0.4240, 0.4643, 0.3822, 0.5632, 0.4743],
        [0.6595, 0.3238, 0.9314, 0.3688, 0.3943, 0.3149, 0.6030, 0.4495],
        [0.6730, 0.3224, 0.9838, 0.3499, 0.4051, 0.4213, 0.7531, 0.4296],
        [0.6756, 0.3233, 0.9798, 0.3055, 0.4724, 0.2530, 0.7599, 0.4315],
        [0.6507, 0.3248, 0.7416, 0.1602, 0.4065, 0.2005, 0.5900, 0.4147]],
       device='cuda:0')
loss_train_step before backward:  tensor(nan, device='cuda:0', grad_fn=<MseLossBackward>)
loss_train_step after backward:  tensor(nan, device='cuda:0', grad_fn=<MseLossBackward>)
loss_train:  nan

I am only showing the first few steps here.

As seen in the quoted log above (step 2), you have a nan in your landmarks tensor, which is resulting in the loss being nan.


Thanks a lot for pointing that out. Should I zero out all my nans? What do people do in this situation when they want to do a backward pass? Is there a utility in torch that could take care of this?

Firstly, a good idea might be to debug why you’re getting nans in your landmarks tensor.
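
A quick way to do that, assuming the dataset yields dicts with a 'landmarks' entry as in the code above (`toy` below is just a stand-in dataset):

```python
import torch

def find_nan_samples(dataset):
    # Return indices of samples whose landmarks contain nan
    bad = []
    for i in range(len(dataset)):
        lm = torch.as_tensor(dataset[i]['landmarks'], dtype=torch.float32)
        if torch.isnan(lm).any():
            bad.append(i)
    return bad

# toy stand-in dataset: sample 1 has a nan landmark
toy = [{'landmarks': [[1.0, 2.0]]},
       {'landmarks': [[float('nan'), 2.0]]}]
print(find_nan_samples(toy))  # [1]
```

`torch.nan_to_num` can replace nans once you know they are harmless, but it is better to fix the data source first.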

Secondly, there might be an issue with the way normalizing is being done. Since landmarks are (x,y) pairs on an image, it might not be suitable to divide by the max landmark value to normalize.

As I mentioned in the earlier reply:

The idea is to do something on the lines of:

  • For each (x,y) pair of landmarks, find out the image it belongs to
  • Find that image’s width and height
  • Divide the 1st coordinate (x) of the landmark by the width, so it is normalized with respect to the width
  • Divide the 2nd coordinate (y) of the landmark by the height, so it is normalized with respect to the height

Doing this is analogous to finding the answer to the question:

if the height and width of my image were 1, then where would my landmarks be in the image?
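
The steps above can be sketched as follows, assuming landmarks are stored as (x, y) pixel coordinates and that each sample's image size is known (all names are illustrative):

```python
import torch

def normalize_landmarks(landmarks, sizes):
    # landmarks: (N, K, 2) pixel coordinates as (x, y)
    # sizes:     (N, 2) per-image (width, height)
    # Broadcasting divides every landmark by its own image's size
    return landmarks / sizes[:, None, :]

lm = torch.tensor([[[400.0, 300.0]]])  # one sample with one (x, y) landmark
wh = torch.tensor([[800.0, 600.0]])    # that sample's image is 800x600
print(normalize_landmarks(lm, wh))     # tensor([[[0.5000, 0.5000]]])
```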


I did as you suggested and I still get nan loss

here is the complete log https://pastebin.com/raw/tgz5CB2F

You’re using a csv file to load your data. Open the csv file and make sure none of the values have quotes around them (quoting turns them into strings, which yields nan in an NN). When you open your csv file in a spreadsheet, make sure you check the box to detect scientific notation (or whatever your spreadsheet editor calls it). For example, 3.0 E-5 will get converted to a string and saved as such if you do not.
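
A quick standard-library check along those lines; it flags every cell that cannot be parsed as a number (the sample data is made up):

```python
import csv
import io

def find_non_numeric_cells(csv_text):
    # Report (row, column, value) for every cell that won't parse as a float
    bad = []
    for r, row in enumerate(csv.reader(io.StringIO(csv_text))):
        for c, cell in enumerate(row):
            try:
                float(cell)
            except ValueError:
                bad.append((r, c, cell))
    return bad

sample = '501.92,240.16\n495.63,"3.0 E-5"\n'
print(find_non_numeric_cells(sample))  # [(1, 1, '3.0 E-5')]
```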
