'MSELossBackward0' returned nan values in its 0th output

I am having trouble with this error.
It happens during the first epoch, after 62 batches: when I debug my code, avg_cost becomes nan as soon as batch_idx reaches 62.
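
To narrow it down, I plan to scan the batches for non-finite values before they reach the loss. This is a minimal sketch of that check (assuming train_df is a DataLoader that yields (x, y) tensor pairs, as in my training loop below):

import torch

# sketch: find the first batch that contains nan/inf values
# (assumes train_df yields (x_train, y_train) tensor pairs)
for batch_idx, (x, y) in enumerate(train_df):
    if not torch.isfinite(x).all() or not torch.isfinite(y).all():
        print('non-finite values first appear in batch', batch_idx)
        break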

Here is my code:

import numpy as np
import torch
import torch.nn as nn
import torch.optim as optim

def train_model(model, train_df, num_epochs=None, lr=None, verbose=20, patience=10):
    criterion = nn.MSELoss(reduction='sum')
    optimizer = optim.Adam(model.parameters(), lr=lr)
    nb_epochs = num_epochs

    train_hist = np.zeros(nb_epochs)

    # surface the exact op that produces the first nan during backward
    torch.autograd.set_detect_anomaly(True)

    for epoch in range(nb_epochs):
        avg_cost = 0
        total_batch = len(train_df)

        for batch_idx, samples in enumerate(train_df):
            x_train, y_train = samples

            model.reset_hidden_state()

            # H(x)
            outputs = model(x_train)

            # cost
            loss = criterion(outputs, y_train)

            optimizer.zero_grad()
            loss.backward()
            optimizer.step()

            # .item() detaches the scalar so the graph is not kept alive
            avg_cost += loss.item() / total_batch

        train_hist[epoch] = avg_cost

        if epoch % verbose == 0:
            print('Epoch: ', '%03d' % epoch, 'train loss : ', '{: .4f}'.format(avg_cost))

        # simple early stopping: stop if the loss has not improved over `patience` epochs
        if (epoch % patience == 0) and (epoch != 0):
            if train_hist[epoch - patience] < train_hist[epoch]:
                print('\nEarly Stopped')
                break

    return model.eval(), train_hist
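
For reference, this is roughly how I call it (the epoch count and learning rate here are just illustrative):

model, train_hist = train_model(model, train_df, num_epochs=100, lr=0.001)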

I suspect something is wrong with the normalization (I used sklearn's MinMaxScaler() to normalize my input data), but I don't know the real cause of the error.
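
The scaling step looks roughly like this (a sketch; the array names are placeholders from my setup). I fit the scaler on the training split only and check the output for non-finite values, since nan values in the raw data pass through MinMaxScaler unchanged:

import numpy as np
from sklearn.preprocessing import MinMaxScaler

# X_train_raw / X_test_raw are placeholder names for my raw arrays
scaler = MinMaxScaler()
X_train_scaled = scaler.fit_transform(X_train_raw)  # fit on the training split only
X_test_scaled = scaler.transform(X_test_raw)        # reuse the same min/max

# nan in the raw data survives scaling, so check explicitly
assert np.isfinite(X_train_scaled).all(), 'scaled training data contains nan/inf'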

How would you suggest I fix this error?