Loss function returning NaN Loss

I have been trying to build a simple linear regression model as a neural network with 4 features and one output. The loss function is MSE, but it returns NaN. The learning rate is 1e-3.
I tried tuning the lr, but I didn't see any change. Would appreciate your help with this.
Thanks in advance!

Training Loop

for epoch in range(epochs):
    for inputs, targets in training_data:
        preds = model(inputs)           # forward pass
        loss = loss_fn(preds, targets)  # MSE loss
        loss.backward()                 # computing gradients
        opt.step()                      # updating parameters
        opt.zero_grad()                 # setting the grads back to zero
    if epoch % 10 == 0:
        print(f'Epoch:{epoch}/{epochs} | Loss:{loss.item()}')


Epoch:0/100 | Loss:nan
Epoch:10/100 | Loss:nan
Epoch:20/100 | Loss:nan
Epoch:30/100 | Loss:nan
Epoch:40/100 | Loss:nan
Epoch:50/100 | Loss:nan
Epoch:60/100 | Loss:nan
Epoch:70/100 | Loss:nan
Epoch:80/100 | Loss:nan
Epoch:90/100 | Loss:nan


If you are using MSE, the only things that can cause this are either a wrong model definition or targets that contain invalid values (NaN/inf).

The issue is not related to the learning rate or other hyperparameters, since your very first iteration already produces NaN. Simply test your network with a random input and see if it produces a proper output, and also verify that your targets are valid numbers.
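A minimal sketch of those two checks, assuming a plain `nn.Linear(4, 1)` stands in for your model (names here are placeholders, not your actual code):

```python
import torch
import torch.nn as nn

# Hypothetical model matching the question: 4 features -> 1 output
model = nn.Linear(4, 1)

# 1) Feed a random batch and confirm the output is finite
x = torch.randn(8, 4)
out = model(x)
print(torch.isfinite(out).all())  # should be True for a sane model

# 2) Scan the targets for NaN/inf before training;
#    a single NaN target makes the whole MSE batch NaN
targets = torch.tensor([[1.0], [float('nan')], [3.0]])
print(torch.isnan(targets).any())   # True here -> this batch would poison the loss
print(torch.isinf(targets).any())
```

If the random-input check already prints non-finite values, the problem is in the model definition; if the targets check fires, clean or filter the dataset before training.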