NaN appearing after the first iteration during training

The loss function is L1 loss.
Now I will make some assumptions so that my question is easier to understand.
My model has the form f(x), where x is the input. Training f(x) on its own works fine, but when I multiply the input by the model output ( x * f(x) ) inside my model, NaN appears after one iteration.
I also tried torch.autograd.set_detect_anomaly(True), and found that all the data inside my model becomes NaN.
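
For reference, here is a minimal sketch of the kind of setup I mean (the layer sizes and names are placeholders, not my real code):

```python
import torch
import torch.nn as nn

class Model(nn.Module):
    """Model whose forward pass returns x * f(x)."""
    def __init__(self):
        super().__init__()
        self.f = nn.Sequential(nn.Linear(16, 16), nn.ReLU(), nn.Linear(16, 16))

    def forward(self, x):
        return x * self.f(x)  # the multiplication where NaN shows up

model = Model()
criterion = nn.L1Loss()
torch.autograd.set_detect_anomaly(True)  # reports the op that produced the NaN

x = torch.randn(8, 16)
target = torch.randn(8, 16)
loss = criterion(model(x), target)
loss.backward()
```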

Thanks in advance.

The most likely causes are:
A) Your training data contains NaNs (due to issues in preprocessing).
B) Some operation produces a NaN during the forward pass. You can rule out (A) with a quick check like the one below.
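
A quick way to rule out (A) is to scan your batches for non-finite values before training; `dataloader` here is a placeholder for however you load your data:

```python
import torch

# `dataloader` is a placeholder for your own DataLoader
for i, (inputs, targets) in enumerate(dataloader):
    # isfinite catches both NaN and Inf in one pass
    if not torch.isfinite(inputs).all() or not torch.isfinite(targets).all():
        print(f"Non-finite values (NaN/Inf) found in batch {i}")
        break
```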

Decrease your learning rate significantly, at least to start; after some training, you should be able to increase it again. NaNs usually appear when gradients become very large, which makes training unstable.
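
If you want to confirm that exploding gradients are the problem, you can log the gradient norm after `backward()`; gradient clipping is one common mitigation alongside a lower learning rate. A minimal sketch (the model, optimizer, and data are placeholders for your own training loop):

```python
import torch
import torch.nn as nn

# placeholder model and data; substitute the pieces of your own training loop
model = nn.Linear(16, 16)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-4)  # much smaller LR to start
criterion = nn.L1Loss()

x, target = torch.randn(8, 16), torch.randn(8, 16)

optimizer.zero_grad()
loss = criterion(model(x), target)
loss.backward()

# total gradient norm before clipping; a huge or non-finite value points to exploding gradients
total_norm = torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
print(f"grad norm before clipping: {total_norm:.3e}")

optimizer.step()
```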

Hello,
Can you please explain this in more detail?