How does deep learning cope with missing values?

When running an LSTM time series prediction task, if the input dataset contains missing values (NaN), the model outputs an all-NaN tensor for every batch. Is there any way to make the model ignore these missing values during training?

Each layer of a model applies mathematical operations, typically matrix multiplications. NaN values propagate through these operations: a single NaN in the input can poison an entire output row and, in a recurrent model, every subsequent time step, producing result tensors full of NaNs. You have to remove these invalid inputs beforehand, e.g. by replacing them with zeros or any other value that does not forward the NaNs.
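
As a minimal illustration of the propagation (the layer and shapes here are made up for the demonstration, not taken from your setup):

import torch
import torch.nn as nn

# One NaN in the input makes the whole output row NaN, because every
# output element is a dot product over all input features.
layer = nn.Linear(4, 3)
x = torch.randn(2, 4)
x[0, 2] = float("nan")   # a single missing value in the first sample

out = layer(x)
print(out)               # the first row of the output is all NaN

To avoid this, replace the invalid entries before the forward pass: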

# replace NaN and inf entries in place before feeding the tensor to the model
model_input[torch.isnan(model_input) | torch.isinf(model_input)] = 0.0

This line replaces every NaN and inf entry of model_input with zero in place, so no invalid values reach the model.
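
A rough sketch of how this could look in an LSTM pipeline (the model, shapes, and the nan_to_num alternative are assumptions for illustration, not part of the original post):

import torch
import torch.nn as nn

lstm = nn.LSTM(input_size=8, hidden_size=16, batch_first=True)

batch = torch.randn(32, 50, 8)   # (batch, seq_len, features)
batch[0, 10, 3] = float("nan")   # simulate a missing value

# clean the batch before the forward pass
batch[torch.isnan(batch) | torch.isinf(batch)] = 0.0
# equivalent convenience call:
# batch = torch.nan_to_num(batch, nan=0.0, posinf=0.0, neginf=0.0)

output, (h_n, c_n) = lstm(batch)
assert not torch.isnan(output).any()

Note that replacing missing values with zero is only one form of imputation; depending on your data, another fill value (e.g. the feature mean) may distort the sequence less.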