How does masked_lm_labels work?

The BertForMaskedLM model in the transformers library has a parameter masked_lm_labels, and I want to know what it represents and how to construct it.
This is its description in the documentation:
masked_lm_labels (torch.LongTensor of shape (batch_size, sequence_length), optional, defaults to None) – Labels for computing the masked language modeling loss. Indices should be in [-100, 0, …, config.vocab_size] (see the input_ids docstring). Tokens with indices set to -100 are ignored (masked); the loss is only computed for the tokens with labels in [0, …, config.vocab_size].
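
In short, masked_lm_labels is a tensor of the same shape as input_ids that holds the original token ids at the positions you masked and -100 everywhere else. -100 is the default ignore_index of PyTorch's nn.CrossEntropyLoss, which is why the loss is only computed at the masked positions. Below is a minimal sketch of how you could build such a tensor yourself; it assumes an older transformers release where the argument is still named masked_lm_labels (newer releases renamed it to labels), and the 15% masking rate is just the convention from the BERT paper, not something the parameter requires:

```python
import torch
from transformers import BertForMaskedLM, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForMaskedLM.from_pretrained("bert-base-uncased")

text = "The capital of France is Paris."
encoding = tokenizer.encode_plus(text, return_tensors="pt")
input_ids = encoding["input_ids"]  # shape: (1, sequence_length)

# The labels start out as an exact copy of the original token ids.
masked_lm_labels = input_ids.clone()

# Pick positions to mask: 15% at random, never the special tokens [CLS]/[SEP].
special = torch.tensor(
    tokenizer.get_special_tokens_mask(input_ids[0].tolist(),
                                      already_has_special_tokens=True),
    dtype=torch.bool,
)
mask = (torch.rand(input_ids.shape) < 0.15) & ~special

# In the inputs, the chosen positions are replaced by [MASK].
input_ids[mask] = tokenizer.mask_token_id

# In the labels, every position that was NOT masked is set to -100 so the
# loss ignores it; the masked positions keep their original token ids.
# (If the random draw masks nothing, the loss is NaN; in real training the
# batches are large enough that this effectively never happens.)
masked_lm_labels[~mask] = -100

outputs = model(input_ids, masked_lm_labels=masked_lm_labels)
loss = outputs[0]  # older releases return a tuple: (loss, prediction_scores, ...)
print(loss)
```

For what it's worth, you rarely need to write this by hand: in recent versions of transformers the argument is called labels, and DataCollatorForLanguageModeling builds exactly this kind of masked-input/labels pair for you.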