Gradient propagation through MaskedTensor creation

Is it possible to propagate gradients through the creation of a MaskedTensor? Something like:

import torch
from torch.masked import masked_tensor

x = torch.tensor([-10., -5, 0, 5, 10, 50, 60, 70, 80, 90, 100], requires_grad=True)
mask = x < 0  # keep only the negative entries
mx = masked_tensor(x, mask)
mx.sum().backward()  # goal: gradients from the masked sum should reach x

I want to mask the outputs of a neural network. My current workaround is to use NaNs, but MaskedTensor seems to be the nicer solution if backpropagation is possible.
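
For reference, here is a minimal sketch of the NaN workaround mentioned above (the tensor is just a stand-in for the network output, and the mask condition is illustrative):

import torch

# Stand-in for a network output; in practice this would require grad
# through the model's parameters rather than directly.
x = torch.tensor([-10., -5, 0, 5, 10, 50, 60, 70, 80, 90, 100], requires_grad=True)
mask = x < 0

# Replace masked-out entries with NaN, then reduce with a NaN-ignoring op
# so that gradients only flow through the kept entries.
nan = torch.full_like(x, float("nan"))
masked = torch.where(mask, x, nan)
masked.nansum().backward()

print(x.grad)  # 1.0 at the kept (negative) entries, 0.0 elsewhere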