Pruning removes the entire weight parameter

I used torch.nn.utils.prune.*_unstructured on a saved pre-trained model to trim the weights, but instead of zeroing out a fraction of the entries it removed the weight parameter from the layer entirely.
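Here is roughly what the script does (a minimal sketch: the file name "model.pt", the choice of l1_unstructured, and amount=0.3 stand in for whatever path, *_unstructured variant, and sparsity I actually pass):

import torch
import torch.nn.utils.prune as prune

# Load the saved pre-trained model ("model.pt" is a placeholder path).
model = torch.load("model.pt")

# The layer shown below: net[5] is Linear(in_features=1024, out_features=23106).
module = model.net[5]

# Prune 30% of the weights with the smallest L1 magnitude. This replaces
# module.weight with a weight_orig parameter plus a weight_mask buffer, and
# recomputes weight = weight_orig * weight_mask on every forward pass.
prune.l1_unstructured(module, name="weight", amount=0.3)

# "Deleting artifacts...": make the pruning permanent by removing the
# re-parametrization, which should leave a plain, sparsified weight parameter.
prune.remove(module, "weight")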

Model:

FullyConnected(
  (net): Sequential(
    (0): Linear(in_features=72, out_features=1024, bias=True)
    (1): ReLU(inplace=True)
    (2): Dropout(p=0.2, inplace=False)
    (3): Linear(in_features=1024, out_features=1024, bias=True)
    (4): ReLU(inplace=True)
    (5): Linear(in_features=1024, out_features=23106, bias=True)
  )
)

Result:

Before:
Linear(in_features=1024, out_features=23106, bias=True) => odict_keys(['weight', 'bias'])
[('weight', Parameter containing:
tensor([[-3.4171e-03,  7.4451e-03, -3.2716e-03,  ...,  4.3759e-03,
         -4.4775e-03,  4.9997e-04],
        [-9.5993e-03,  1.8929e-03, -9.2918e-04,  ...,  5.8639e-03,
         -2.0274e-03, -1.9497e-05],
        [ 6.6112e-05,  5.2332e-03, -5.2656e-03,  ..., -1.0969e-03,
         -6.8079e-03,  1.7691e-04],
        ...,
        [ 1.7492e-02,  1.6093e-03, -4.0479e-03,  ...,  4.3377e-05,
         -8.4499e-04,  2.9180e-05],
        [-8.2002e-03,  7.4289e-03,  6.5624e-03,  ..., -9.8359e-04,
          2.0093e-03,  3.9837e-05],
        [-8.1944e-03,  8.7170e-03,  1.1283e-02,  ...,  1.4373e-03,
         -1.2853e-03,  5.4475e-04]], requires_grad=True)), ('bias', Parameter containing:
tensor([ 0.0008, -0.0027,  0.0017,  ..., -0.0014,  0.0007,  0.0022],
       requires_grad=True))]

Pruning module...
Deleting artifacts...

After:
Linear(in_features=1024, out_features=23106, bias=True) => odict_keys(['bias'])
[('bias', Parameter containing:
tensor([ 0.0008, -0.0027,  0.0017,  ..., -0.0014,  0.0007,  0.0022],
       requires_grad=True))]
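The dumps above are produced with logging along these lines (a sketch, not the exact code):

print(module, "=>", module.state_dict().keys())
print(list(module.named_parameters()))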

Digging in, I found that the weight_mask buffer is all ones, meaning nothing was ever selected for pruning. Why?

tensor([[1., 1., 1.,  ..., 1., 1., 1.],
        [1., 1., 1.,  ..., 1., 1., 1.],
        [1., 1., 1.,  ..., 1., 1., 1.],
        ...,
        [1., 1., 1.,  ..., 1., 1., 1.],
        [1., 1., 1.,  ..., 1., 1., 1.],
        [1., 1., 1.,  ..., 1., 1., 1.]])
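
For reference, this is how I inspect the mask right after the pruning call and before prune.remove (a sketch; module is the same last Linear layer as above):

# The re-parametrization adds a weight_orig parameter and a weight_mask buffer.
print(list(module.named_buffers()))   # [('weight_mask', tensor([[1., 1., ...]]))]
print(module.weight_mask)             # the all-ones mask shown above

# Fraction of entries the mask actually zeroes out:
pruned = 1.0 - module.weight_mask.mean().item()
print(f"pruned fraction: {pruned:.4f}")  # prints 0.0000 for an all-ones mask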