How can I avoid initializing parameters in PyTorch?

How do I avoid having my linear layers be initialized when I load my model? E.g., the following stacktrace shows that when a linear layer is created, self.reset_parameters() is called.

Given that I am immediately going to load weights from a checkpoint, I would like to avoid initializing the weights. Ideally, they’d just be filled with garbage memory.

 File "layers.py", line 91, in __init__
    self.w2 = nn.Linear(hidden_size, in_features, bias=False)
  File "/tmp/env/lib/python3.10/site-packages/torch/nn/modules/linear.py", line 104, in __init__
    self.reset_parameters()
  File "/tmp/env/lib/python3.10/site-packages/torch/nn/modules/linear.py", line 110, in reset_parameters
    init.kaiming_uniform_(self.weight, a=math.sqrt(5))
  File "/tmp/env/lib/python3.10/site-packages/torch/nn/init.py", line 460, in kaiming_uniform_
    return tensor.uniform_(-bound, bound, generator=generator)

You can use torch.nn.utils.skip_init, which constructs the module without running its parameter initialization.
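A minimal sketch of the idea, using sizes made up for illustration (skip_init builds the module on the meta device and then materializes its parameters with uninitialized storage, so init.kaiming_uniform_ never runs):

```python
import torch
import torch.nn as nn

# Hypothetical sizes for illustration.
hidden_size, in_features = 128, 64

# skip_init avoids reset_parameters(); the weight tensor is allocated
# but left uninitialized ("garbage memory"), which is what we want
# since the real values come from a checkpoint.
w2 = nn.utils.skip_init(nn.Linear, hidden_size, in_features, bias=False)

# Load the real weights before using the layer. A checkpoint loaded
# with torch.load would be used here; a random tensor stands in.
w2.load_state_dict({"weight": torch.randn(in_features, hidden_size)})
```

Note that skip_init requires the module class to accept a `device` keyword argument (nn.Linear does), since it works by first instantiating the module on the meta device.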