How to Change Weights to only -1, 0, 1 for Quantization

Hi there, I am writing a simple feed-forward NN. I'm new to PyTorch, so please forgive me if this is a trivial question. I am trying to see how quantizing the weights will affect performance. After optimizer.step(), is there any way to manually inspect each weight element and round it to -1, 0, or 1, whichever it is closest to?

I have tried this:

optimizer.step()
for p in model.parameters():
    if hasattr(p, 'org'):
        # clamp_ takes (min, max), so this only restricts values to [-1, 1]
        p.org.copy_(p.data.clamp_(-1, 1))

Am I on the right track at all? Thank you so much for your help!
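
In case it helps, here is a rough sketch of the behaviour I am after. The ternarize_ helper is just a name I made up for this post (not an existing PyTorch API), and the toy model is only there to make the snippet self-contained:

import torch
import torch.nn as nn

# Toy model, only here so the snippet runs on its own.
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))

def ternarize_(model):
    # Round every weight in place to whichever of -1, 0, 1 it is closest to.
    # Clamping to [-1, 1] first means round() can only produce -1, 0, or 1.
    with torch.no_grad():
        for p in model.parameters():
            p.copy_(p.clamp(-1, 1).round())

# After optimizer.step() I would call:
ternarize_(model)
print(torch.unique(model[0].weight))  # only values from {-1., 0., 1.}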