Simulating int1 QAT with {-1,1} instead of {0,1}

I am trying to simulate int1 training by configuring the QConfig manually, using the quant_min and quant_max range. However, for int1 training I want my values to be {-1, 1} instead of {0, 1}. Is there any way to do that? Also, since torch does not natively go below qint8 and quint4, would converting the values to booleans be a faster and more memory-efficient way of storing the int1 representations? A sketch of what I have in mind is below.
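
For illustration, here is a minimal sketch of the kind of fake quantization I mean: a custom autograd function that maps values to {-1, 1} in the forward pass and uses a straight-through estimator (with hard-tanh-style clipping, as in BinaryConnect) in the backward pass. The names `BinarySTE` and `FakeBinaryQuant` are my own, not an existing PyTorch API, and this bypasses the observer/QConfig machinery entirely:

```python
import torch
import torch.nn as nn

class BinarySTE(torch.autograd.Function):
    """Fake-quantize to {-1, 1} with a straight-through estimator."""

    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        out = torch.sign(x)
        out[out == 0] = 1.0  # torch.sign maps 0 -> 0, force it into {-1, 1}
        return out

    @staticmethod
    def backward(ctx, grad_output):
        (x,) = ctx.saved_tensors
        # Straight-through estimator: pass gradients where |x| <= 1, zero elsewhere
        return grad_output * (x.abs() <= 1).to(grad_output.dtype)


class FakeBinaryQuant(nn.Module):
    """Drop-in module that simulates int1 ({-1, 1}) quantization during training."""

    def forward(self, x):
        return BinarySTE.apply(x)


if __name__ == "__main__":
    x = torch.randn(4, requires_grad=True)
    y = FakeBinaryQuant()(x)
    y.sum().backward()
    print(x, y, x.grad)
```

Is something like this the recommended route, or can the stock FakeQuantize/QConfig path be made to produce {-1, 1} directly?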

Thanks for your answers in advance!

For adding bit packing/unpacking utilities to core, please chime in on the feature request "[feature request] np.packbits / np.unpackbits, general BitTensors (maybe can be just tensors with dtype torch.bits8 or have a new dtype torch.bits introduced) and bit packed tensors utilities for saving memory / accesses, support for BitTensors wherever BoolTensors are used" (Issue #32867 · pytorch/pytorch · GitHub) and add info about your use case :slight_smile:
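
In the meantime, since there is no native torch.packbits yet, a manual workaround along these lines is possible with existing uint8 ops. This is just an illustrative sketch; `pack_pm1` and `unpack_pm1` are made-up helper names, not part of any PyTorch API:

```python
import torch

_BIT_WEIGHTS = torch.tensor([1, 2, 4, 8, 16, 32, 64, 128], dtype=torch.uint8)

def pack_pm1(t: torch.Tensor) -> torch.Tensor:
    """Pack a 1-D {-1, 1} tensor into uint8, 8 values per byte."""
    bits = (t > 0).to(torch.uint8)                     # {-1, 1} -> {0, 1}
    pad = (-bits.numel()) % 8
    if pad:
        bits = torch.cat([bits, bits.new_zeros(pad)])  # pad to a multiple of 8
    return (bits.view(-1, 8) * _BIT_WEIGHTS).sum(dim=1).to(torch.uint8)

def unpack_pm1(packed: torch.Tensor, numel: int) -> torch.Tensor:
    """Inverse of pack_pm1: recover the first `numel` values as {-1, 1} floats."""
    bits = (packed.unsqueeze(1) & _BIT_WEIGHTS) > 0    # broadcast bitwise-and
    return bits.view(-1)[:numel].float() * 2 - 1
```

That gives you 1 bit of storage per value instead of 1 byte for a BoolTensor, at the cost of packing/unpacking before and after compute.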

I think the general sentiment is that a pull request is needed