How to write scale and zero_point to an fp32 tensor without doing quantization?

In PyTorch 1.3, a float tensor has scale and zero_point. Can I set values for scale and zero_point on a float32 tensor without converting the fp32 tensor to a quantized one?

May I know the reason for doing this?

q_scale and q_zero_point are defined on every tensor, but they do not make sense outside of the quantization context. That’s why, if you try calling those methods on an FP tensor, you get an error:

>>> import torch
>>> x = torch.tensor([1, 2, 3])
>>> x.q_scale()
---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
<ipython-input-13-a850e3e1a3e9> in <module>
----> 1 x.q_scale()

RuntimeError: Could not run 'aten::q_scale' with arguments from the 'CPUTensorId' backend. 'aten::q_scale' is only available for these backends: [QuantizedCPUTensorId, VariableTensorId].

>>> x.q_zero_point()
---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
<ipython-input-15-6cbfbea33d3d> in <module>
----> 1 x.q_zero_point()

RuntimeError: Could not run 'aten::q_zero_point' with arguments from the 'CPUTensorId' backend. 'aten::q_zero_point' is only available for these backends: [QuantizedCPUTensorId, VariableTensorId].
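
For comparison, here is a minimal sketch of the supported path (assuming PyTorch 1.3+): quantize the tensor first with torch.quantize_per_tensor, and then q_scale() / q_zero_point() work as expected. The scale and zero_point values below are arbitrary examples:

>>> import torch
>>> x = torch.tensor([1.0, 2.0, 3.0])
>>> # quantize to 8-bit with an explicit scale and zero_point
>>> q = torch.quantize_per_tensor(x, scale=0.1, zero_point=10, dtype=torch.quint8)
>>> q.q_scale()
0.1
>>> q.q_zero_point()
10
>>> q.dequantize()  # round-trip back to fp32
tensor([1., 2., 3.])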

As @dskhudia asked, is there a specific reason you would want to do that? Maybe there is another way of accomplishing what you are trying to do.

Thanks for your reply. It is because our operator quantization compiler has too many rules.

I don’t think I can suggest an alternative, as I don’t know which rules you are referring to. Generally, to answer your question: we don’t allow setting scale and zero_point on a float tensor, and frankly I can’t even imagine a use case :slight_smile:.
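
That said, if the goal is only to carry scale/zero_point metadata alongside an fp32 tensor (e.g. for a compiler pass), one workaround is to store the parameters outside the tensor. The QParamTensor below is a hypothetical helper for illustration, not a PyTorch API:

import torch
from dataclasses import dataclass

@dataclass
class QParamTensor:
    # Hypothetical container: pairs an fp32 tensor with quantization
    # parameters without actually quantizing the data.
    data: torch.Tensor
    scale: float
    zero_point: int

    def quantize(self) -> torch.Tensor:
        # Apply the stored parameters only when quantization is needed.
        return torch.quantize_per_tensor(
            self.data, self.scale, self.zero_point, torch.quint8
        )

x = QParamTensor(torch.tensor([1.0, 2.0, 3.0]), scale=0.1, zero_point=10)
q = x.quantize()

This keeps the fp32 data untouched while the parameters travel with it; actual quantization happens only when (and if) you call quantize().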

Are you saying you want to have a floating-point zero_point? I think we’ll have quantizer support for that.