Hello,

I am trying to access the scale and zero_point of the weight inside a QuantizedLinearReLU module using the `scale` attribute and the `q_scale()` method. Both attempts fail, as shown below:

```
model.fc1
Out[7]: QuantizedLinearReLU(in_features=4, out_features=4, scale=0.04960649833083153, zero_point=0, qscheme=torch.per_channel_affine)
model.fc1.weight().scale
Out[8]: AttributeError: 'Tensor' object has no attribute 'scale'
model.fc1.weight().q_scale()
Out[9]: RuntimeError: Expected quantizer->qscheme() == kPerTensorAffine to be true, but got false. (Could this error message be improved? If so, please report an enhancement request to PyTorch.)
```

The scale and zero_point I want to access are **zero_point=tensor([0, 0, 0, 0]), axis=0** and **scale=tensor([0.0145, 0.0016, 0.0132, 0.0124], dtype=torch.float64)**, as shown below:

```
model.fc1.weight()
Out[9]:
tensor([[-0.1880, -1.3018, 0.8100, 1.8369],
[-0.0033, 0.1172, -0.0065, -0.2083],
[-0.9236, 0.3299, -1.6889, -1.3195],
[-0.2718, 0.4078, 1.0997, 1.5693]], size=(4, 4), dtype=torch.qint8,
quantization_scheme=torch.per_channel_affine,
scale=tensor([0.0145, 0.0016, 0.0132, 0.0124], dtype=torch.float64),
zero_point=tensor([0, 0, 0, 0]), axis=0)
```

Is there any way to access these parameters programmatically, without manually copying them from the console output? Thank you.
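For reference, here is a minimal standalone sketch of the kind of access I am after, using the per-channel accessor methods (`q_per_channel_scales`, `q_per_channel_zero_points`, `q_per_channel_axis`) on a hand-built per-channel quantized tensor — I am not sure whether these are the intended API for the weight of a QuantizedLinearReLU module:

```python
import torch

# Build a per-channel (axis=0) quantized tensor by hand, mimicking the
# weight shown above, so the accessors can be demonstrated in isolation.
w = torch.randn(4, 4)
scales = torch.tensor([0.0145, 0.0016, 0.0132, 0.0124], dtype=torch.float64)
zero_points = torch.zeros(4, dtype=torch.int64)
qw = torch.quantize_per_channel(w, scales, zero_points, axis=0,
                                dtype=torch.qint8)

# Per-channel parameters are read back with the q_per_channel_* methods,
# not with scale / q_scale() (those apply to per-tensor quantization).
print(qw.q_per_channel_scales())       # per-channel scales, float64
print(qw.q_per_channel_zero_points())  # per-channel zero points, int64
print(qw.q_per_channel_axis())         # quantization axis
```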