| Topic | Replies | Views | Activity |
|---|---|---|---|
| About the quantization category | 0 | 2535 | October 2, 2019 |
| Post Training Quantization fails on SPAN model with type_as | 1 | 24 | December 4, 2025 |
| Difference of IntxWeightOnlyConfig/UIntxWeightOnlyConfig/Int8WeightOnlyConfig/Int4WeightOnlyConfig/ | 2 | 17 | December 4, 2025 |
| Quantize convolution layer | 1 | 15 | December 4, 2025 |
| PT2E quantization doesn't reduce the model size | 2 | 51 | December 4, 2025 |
| How to use quantized weights for manual implementation of the model in FPGA? | 2 | 1064 | September 28, 2025 |
| [pt2e][quant] Quantization of operators with multiple outputs (RNN, LSTM) | 4 | 274 | September 15, 2025 |
| GPU MEM% allocation vs batch size and temporal dimension | 3 | 70 | September 13, 2025 |
| TorchAO Migration | 0 | 61 | September 11, 2025 |
| Does export support quantized models with TorchAO | 1 | 55 | September 11, 2025 |
| Should I perform quantization after activation functions like sigmoid and SiLU? | 0 | 49 | September 9, 2025 |
| Quantization of Hybrid PyTorch Model | 0 | 37 | September 8, 2025 |
| Error while converting quantized Torch model to ONNX | 0 | 49 | September 5, 2025 |
| My model is taking too much time in calculating FFT to find top k | 1 | 47 | September 2, 2025 |
| FX mode static_quantization for YOLOv7 | 16 | 963 | August 4, 2025 |
| Could not run 'aten::quantize_per_tensor' with arguments from the 'QuantizedCPU' backend | 7 | 4230 | July 17, 2025 |
| RuntimeError: quantized::conv2d_prepack() is missing value for argument 'stride' | 1 | 57 | July 1, 2025 |
| Why is there such a significant difference between floating-point convolution and quantized integer convolution results? | 2 | 60 | June 30, 2025 |
| [MPS] When device='mps', aten.linear.default op is not decomposed | 1 | 56 | June 5, 2025 |
| Logits mismatch between PyTorch inference and manual implementation | 1 | 90 | April 29, 2025 |
| QAT model drops accuracy after converting with torch.ao.quantization.convert | 1 | 85 | April 29, 2025 |
| Qint8 Activations in PyTorch | 1 | 192 | April 25, 2025 |
| How to do QAT after PTQ in PyTorch 2 quantization? | 1 | 136 | April 25, 2025 |
| Switch loss function causes "RuntimeError: Found dtype Double but expected Float" | 6 | 1850 | April 24, 2025 |
| How to quantize my TorchScript model to FP8 | 1 | 309 | April 18, 2025 |
| Loss stuck at quantization-aware training for 16 bits | 1 | 55 | April 18, 2025 |
| Question about quantized model save & load | 5 | 206 | April 18, 2025 |
| Quantization method diff between fake quant and true quant | 1 | 75 | April 14, 2025 |
| Right way to insert QuantStub and DeQuantStub in eager mode quantization | 6 | 177 | April 12, 2025 |
| QAT model is not performing as expected when compared to the original model | 7 | 168 | April 9, 2025 |