Topic | Replies | Views | Activity
What do [De]QuantStub actually do? | 8 | 5483 | September 21, 2023
The ONNX model exported from my QAT training has no fake-quantize operators; the code is as follows | 4 | 468 | September 19, 2023
RFC-0019 Extending PyTorch Quantization to Custom Backends | 2 | 277 | September 19, 2023
Quantization partially applied to a PyTorch model | 1 | 417 | September 15, 2023
LSTM Quantization | 1 | 330 | September 14, 2023
Can the output of QuantizedConv2d be fp32? | 4 | 316 | September 12, 2023
Questions about preparing a QAT model | 6 | 609 | September 12, 2023
Can we use int8 activation quantization in PyTorch | 3 | 720 | September 1, 2023
Cannot import name 'QuantStub' from 'torch.ao.quantization' | 5 | 8968 | August 31, 2023
Expected INT8 Accuracies on ImageNet-1K (ResNet QAT) | 2 | 296 | August 28, 2023
PyTorch Dynamic Quantization clarification | 6 | 727 | August 25, 2023
How to quantize a model with both CNN and LSTM | 6 | 1341 | August 22, 2023
QNNPACK using activation dtype int8 is not runnable | 16 | 628 | August 22, 2023
RuntimeError: Could not run 'quantized::conv2d_relu.new' with arguments from the 'CPU' backend | 1 | 3952 | December 17, 2020
Could not run 'aten::_slow_conv2d_forward' with arguments from the 'QuantizedCPU' backend | 1 | 315 | August 14, 2023
How to implement fp16 quantization on CPU | 2 | 826 | August 8, 2023
Using a quantizable model for normal training | 5 | 440 | July 31, 2023
Problem symbolically tracing (torch.fx) nn.GRUCell/LSTMCell | 11 | 536 | July 29, 2023
Share qparams in pt2e | 2 | 355 | July 28, 2023
Quantization parameters in QuantizedConv2d | 8 | 2269 | July 28, 2023
How to use a torch.ceil rounding strategy during PTSQ | 4 | 411 | July 26, 2023
Quantization accuracy debugging for custom LSTM PTQ | 2 | 307 | July 25, 2023
Quantizing an existing object detector with a ResNet backbone | 3 | 764 | July 25, 2023
Custom LSTM PTSQ QConfigMapping | 8 | 535 | July 21, 2023
Quantization error about _conv_transpose2d | 2 | 332 | July 21, 2023
How can I disable quantization for specific layers | 2 | 308 | July 21, 2023
Could not run 'aten::quantize_per_tensor' | 2 | 558 | July 17, 2023
What is the difference between pytorch-quantization and torch.ao.quantization | 5 | 1528 | July 11, 2023
Custom LSTM static quantization not working | 7 | 549 | July 11, 2023
Various quantized/quantizable/intrinsic modules purpose | 2 | 364 | July 11, 2023