| Topic | Replies | Views | Activity |
| --- | --- | --- | --- |
| About the quantization category | 0 | 1559 | October 2, 2019 |
| Empty_strided not supported on quantized tensors | 4 | 99 | June 9, 2023 |
| What is the sense of parameter qconfig_spec in quantize_dynamic | 2 | 26 | June 9, 2023 |
| Questions about preparing QAT model | 4 | 63 | June 9, 2023 |
| Is there a way to set PyTorch's quantization parameters so that it can be converted to a TensorRT model? | 1 | 13 | June 9, 2023 |
| RuntimeError: quantized::conv(FBGEMM): Expected activation data type QUInt8 but got QInt8 | 2 | 143 | June 9, 2023 |
| Convert quantized model to ONNX format | 2 | 45 | June 6, 2023 |
| Problem in symbolically trace (torch.fx) nn.GRUCell/LSTMCell | 3 | 47 | June 1, 2023 |
| Slow Inference time on Quantized Faster RCNN model | 6 | 650 | May 31, 2023 |
| How to quantize only specific layers | 3 | 913 | May 31, 2023 |
| How to specify input and output types | 5 | 89 | May 31, 2023 |
| Quantization configuration | 3 | 61 | May 30, 2023 |
| Quantized model profiling | 1 | 62 | May 26, 2023 |
| PTSQ for model with layers and tensor ops | 1 | 55 | May 22, 2023 |
| What is the difference between pytorch-quantization and torch.ao.quantization | 3 | 115 | May 22, 2023 |
| Integer convolution on GPU | 9 | 175 | May 18, 2023 |
| What is different in torch.ao.quantization.get_default_qat_qconfig and torch.ao.quantization.get_default_qconfig? | 1 | 70 | May 17, 2023 |
| Quantized conv1d cannot execute on arm cpu | 7 | 104 | May 16, 2023 |
| RuntimeError: promoteTypes with quantized numbers is not handled yet | 3 | 833 | May 15, 2023 |
| Where is the quantized param saved? | 2 | 54 | May 15, 2023 |
| Relationship between GPU Memory Usage and Batch Size | 2 | 1828 | May 11, 2023 |
| Eager Quantization: How to pass int to a quantized model? | 2 | 81 | May 10, 2023 |
| Torch 2.0 compile not compatible with FX Graph Mode Quantization? | 4 | 157 | May 10, 2023 |
| Questions about Equalization | 4 | 98 | April 27, 2023 |
| Load_state_dict drops the data type | 4 | 508 | April 27, 2023 |
| In PTQ, is possible to quantize activation in per channel mode? | 1 | 95 | April 27, 2023 |
| Place minmaxobserver on activation layer | 4 | 114 | April 26, 2023 |
| RuntimeError: x86 is not a valid value for quantized engine | 1 | 131 | April 24, 2023 |
| Creating a custom layer and using torch.qat for it | 4 | 626 | April 24, 2023 |
| Error in running quantised model RuntimeError: Could not run 'quantized::conv2d.new' with arguments from the 'CPU' backend | 4 | 1084 | April 24, 2023 |