Quantize convolution layer

# torchao/quantization/quant_api.py
def quantize_(
    model: torch.nn.Module,
    config: AOBaseConfig,
    filter_fn: Optional[Callable[[torch.nn.Module, str], bool]] = _is_linear,
    device: Optional[torch.types.Device] = None,
):

Currently, the default filter_fn is _is_linear, which only matches nn.Linear modules. Can we pass a filter for nn.Conv to quantize convolution layers?

Yes. See the example in ao/test/quantization/quantize_/workflows/float8/test_float8_tensor.py (at commit bcd5dbc0cedcd2a330bb1b55bdbfb8625273cf23) in the pytorch/ao repository on GitHub.
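The idea above can be sketched as follows. This is a minimal illustration, not torchao's own code: the name is_conv is made up here, and quantize_ is only shown in a comment because which configs actually support conv layers depends on the config you choose. quantize_ calls filter_fn(module, fully_qualified_name) on every submodule and quantizes the ones for which it returns True, so selecting conv layers is just a matter of supplying a different predicate:

```python
import torch
import torch.nn as nn

# Illustrative custom filter_fn (name "is_conv" is our own) that matches
# convolution modules instead of the default nn.Linear-only _is_linear.
def is_conv(module: nn.Module, fqn: str) -> bool:
    return isinstance(module, (nn.Conv1d, nn.Conv2d, nn.Conv3d))

model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3),
    nn.ReLU(),
    nn.Flatten(),
)

# Preview which submodules the filter would select.
selected = [name for name, m in model.named_modules() if is_conv(m, name)]
print(selected)  # ['0'] — only the Conv2d layer

# With torchao installed, the call would look like this (config is whatever
# AOBaseConfig instance you are using; note that not every config supports
# convolution layers):
#   from torchao.quantization import quantize_
#   quantize_(model, config, filter_fn=is_conv)
```

The filter runs against every submodule, so you can also combine conditions, e.g. match on the fully qualified name to quantize only convolutions in a specific block.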