RuntimeError: Could not run 'aten::mm' with arguments from the 'QuantizedCPU' backend. 'aten::mm' is only available for these backends: [CPU, CUDA, SparseCPU, SparseCUDA, BackendSelect, Named, AutogradOther, AutogradCPU, AutogradCUDA, AutogradXLA, AutogradPrivateUse1, AutogradPrivateUse2, AutogradPrivateUse3, Tracer, Autocast, Batched, VmapMode].
h0 = torch.matmul(input, self.W)
I was trying to apply static quantization to a graph convolutional neural network, and during evaluation this operation was not permitted: the quantized input tensor reaches torch.matmul, which has no QuantizedCPU kernel. What options do I have to make torch.matmul work in a quantized model?
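For context, one workaround I have been considering is keeping the matmul as a float "island": dequantize its inputs, run it in float, then re-quantize the result so downstream quantized ops still work. This is only a minimal sketch assuming eager-mode static quantization; the layer names (GCNLayer, W) match my model, but the stub placement is my own guess, not an official recipe:

```python
import torch
import torch.nn as nn

class GCNLayer(nn.Module):
    """GCN-style layer where torch.matmul runs in float.

    DeQuantStub/QuantStub mark the float region so that, after
    prepare/convert, only this op falls back to fp32 while the rest
    of the graph can stay quantized.
    """

    def __init__(self, in_features, out_features):
        super().__init__()
        self.W = nn.Parameter(torch.randn(in_features, out_features))
        self.dequant = torch.ao.quantization.DeQuantStub()
        self.quant = torch.ao.quantization.QuantStub()

    def forward(self, x):
        x = self.dequant(x)            # back to float for the unsupported op
        h0 = torch.matmul(x, self.W)   # aten::mm has no QuantizedCPU kernel
        return self.quant(h0)          # re-quantize for downstream quantized ops

# Before prepare/convert the stubs are identity, so the float path still works:
layer = GCNLayer(4, 3)
out = layer(torch.randn(2, 4))
print(out.shape)  # torch.Size([2, 3])
```

The obvious cost is a dequant/quant round-trip around every matmul, so I would also be interested in whether FX graph mode or a different op (e.g. replacing the matmul with an nn.Linear, which does have a quantized implementation) is the better route.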