I have a model that contains a custom permutation that I need to apply to the second dimension of a tensor. It is implemented as
def forward(self, x: torch.Tensor):
    return x[:, self.permutation]

where self.permutation is a LongTensor.
When the model is not quantized (x is a FloatTensor) everything works correctly, but when I quantize the model I get the following error:
RuntimeError: Could not run 'aten::empty.memory_format' with arguments from the 'QuantizedCPU' backend. 'aten::empty.memory_format' is only available for these backends: [CPU, CUDA, MkldnnCPU, SparseCPU, SparseCUDA, BackendSelect, Autograd, Profiler, Tracer]
It seems that advanced indexing is not implemented for the QuantizedCPU backend. I'm using PyTorch 1.6.0.
Is there any alternative permutation operation that I can use?
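For now, the only workaround I can think of is to dequantize around the indexing and re-quantize the result with the original quantization parameters. A minimal sketch of that idea (the scale, zero point, and permutation values here are just illustrative):

```python
import torch

# Illustrative inputs: a float tensor, arbitrary quantization params,
# and an example permutation of the second dimension.
x = torch.randn(2, 4)
qx = torch.quantize_per_tensor(x, scale=0.1, zero_point=0, dtype=torch.qint8)
perm = torch.tensor([3, 1, 0, 2], dtype=torch.long)

# Advanced indexing works on the dequantized float tensor; re-quantize
# with the input's own scale/zero_point so downstream quantized ops
# still receive a qint8 tensor.
out = torch.quantize_per_tensor(
    qx.dequantize()[:, perm],
    scale=qx.q_scale(),
    zero_point=qx.q_zero_point(),
    dtype=torch.qint8,
)
```

But this adds a dequantize/quantize round trip per forward call, which seems wasteful, so I'd prefer a native quantized op if one exists.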
Thanks,
Matteo