How to approach quantizing torch.matmul/aten::bmm?

In the meantime I’ve discovered that yes, it is, but now I’m trying to compile PyTorch with that flag enabled and can’t get the development docker image built. Made a new thread about that: Can't build Pytorch using the Dockerfile from the repo - #3 by yannbane