Getting crashes when running on older (non-AVX) CPUs

Hi,
I’m the developer of a plugin for the VFX compositing software Nuke. We use libtorch inside the plugin to run model inference, with libtorch and all relevant dependencies statically linked into our dynamic library binary. It all works well, on both the CUDA and CPU backends.
However, we’ve run into problems on older CPUs that do not support the AVX and AVX2 instruction sets: the plugin crashes instantly.
I have read this page: [pytorch/README.md at master · pytorch/pytorch · GitHub](https://github.com/pytorch/pytorch/blob/master/README.md)
It says that PyTorch should dynamically choose the correct CPU code path (in this case the non-AVX/AVX2 one) at runtime if the following environment variable is set: `ATEN_CPU_CAPABILITY=default`
However, we’re not seeing any difference in behaviour when we set it, i.e. it still crashes.
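For completeness, here is roughly how we’re setting it programmatically from the plugin, in addition to setting it in the shell that launches Nuke (a minimal sketch; the struct name is illustrative, and it assumes ATen reads the variable lazily on its first CPU kernel dispatch):

```cpp
// Force the generic (non-vectorized) ATen CPU kernels as early as
// possible, i.e. when the plugin's shared library is loaded and
// before Nuke invokes any of our inference code.
// Assumption: libtorch reads ATEN_CPU_CAPABILITY lazily, on the
// first CPU kernel dispatch, so setting it here is early enough.
#include <cstdlib>

namespace {
struct ForceDefaultCpuCapability {
    ForceDefaultCpuCapability() {
        // Overwrite any value inherited from the environment.
        setenv("ATEN_CPU_CAPABILITY", "default", /*overwrite=*/1);
    }
};
// Static initializer: runs at .so load time.
ForceDefaultCpuCapability g_forceDefaultCpuCapability;
}  // namespace
```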

Is there a limitation to this approach, i.e. could it be failing because libtorch is statically linked into our binary? Or is there some other problem? Should it simply work once the environment variable above is set?

All the best,
David

Anybody? Somebody has got to have some insight into this matter. :slight_smile:
Maybe you could point me to the right person, @ptrblck; you’ve been so helpful in the past. :+1:

I would assume that the non-vectorized code path should be taken instead of a crash, so please feel free to create a GitHub issue for your use case.
In the meantime, you might want to create a special build with AVX disabled, e.g. as described in this post, for the non-AVX CPU workstations.
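Alternatively, as a defensive measure, your plugin could detect AVX/AVX2 support at runtime and fail gracefully instead of crashing. A minimal sketch using the GCC/Clang CPU-feature builtins (the helper names are illustrative; MSVC would need `__cpuid` instead):

```cpp
// Guard libtorch inference behind a runtime CPU-feature check so the
// plugin degrades gracefully on old hardware instead of crashing.
#include <stdexcept>

bool cpuSupportsAvx() {
    // Safe to call more than once; required before __builtin_cpu_supports
    // in code that may run before main().
    __builtin_cpu_init();
    return __builtin_cpu_supports("avx") && __builtin_cpu_supports("avx2");
}

void runInferenceGuarded() {
    if (!cpuSupportsAvx()) {
        // Report the limitation to the user rather than crash.
        throw std::runtime_error(
            "This plugin build requires a CPU with AVX/AVX2 support.");
    }
    // ... call into libtorch as usual ...
}
```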