I’m the developer of a plugin for the VFX compositing software Nuke. We use libtorch inside the plugin to run model inference. libtorch and all relevant dependencies are statically linked into our dynamic library binary. It all works well, both with CUDA and on the CPU.
However, we’ve hit a problem when the plugin runs on older CPUs that do not support the AVX and AVX2 instruction sets: it crashes instantly.
I have read the PyTorch README (pytorch/README.md in the pytorch/pytorch repository on GitHub).
It says that PyTorch should dynamically choose the correct CPU code path at runtime (in this case the non-AVX/AVX2 one) if the following environment variable is set: ATEN_CPU_CAPABILITY=default
However, setting it makes no difference in behaviour: the plugin still crashes.
Is there a limitation to this approach, i.e. does it not work because libtorch is statically linked into our binary? Or is there some other problem? Should it simply work once the environment variable above is defined?
All the best,