Segfault in libtorch_cpu running fastai

I tried installing PyTorch via conda locally (outside the Docker container) and I'm seeing the same crash, yeah. Is there a way to configure/compile PyTorch so it doesn't use those newer instructions?
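One thing that might be worth trying before a full source build (this is a guess on my part, not a confirmed fix): ATen dispatches to different kernel sets at runtime, and the `ATEN_CPU_CAPABILITY` environment variable can force the baseline, non-AVX kernels. It only governs ATen's own dispatched kernels, so it may not help if the segfault happens inside MKL/oneDNN code paths instead:

```shell
# Force ATen's baseline (non-AVX) CPU kernel dispatch for this session.
export ATEN_CPU_CAPABILITY=default
# then launch the training script as usual, e.g.:
# python train.py        # (train.py is a placeholder for your script)
echo "$ATEN_CPU_CAPABILITY"
```

If the crash persists with this set, that would suggest the illegal instruction is coming from a library outside ATen's dispatch (or from code compiled with AVX as the baseline), and a source build would be the remaining option.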

I admit the CPU itself is a bit old (first-gen Core i7, Nehalem)… I repurposed my old gaming machine and swapped out the GPU for something recent and powerful. I had hoped the CPU wouldn't matter much as long as the GPU was recent.
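For reference, here's a quick Linux-only check of which SIMD extensions the kernel reports for the CPU. A Nehalem-era Core i7 should list `sse4_2` but no `avx`/`avx2`, which is consistent with a binary that assumes AVX crashing on it (the `simd_flags` helper is just something I wrote for this check, not part of any library):

```python
# List the sse*/avx* feature flags the CPU advertises via /proc/cpuinfo.
def simd_flags(cpuinfo_text):
    """Return sorted sse*/avx* flags parsed from /proc/cpuinfo contents."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            flags = line.split(":", 1)[1].split()
            return sorted(f for f in set(flags) if f.startswith(("sse", "avx")))
    return []

try:
    with open("/proc/cpuinfo") as f:
        print(simd_flags(f.read()))
except OSError:
    print("no /proc/cpuinfo (non-Linux system)")
```

If `avx` is absent from the output, any library code compiled with AVX instructions will raise SIGILL on this machine, which often surfaces as a segfault report.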

Thanks for your help.