PyTorch tries to preload its CUDA dependencies from the CUDA pip packages, but this preload logic doesn't work reliably in a truly hermetic Python environment such as one managed by Bazel.
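For context, the manual patch Bazel users end up applying amounts to preloading the CUDA shared libraries themselves before `import torch`. A minimal sketch of that workaround, assuming the `nvidia/<pkg>/lib/*.so*` layout used by the CUDA pip wheels (the helper name here is hypothetical, not a torch API):

```python
import ctypes
import glob
import os
import site
import sys

def preload_nvidia_libs():
    """Best-effort preload of CUDA shared libraries shipped in the nvidia
    pip wheels, so a later `import torch` can resolve them even in a
    hermetic (e.g. Bazel-managed) environment where torch's own preload
    heuristics fail. Returns the list of library paths that loaded.
    """
    loaded = []
    # Hermetic environments may not populate site.getsitepackages(),
    # so also scan sys.path entries directly.
    roots = set()
    try:
        roots.update(site.getsitepackages())
    except Exception:
        pass
    roots.update(p for p in sys.path if os.path.isdir(p))
    for root in roots:
        for lib in sorted(glob.glob(os.path.join(root, "nvidia", "*", "lib", "*.so*"))):
            try:
                # RTLD_GLOBAL makes the symbols visible to torch's own
                # shared objects when torch is imported afterwards.
                ctypes.CDLL(lib, mode=ctypes.RTLD_GLOBAL)
                loaded.append(lib)
            except OSError:
                continue  # skip libraries that fail to load
    return loaded
```

On a machine without the CUDA wheels this simply returns an empty list; in an affected Bazel setup it would be called once before the first `import torch`.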
Someone contributed fixes, but the first PR was closed as stale (Load cuda deps more aggressively by keith · Pull Request #137059 · pytorch/pytorch · GitHub), and the second hasn't gotten much traction (Load cuda deps more aggressively by keith · Pull Request #165648 · pytorch/pytorch · GitHub).
Could someone from the team take a look and help merge this, so that PyTorch users in Bazel projects don't have to patch torch manually?