NotImplementedError: Could not run 'torchvision::nms' with arguments from the 'CUDA' backend

I installed torchvision from source and I get this error:
command - torchvision.ops.nms(boxes, scores, iou_thres)
Error - NotImplementedError: Could not run 'torchvision::nms' with arguments from the 'CUDA' backend.
If I install it via pip instead, there is no error, but then I don't have access to VideoReader in torchvision:
command - torchvision.set_video_backend("video_reader")
Error - RuntimeError: video_reader video backend is not available. Please compile torchvision from source and try again

I want to use VideoReader without getting the "backend is not available" error.
In both cases I work with CUDA and it is always active. (I work with YOLOv7.)

I know I can work around the problem by transferring the data to the CPU, but that takes a long time and makes VideoReader useless.
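For reference, here is a minimal pure-Python sketch of what `torchvision.ops.nms` computes (greedy IoU-based suppression over `[x1, y1, x2, y2]` boxes). It is only an illustration of the operator's semantics, not a replacement for the CUDA kernel:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as [x1, y1, x2, y2]."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, iou_thres):
    """Greedy NMS: return indices of kept boxes, highest score first."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    for i in order:
        # Keep box i only if it does not overlap too much with any kept box.
        if all(iou(boxes[i], boxes[j]) <= iou_thres for j in keep):
            keep.append(i)
    return keep
```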

pip freeze:

When installing torchvision from source I set FORCE_CUDA=1 and then got a build error:
OSError: CUDA_HOME environment variable is not set. Please set it to your CUDA install root.

In my case, CUDA comes from a Docker image (tritonserver). I don't understand what to do in this case.

Did you try to set the CUDA_HOME env variable as suggested in the error message for the source build?

Thank you very much for your reply, I really appreciate it! The problem is that I don't know which path to point the CUDA_HOME variable at when CUDA is inside Docker. People usually point it at "/usr/local/cuda", but that option does not work for me. With the pip installation there is no such problem.

CUDA_HOME should point to the locally installed CUDA toolkit, which is needed for a source build with GPU support. Yes, the pip wheels don’t need a locally installed CUDA toolkit as we bundle all dependencies in them.
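Inside the container, one way to find that root is to derive it from the `nvcc` binary if it is on the PATH; a sketch (the `/usr/local/cuda` fallback is an assumption about the image and should be verified):

```shell
# Derive CUDA_HOME from nvcc's location (<root>/bin/nvcc -> <root>).
NVCC_PATH="$(command -v nvcc || true)"
if [ -n "$NVC C_PATH" ]; then
    export CUDA_HOME="$(dirname "$(dirname "$NVCC_PATH")")"
else
    # Common default in CUDA-based images -- an assumption, check it exists.
    export CUDA_HOME=/usr/local/cuda
fi
echo "CUDA_HOME=$CUDA_HOME"
```

With CUDA_HOME set (and FORCE_CUDA=1), the source build should be able to locate the toolkit.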