Running older code, but newer CUDA version

I’m trying to get this project running: [hanish3464/WORD-pytorch](https://github.com/hanish3464/WORD-pytorch) (Webtoon Object Recognition and Detection, WORD)

It has these requirements:

```
torch==1.0.0
torchvision==0.2.1
```

Which, if I understand correctly, is quite old. But I don’t know PyTorch/Python well enough to upgrade the project’s code to the latest versions, so I’m stuck trying to get these old ones to work.

I initially managed to get this working by running things on the CPU, with a few lines of code changed.

But recently I installed (upgraded?) the latest CUDA version (12.3), and even though I’m using the CPU, it just stopped working.

So I switched to trying to get it to work on the GPU. That doesn’t work either, apparently because my CUDA version is much more recent than anything torch 1.0.0 supports.

I get this error (even when trying to run on the CPU):

```
╰─⠠⠵ python3 demo.py --cls 0.90 --type white --demo_folder /ram/panel-detect/bwzjgrproyg//data/ --cpu
Loading weights from checkpoint : (./weights/Speech-Bubble-Detector.pth)
Loading weights from checkpoint : (./weights/Line-Text-Detector.pth)
TEST IMAGE (1/1): INPUT PATH:[/ram/panel-detect/bwzjgrproyg//data/page.jpg]
THCudaCheck FAIL file=/pytorch/aten/src/THC/THCGeneral.cpp line=405 error=8 : invalid device function
Traceback (most recent call last):
  File "demo.py", line 113, in <module>
    params=f_RCNN_param, cls=args.cls, bg=args.type)
  File "/home/arthur/dev/ai/WORD-pytorch/object_detection/bubble.py", line 39, in test_net
    rois_label = model(im_data, im_info, gt_boxes, num_boxes)  # predict
  File "/home/arthur/.anaconda3/envs/WORD/lib/python3.6/site-packages/torch/nn/modules/module.py", line 489, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/arthur/dev/ai/WORD-pytorch/object_detection/lib/model/faster_rcnn/faster_rcnn.py", line 52, in forward
    base_feat = self.RCNN_base(im_data)
  File "/home/arthur/.anaconda3/envs/WORD/lib/python3.6/site-packages/torch/nn/modules/module.py", line 489, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/arthur/.anaconda3/envs/WORD/lib/python3.6/site-packages/torch/nn/modules/container.py", line 92, in forward
    input = module(input)
  File "/home/arthur/.anaconda3/envs/WORD/lib/python3.6/site-packages/torch/nn/modules/module.py", line 489, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/arthur/.anaconda3/envs/WORD/lib/python3.6/site-packages/torch/nn/modules/activation.py", line 50, in forward
    return F.threshold(input, self.threshold, self.value, self.inplace)
  File "/home/arthur/.anaconda3/envs/WORD/lib/python3.6/site-packages/torch/nn/functional.py", line 838, in threshold
    result = _VF.threshold(input, threshold, value)
RuntimeError: CUDA error: no kernel image is available for execution on the device
```

How could I possibly work around this?

I can’t downgrade CUDA; I need CUDA 12 for the four other projects that require it.
But this project / torch 1.0.0 won’t work with CUDA 12.
So I’m stuck, what can I do?
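One thing worth double-checking first: the traceback shows a CUDA kernel being launched even with `--cpu`, so some code path is still touching the GPU (likely the checkpoint loading or the model placement). A minimal sketch of forcing everything onto the CPU — the model class and checkpoint name below are stand-ins, not WORD’s actual code:

```python
# Hedged sketch: pin both the checkpoint and the model to the CPU.
# nn.Linear and "demo.pth" are placeholders for the real detector/weights.
import torch
import torch.nn as nn

model = nn.Linear(4, 2)                        # stand-in for the detector network
torch.save(model.state_dict(), "demo.pth")     # simulate an existing checkpoint

device = torch.device("cpu")                   # never touch the GPU
# map_location remaps tensors saved on a CUDA device back onto the CPU:
state = torch.load("demo.pth", map_location=device)
model.load_state_dict(state)
model.to(device)

x = torch.randn(1, 4)                          # dummy input; must also be on CPU
with torch.no_grad():
    out = model(x.to(device))
print(out.shape)  # torch.Size([1, 2])
```

If the project’s loading code calls `torch.load(...)` without `map_location` and the checkpoint was saved from a GPU, CUDA gets initialized even in “CPU mode”, which would explain the error above.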

Is there some way to have multiple versions of CUDA around?
Could I upgrade the project to the latest version of torch? (I tried for a few hours and just hit my head on issues I have zero understanding of, and for which there isn’t much help on Google. Even ChatGPT was of little help.)

Is there something I’m missing / have not considered?
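On the multiple-CUDA question: the PyTorch binaries bundle their own CUDA runtime, so a conda environment can carry a CUDA version independent of the system-wide CUDA 12. A sketch under assumptions — the `cudatoolkit=10.0` pairing for torch 1.0.0 is a guess to verify against the old install matrix:

```shell
# Per-env CUDA runtime via conda; the system CUDA 12 install stays untouched.
conda create -n WORD python=3.6
conda activate WORD
# torch 1.0.0 era binaries shipped with their own CUDA runtime (10.0 assumed here):
conda install pytorch==1.0.0 torchvision==0.2.1 cudatoolkit=10.0 -c pytorch
```

Caveat: even with a matching runtime, 1.0.0 binaries contain no kernels for recent GPU architectures, so this only helps on an older card.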

Thanks for any possible help.

This doesn’t make sense, as the locally installed CUDA toolkit won’t even be used if you’ve installed PyTorch binaries with CUDA support, besides the obvious point that CUDA does not influence pure CPU execution.

Based on the error:

Your installed PyTorch binary does not ship compiled kernels for your GPU. This could be the case if you are using any Ampere or newer device, as these were released after PyTorch 1.0 came out.
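To make the “no kernel image” mechanism concrete: a binary ships native kernels (SASS) for specific compute capabilities, plus optionally PTX that the driver can JIT-compile for newer GPUs. A pure-Python illustration of that matching rule — the capability lists below are assumptions for illustration, not the exact archs PyTorch 1.0 was built for:

```python
# Hedged illustration (not PyTorch code): when can a GPU run a given binary?
def can_run(device_cc, sass_ccs, ptx_ccs):
    """True if a GPU with compute capability `device_cc` can execute a binary
    shipping native kernels for `sass_ccs` and PTX for `ptx_ccs`."""
    if device_cc in sass_ccs:                  # exact native kernel available
        return True
    # PTX built for an older-or-equal arch can be JIT-compiled forward:
    return any(cc <= device_cc for cc in ptx_ccs)

# Assumed arch list for an old binary: roughly sm_35..sm_70 (Kepler..Volta).
old_binary_sass = [(3, 5), (5, 0), (6, 0), (7, 0)]
old_binary_ptx = []                            # assume no forward-compatible PTX

print(can_run((7, 0), old_binary_sass, old_binary_ptx))  # True: Volta matches
print(can_run((8, 9), old_binary_sass, old_binary_ptx))  # False: "no kernel image"
```

An Ampere card (sm_80) or a 40-series card (sm_89) fails the check against a pre-Ampere binary, which is exactly the runtime error above.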