Does torch.load or torch.jit.load need NVIDIA drivers?
The model was trained using CUDA, but we are trying to run inference on a CPU-only machine.
Everything in the inference code is set to use the CPU, yet we get a runtime error about missing NVIDIA drivers on the torch.load call.
Any way to get around this?
Are you already using the map_location argument to load the data to the host? If not, torch.load will try to restore the tensors to the device they were saved from, which might be the GPU.
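A minimal sketch of what that looks like, assuming a checkpoint saved as a state dict (the file name "model.pt" is just a placeholder for your own path):

```python
import torch

# Save a small state dict (stand-in for a GPU-trained checkpoint).
model = torch.nn.Linear(4, 2)
torch.save(model.state_dict(), "model.pt")

# map_location="cpu" remaps every storage to the host at load time,
# so no CUDA runtime or driver is needed on the inference machine.
state_dict = torch.load("model.pt", map_location=torch.device("cpu"))
model.load_state_dict(state_dict)

# TorchScript archives accept the same argument:
# scripted = torch.jit.load("model_scripted.pt", map_location="cpu")
```

The same map_location keyword works for torch.jit.load, so a scripted model saved from a GPU run can also be remapped to the CPU.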
Yes, we tried several torch.load(…, map_location=…) variants that we have seen on here, but we get a RuntimeError about missing NVIDIA drivers for all of them.
The model was saved to disk during a GPU training run, and we are trying to load it on a CPU-only machine.