CUDA error about stream

When trying to bind the PyTorch model to a CUDA tensor, I encountered this problem: RuntimeError: CUDA error: operation would make the legacy stream depend on a capturing blocking stream. The code that causes this error is simply model.to(device). Has anyone met the same problem? How can I deal with it?

I’m not sure what “bind the PyTorch model to a CUDA tensor” means, so could you explain your use case a bit more, please?

My code is as follows:

import torch
import torchvision

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
model = torchvision.models.resnet34(pretrained=True).to(device)

and the call to the .to(...) method causes the problem. Is that clearer?

Thanks for the update! The error sounds like a setup issue, as it seems to be raised during the first CUDA operation.
Try restarting your machine (especially after e.g. a driver update) and rerun the code.
If it’s still failing, reinstall the driver as well as the PyTorch binaries and check again.
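
To verify the setup after reinstalling, a quick sanity check could look something like this (a minimal sketch; the resnet34 call just mirrors your snippet above):

import torch
import torchvision

# Print the installed versions and the CUDA version PyTorch was built with
print(torch.__version__, torchvision.__version__, torch.version.cuda)
print("CUDA available:", torch.cuda.is_available())
print("Device:", torch.cuda.get_device_name(0))

# The error seems to be raised on the first CUDA operation,
# so a tiny tensor op on the GPU should already be enough to trigger or rule it out
x = torch.randn(8, 8, device="cuda")
print((x @ x).sum().item())

# If that works, moving the model should work as well
model = torchvision.models.resnet34(pretrained=True).to("cuda")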

Before doing these steps, you could also try to run your workload in a Docker container and check whether this CUDA setup works there.
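
For example, assuming Docker and the NVIDIA Container Toolkit are installed, the same check inside the official pytorch/pytorch image could look like this (image tag left generic on purpose):

docker run --gpus all --rm -it pytorch/pytorch python -c "import torch; print(torch.__version__, torch.version.cuda, torch.cuda.is_available())"

If CUDA works inside the container but not on the host, that would point to the local driver or PyTorch installation.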

Thanks a lot! The problem was solved by reinstalling a previous version of PyTorch.