Using torch.autograd.grad on a multi-GPU setup

Hey all, does torch.autograd.grad have known issues with a multi-GPU setup? I am getting the error below.

```python
grads = torch.autograd.grad(loss, parameters, allow_unused=allow_unused)
```

```
File "/opt/conda/envs/pytorch/lib/python3.10/site-packages/torch/autograd/__init__.py", line 394, in grad
    result = Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
RuntimeError: One of the differentiated Tensors appears to not have been used in the graph. Set allow_unused=True if this is the desired behavior.
```
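For context, here is a minimal sketch of the pattern I'm running; the model and the `nn.DataParallel` wrapping are illustrative assumptions, not my exact code. Passing `allow_unused=True` makes the call return `None` for any parameter that did not contribute to `loss`, rather than raising the error above:

```python
import torch
import torch.nn as nn

# Illustrative model; my real setup differs.
model = nn.DataParallel(nn.Linear(10, 1)).cuda()  # assumption: multi-GPU via DataParallel

x = torch.randn(4, 10).cuda()
loss = model(x).sum()

parameters = [p for p in model.parameters() if p.requires_grad]

# With allow_unused=True, any parameter not reached in the backward
# graph yields None instead of triggering the RuntimeError above.
grads = torch.autograd.grad(loss, parameters, allow_unused=True)
```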