When to call torch.cuda.synchronize()?

Dear All,

I have a strange problem with a code snippet that looks like the one below:

# model doesn't contain any trainable parameters
def comp(model, x):
    x.requires_grad_()
    y = model(x)
    y.backward()          # populates x.grad
    g = x.grad
    return x.detach_(), y.detach_(), g

def run(model, x):
    x, y, g = comp(model, x)

    while True:
        # ... plain compute with x, y, and g; no backward() calls ...
        torch.cuda.synchronize()
        # ... call some function that computes an SVD ...
        x.copy_(z)      # "z" is some tensor produced by the compute above
        x.grad.zero_()
        x, y, g = comp(model, x)

The code runs correctly on the CPU; however, to get the correct result on the GPU I have to insert the torch.cuda.synchronize() call. It also works correctly on the GPU if the sync is replaced by a Python sleep call. Could you suggest what problem this may indicate?
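In case it helps, here is a minimal runnable sketch of the structure. The quadratic stand-in model, the gradient-step tensor z, and the SVD call are just illustrative fill-ins for my real code, not the actual computation:

import torch

device = "cuda" if torch.cuda.is_available() else "cpu"

# stand-in for a model with no trainable parameters
def model(x):
    return (x * x).sum()

def comp(model, x):
    x.requires_grad_()
    y = model(x)
    y.backward()          # populates x.grad
    g = x.grad
    return x.detach_(), y.detach_(), g

x = torch.randn(8, 8, device=device)
x, y, g = comp(model, x)

for _ in range(100):      # while True in the real code
    z = x - 0.1 * g                   # stand-in for the plain compute
    if device == "cuda":
        torch.cuda.synchronize()      # <-- the call in question
    u, s, vh = torch.linalg.svd(z)    # stand-in for the SVD step
    x.copy_(z)
    x.grad.zero_()
    x, y, g = comp(model, x)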
Thanks!

Hi,

If you only use PyTorch’s API, you should never need to call synchronize() yourself.
Can you share a small code sample (that we can run on Colab) that shows the bad result?
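For context, CUDA kernel launches are asynchronous: the Python call returns as soon as the kernel is queued. PyTorch inserts the necessary synchronization automatically whenever a value is read back to the CPU (e.g. .item(), .cpu(), print), so the usual place an explicit sync is legitimately needed is when you step outside PyTorch's API, such as timing with Python's clock. A quick sketch, assuming a CUDA device is available (the shapes are arbitrary):

import time
import torch

a = torch.randn(4096, 4096, device="cuda")
b = torch.randn(4096, 4096, device="cuda")

# Without a sync, time.time() would only measure the kernel launch,
# not the matmul itself, because the launch returns immediately.
torch.cuda.synchronize()
t0 = time.time()
c = a @ b
torch.cuda.synchronize()  # wait for the matmul to actually finish
print(f"matmul took {time.time() - t0:.4f}s")

# Reading a value back to the CPU synchronizes implicitly, which is
# why plain PyTorch code normally gives correct results without any
# manual torch.cuda.synchronize() call.
print(c.sum().item())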