Remove gradient from torch.mul

Say a and b are two tensors that require gradients. Is there a difference between the following two ways of defining a new tensor c from their multiplication, given that I don’t need a gradient for c? Does the second method save memory?

  1. c = torch.mul(a, b).detach()
  2. c = torch.zeros_like(a, requires_grad=False)
     c = torch.addcmul(c, a, b)
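
For reference, here is a minimal runnable sketch of both variants (the shapes and values of a and b are assumptions for illustration); inspecting requires_grad on the results shows whether autograd will track c:

  import torch

  a = torch.randn(3, requires_grad=True)
  b = torch.randn(3, requires_grad=True)

  # Method 1: multiply, then detach the result from the autograd graph.
  c1 = torch.mul(a, b).detach()

  # Method 2: start from a zeros tensor that does not require grad,
  # then write a * b into it with addcmul.
  c2 = torch.zeros_like(a, requires_grad=False)
  c2 = torch.addcmul(c2, a, b)

  print(c1.requires_grad)  # False: detach() cuts the link to the graph
  print(c2.requires_grad)  # True: addcmul's inputs a and b require grad

(Either variant could also be run under torch.no_grad() to avoid building the graph at all.)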