Gradients disabled globally?

Hi, so I have this very basic and unrealistic example:

import torch
import torch.nn as nn
import torch.nn.functional as F

a = torch.linspace(-1,1,100).reshape(-1,1)
b = a

c = torch.nn.Sequential(
    nn.Linear(1,10),
    nn.ReLU(),
    nn.Linear(10,1)
)

opt = torch.optim.Adam(c.parameters(), lr=1e-3)

for i in range(10):
  opt.zero_grad()
  b_hat = c(a)
  loss = F.mse_loss(b_hat,b)
  loss.backward()
  opt.step()

Sure, it’s not realistic training, but it should nevertheless work (in the sense that there shouldn’t be any error). Yet I get this:

RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn

which is raised at the loss.backward() line.

I cannot understand why this happens. If anyone can help, I’d be grateful.

For context, I run this on Colab. I ran the exact same code in my terminal and it worked. I ran similar code in Colab and it also worked. Why would gradients be disabled globally in this particular environment?

Your code works fine for me, so you might indeed have disabled gradient calculation in the failing script, e.g. via torch.set_grad_enabled(False).
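You can check this with torch.is_grad_enabled() right before the training loop: if an earlier cell flipped the global switch, every forward pass afterwards produces tensors with no grad_fn, and backward() then raises exactly the error you posted. A minimal sketch (the set_grad_enabled(False) call stands in for whatever earlier cell might have disabled it):

import torch
import torch.nn as nn
import torch.nn.functional as F

torch.set_grad_enabled(False)        # e.g. leftover from an earlier cell

print(torch.is_grad_enabled())       # False

a = torch.linspace(-1,1,100).reshape(-1,1)
c = nn.Sequential(nn.Linear(1,10), nn.ReLU(), nn.Linear(10,1))

b_hat = c(a)                         # no grad_fn is recorded on b_hat
loss = F.mse_loss(b_hat, a)
# loss.backward() would now raise:
# RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn

torch.set_grad_enabled(True)         # restore the default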

Thank you very much for your answer. I thought about that and was planning to add a torch.set_grad_enabled(True), because nowhere in my code did I disable gradients except inside torch.no_grad() blocks, and this error happened outside those blocks.

I just re-ran all the code and now, magically, it works!

However, I stumbled upon this warning:

<ipython-input-11-46a46f132a51>:1: UserWarning: The .grad attribute of a Tensor that is not a leaf Tensor is being accessed. Its .grad attribute won't be populated during autograd.backward(). If you indeed want the .grad field to be populated for a non-leaf Tensor, use .retain_grad() on the non-leaf Tensor. If you access the non-leaf Tensor by mistake, make sure you access the leaf Tensor instead. See github.com/pytorch/pytorch/pull/30531 for more informations. (Triggered internally at aten/src/ATen/core/TensorBody.h:489.)

This warning appeared while running the following function:

def infer(x):
  for layer in m: # m is a torch.nn.ModuleList
    x = layer(x)
  return x

infer(x).grad
>> None

It’s just a warning, but I don’t quite understand why it appears.
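For what it’s worth, the warning seems to match what the last line does: infer(x) returns a non-leaf tensor (it is computed from leaf tensors such as the layer parameters in m), so autograd never populates its .grad during backward(), and reading it just gives None plus the warning. If you actually want that gradient, the warning points to .retain_grad(). A small sketch along the lines of your snippet (the example input and the squared-mean loss are purely illustrative):

x = torch.linspace(-1,1,100).reshape(-1,1)

out = infer(x)            # non-leaf tensor: this is what infer(x).grad warned about
out.retain_grad()         # ask autograd to keep the gradient for this non-leaf tensor

loss = out.pow(2).mean()  # any scalar loss, just for illustration
loss.backward()

print(out.grad.shape)     # now populated, same shape as out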