I’d like to ask about an issue I noticed yesterday. I was migrating a large script into a class-based version, and in the process I forgot to make the input tensor require gradients during training (I usually do this with inputTensor.requires_grad_()).
Despite this, the model converged to the same error it reaches with grad enabled on that input.
My question, then: why should I set requires_grad = True on the input tensors if doing so doesn’t change anything? Maybe I’m missing something?
I checked whether requires_grad was true by printing it. Of course I still call loss.backward(), optimizer.step(), and optimizer.zero_grad().
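For reference, here is a minimal sketch of the kind of loop I mean (the model, data, and names are placeholders, not my actual script):

```python
import torch
import torch.nn as nn

# Placeholder model and data -- my real script is larger; this just
# shows the structure of the training loop in question.
model = nn.Linear(10, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
criterion = nn.MSELoss()

inputs = torch.randn(32, 10)   # note: no inputs.requires_grad_() here
targets = torch.randn(32, 1)

for epoch in range(100):
    output = model(inputs)
    loss = criterion(output, targets)
    loss.backward()       # backprop through the model parameters
    optimizer.step()      # update the parameters
    optimizer.zero_grad() # clear gradients for the next iteration
```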
Sorry if my question (and my English) is very basic. Thanks!