What causes "skipping cudagraphs due to input mutation"?

This message, `torch._inductor.utils: [WARNING] skipping cudagraphs due to input mutation`, appeared when I tried to train ResNet-50 on CIFAR-100. I set a fixed batch size of 500, which divides 50,000 evenly. To be honest, I don't clearly understand what "input mutation" means here; I assumed that as long as the shape of the input tensor is fixed, everything would be fine.

I would assume the same, so you should check when exactly this warning is raised.
E.g. the last batch of a DataLoader is often smaller than the rest, which could already be seen as a mutation of the computation graph.
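For reference, "input mutation" usually refers to the compiled region modifying one of its input tensors in place, which prevents Inductor from capturing the region as a CUDA graph (CUDA graph replay assumes static input buffers). A minimal sketch of what that pattern looks like (the function name is made up; compiling it with `torch.compile(..., mode="reduce-overhead")` on a CUDA device is the situation where the cudagraphs check would kick in):

```python
import torch

def mutating_step(x):
    # In-place op: the caller's input tensor itself is modified.
    # A compiled graph containing this kind of op mutates its input,
    # which is the pattern the cudagraphs warning refers to.
    x.add_(1.0)
    return x * 2

x = torch.zeros(4)
y = mutating_step(x)
# x is no longer zeros after the call -- it was mutated in place.
```

Rewriting such ops out-of-place (`x = x + 1.0`) typically removes the mutation, so it can be worth grepping your training/eval loop for `*_`-suffixed in-place calls on inputs.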

Thanks, but the shape of my input tensors is fixed. I checked my code and found that this warning appears when I calculate the accuracy of my model (during the first epoch). Is this just because I wrapped that part in `with torch.no_grad()`?