Train ResNet50 on PyTorch 1.8


I want to train this repository: weiaicunzai/pytorch-cifar100 on GitHub, which is a few years old.

I am getting this message during training:

python -net resnet50 -b 64 > runs/performance/a100_128_b64_resnet50.log
/home/user/.conda/envs/a100/lib/python3.8/site-packages/torch/jit/ UserWarning: The .grad attribute of a Tensor that is not a leaf Tensor is being accessed. Its .grad attribute won't be populated during autograd.backward(). If you indeed want the gradient for a non-leaf Tensor, use .retain_grad() on the non-leaf Tensor. If you access the non-leaf Tensor by mistake, make sure you access the leaf Tensor instead. See for more informations.
  if a.grad is not None:
/home/user/.conda/envs/a100/lib/python3.8/site-packages/torch/optim/ UserWarning: The epoch parameter in `scheduler.step()` was not necessary and is being deprecated where possible. Please use `scheduler.step()` to step the scheduler. During the deprecation, if epoch is different from None, the closed form is used instead of the new chainable form, where available. Please open an issue if you are unable to replicate your use case:
  warnings.warn(EPOCH_DEPRECATION_WARNING, UserWarning)

Do I need to care about these messages? If yes, how do I fix them?

Both warnings already point to a fix. Did you try to apply it, and did it not work?
For example, the first one says the .grad attribute of a non-leaf tensor can be accessed after calling .retain_grad() on that tensor.
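To illustrate what the warning means, here is a minimal, self-contained sketch (not the repo's code) showing the difference between a leaf tensor and a non-leaf tensor, and how .retain_grad() makes the non-leaf gradient available:

```python
import torch

x = torch.ones(2, 2, requires_grad=True)  # leaf tensor: created by the user
y = x * 2                                 # non-leaf tensor: result of an op
y.retain_grad()                           # ask autograd to keep y.grad after backward()

z = y.sum()
z.backward()

print(x.grad)  # populated by default for leaf tensors
print(y.grad)  # populated only because retain_grad() was called
```

Without the .retain_grad() call, y.grad would be None after backward() and accessing it raises exactly the UserWarning you are seeing.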

I tried to apply it and it seems to work… but I am not sure whether it is fine to leave it like this…
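For the second warning, the fix is simply to drop the epoch argument: in recent PyTorch versions the scheduler tracks the epoch count internally, so scheduler.step() is enough. Assuming the repo uses a MultiStepLR-style scheduler (check your train.py for the exact scheduler and milestones), a minimal sketch of the recommended call order:

```python
import torch

model = torch.nn.Linear(4, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
# milestones/gamma here are illustrative, not the repo's actual values
scheduler = torch.optim.lr_scheduler.MultiStepLR(
    optimizer, milestones=[2, 4], gamma=0.1
)

for epoch in range(5):
    # ... run one epoch of training here ...
    optimizer.step()
    scheduler.step()        # new chainable form: no epoch argument
    # scheduler.step(epoch) # old form, triggers the deprecation warning
```

With these milestones the learning rate decays from 0.1 to 0.01 after epoch 2 and to 0.001 after epoch 4, exactly as the explicit-epoch form would have done.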