How can I save the gradients of parameters in a checkpoint for subsequent model training?

a = m.weight.grad.data.clone()
AttributeError: 'NoneType' object has no attribute 'data'

Your gradients were cleared or never calculated.
What is your use case and could you provide a bit more information?
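
One quick way to check (assuming model is your nn.Module) is to print which parameters actually have a .grad populated:

for name, param in model.named_parameters():
    # param.grad is None until backward() populates it, and it stays None
    # if the parameter was not part of the computation graph
    print(name, 'no grad' if param.grad is None else 'grad present')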

I use a checkpoint to save the state of the trained model, but when I load the model, the gradients of the parameters are None. How can I keep the gradients in the checkpoint?

print("Best accuracy: ", best_prec1)
save_checkpoint({
‘epoch’: epoch + 1,
‘state_dict’: model.state_dict(keep_vars=True),
‘best_prec1’: best_prec1,
‘optimizer’: optimizer.state_dict(),
}, is_best, filepath=args.save)
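
(save_checkpoint is not a torch builtin; a minimal sketch of what such a helper typically looks like, assuming filepath is a directory and the filenames are placeholders:)

import os
import shutil

import torch


def save_checkpoint(state, is_best, filepath):
    """Serialize the checkpoint dict and keep a copy of the best model."""
    path = os.path.join(filepath, 'checkpoint.pth.tar')
    torch.save(state, path)
    if is_best:
        # Keep a separate copy of the best-performing checkpoint
        shutil.copyfile(path, os.path.join(filepath, 'model_best.pth.tar'))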

if args.model:
    if os.path.isfile(args.model):
        print("=> loading checkpoint '{}'".format(args.model))
        checkpoint = torch.load(args.model)
        args.start_epoch = checkpoint['epoch']
        best_prec1 = checkpoint['best_prec1']
        model.load_state_dict(checkpoint['state_dict'])
        print("=> loaded checkpoint '{}' (epoch {}) Prec1: {:f}"
              .format(args.model, checkpoint['epoch'], best_prec1))
    else:
        print("=> no checkpoint found at '{}'".format(args.model))

You could store the gradients manually, as they won't be saved via the state_dict: torch.save() serializes the parameter tensors themselves, but not their .grad attributes, even with keep_vars=True.
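
A minimal sketch of how that could look, adding a 'gradients' entry to the checkpoint dict from your code above (the key name is arbitrary) and re-attaching the gradients after load_state_dict():

# Saving: collect the current gradients by parameter name
grads = {name: param.grad.clone()
         for name, param in model.named_parameters()
         if param.grad is not None}
save_checkpoint({
    'epoch': epoch + 1,
    'state_dict': model.state_dict(),
    'gradients': grads,
    'best_prec1': best_prec1,
    'optimizer': optimizer.state_dict(),
}, is_best, filepath=args.save)

# Loading: restore the parameters first, then re-attach the gradients
checkpoint = torch.load(args.model)
model.load_state_dict(checkpoint['state_dict'])
for name, param in model.named_parameters():
    if name in checkpoint['gradients']:
        param.grad = checkpoint['gradients'][name].clone()

Note that the optimizer's internal buffers (e.g. momentum) are already covered by optimizer.state_dict(), so restoring the gradients is only needed if you want to call optimizer.step() before running the next backward pass.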

But I want to use the parameters at the same time. How can I store the gradients in the checkpoint manually so that I can use them later?