How to save and load lr_scheduler stats in pytorch?

Thank you very much. I did save it the way you mentioned, but for resuming I only needed:
scheduler.load_state_dict(checkpoint['scheduler'])
and that's all; no need to change anything else.
So this is how the resume section is defined now:

  # optionally resume from a checkpoint
  if resume:
    if os.path.isfile(resume):
      print_log("=> loading checkpoint '{}'".format(resume), log)
      checkpoint = torch.load(resume)
      # restore training state: recorder, epoch counter, scheduler, model and optimizer
      recorder = checkpoint['recorder']
      start_epoch = checkpoint['epoch']
      scheduler.load_state_dict(checkpoint['scheduler'])
      net.load_state_dict(checkpoint['state_dict'])
      optimizer.load_state_dict(checkpoint['optimizer'])
      print_log("=> loaded checkpoint '{}' (epoch {})".format(resume, checkpoint['epoch']), log)
    else:
      print_log("=> no checkpoint found at '{}'".format(resume), log)
  else:
    print_log("=> did not use any checkpoint for {} model".format(arch), log)
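For completeness, here is a minimal sketch of the saving side that the resume code above expects. The model, optimizer, and scheduler here are placeholders I made up for illustration (a tiny linear net, SGD, and StepLR); the original thread uses its own `net`, `optimizer`, `scheduler`, and a custom `recorder` object. The point is just that every key the resume block reads ('epoch', 'state_dict', 'optimizer', 'scheduler', 'recorder') must be written into the same checkpoint dict with `torch.save`:

```python
import torch
import torch.nn as nn
import torch.optim as optim

# Placeholder objects standing in for the real training setup.
net = nn.Linear(10, 2)
optimizer = optim.SGD(net.parameters(), lr=0.1)
scheduler = optim.lr_scheduler.StepLR(optimizer, step_size=30, gamma=0.1)

epoch = 5
checkpoint = {
    'epoch': epoch,
    'state_dict': net.state_dict(),
    'optimizer': optimizer.state_dict(),
    'scheduler': scheduler.state_dict(),  # the key the resume code loads
    'recorder': None,  # placeholder; the original stores a custom recorder object
}
torch.save(checkpoint, 'checkpoint.pth')

# Later, resuming just the scheduler (as in the code above):
ckpt = torch.load('checkpoint.pth')
scheduler.load_state_dict(ckpt['scheduler'])
```

Since `lr_scheduler` objects expose `state_dict()` / `load_state_dict()` just like modules and optimizers, no manual bookkeeping of the learning rate is needed.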