Hi, I was using torch.cuda.amp.autocast to save memory during training.
But when I call torch.utils.checkpoint in the middle of the network's forward pass,
x = checkpoint.checkpoint(self.layer2, x)
feat = checkpoint.checkpoint(self.layer3, x)
I get the error below:
RuntimeError: Input type (torch.cuda.HalfTensor) and weight type (torch.cuda.FloatTensor) should be the same
Is it not possible to use CUDA amp and checkpoint together?
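For reference, here is a minimal, self-contained sketch of the pattern I am trying to use. It is an assumption-laden CPU analogue (CPU autocast with bfloat16 standing in for cuda.amp, a toy three-layer module instead of my real network), and it passes use_reentrant=False so that the checkpointed segment is replayed with the surrounding autocast state during backward:

```python
# Minimal CPU sketch of combining autocast with gradient checkpointing.
# Assumptions: torch with CPU autocast support and the use_reentrant
# keyword on torch.utils.checkpoint.checkpoint; bfloat16 CPU autocast
# stands in for torch.cuda.amp.autocast here.
import torch
import torch.nn as nn
from torch.utils import checkpoint


class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.layer1 = nn.Linear(8, 8)
        self.layer2 = nn.Linear(8, 8)
        self.layer3 = nn.Linear(8, 8)

    def forward(self, x):
        x = self.layer1(x)
        # use_reentrant=False preserves the ambient autocast state when
        # the segment is recomputed during the backward pass.
        x = checkpoint.checkpoint(self.layer2, x, use_reentrant=False)
        feat = checkpoint.checkpoint(self.layer3, x, use_reentrant=False)
        return feat


net = Net()
x = torch.randn(4, 8)
with torch.autocast(device_type="cpu", dtype=torch.bfloat16):
    out = net(x)
# Backward through the checkpointed segments; no dtype mismatch here.
out.float().sum().backward()
print(out.dtype)
print(net.layer3.weight.grad is not None)
```

On CUDA the same shape of code would use torch.cuda.amp.autocast around the forward pass; the question is whether the checkpointed segments are supposed to see the autocast state as well.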