Loss.backward() error when using M1 GPU (mps device)

I want to train a Seq2Seq model on an M1 GPU. The source code runs fine on the CPU. However, when I change the device from cpu to mps:

device = torch.device("mps")

a RuntimeError occurs while training the model:

RuntimeError: Expected a proper Tensor but got None (or an undefined Tensor in C++) for argument #0 'grad_y'
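For context, here is a minimal sketch of how I select the device (the `torch.backends.mps.is_available()` guard is the standard way to check for MPS support; the rest of my training code is unchanged and not shown here):

```python
import torch

# Use the M1 GPU (MPS backend) only when it is actually available;
# otherwise fall back to the CPU.
device = torch.device("mps" if torch.backends.mps.is_available() else "cpu")
print(device.type)
```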


Hi, have you found a solution?

I am having this problem too, and cannot find any information on how to solve it.