'gelu_backward_out_cuda' is not a member of 'at::native' in PyTorch 1.10

Hi PyTorch team,

After upgrading to PyTorch 1.10, I can no longer call the C++ API for the GELU backward function. I did find that the API for the GELU backward has been changed from gelu_backward_cuda to gelu_backward_out_cuda. However, I still get the following error:

error: ‘gelu_backward_out_cuda’ is not a member of ‘at::native’

A similar error occurs for softmax_backward_cuda_out. Are there any solutions for this, or any examples of using the C++ API to call the GELU and softmax backward functions in PyTorch 1.10?

Kind regards.

Are you calling into at::native::foo? That has never been supported; only at::foo is. at::native is an implementation detail. (And even the internal uses have been switched to at:: if I remember correctly.)
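For the gelu case, a rough sketch (assuming grad_output and input are the tensors you already have) would be to call the public op rather than the kernel:

// internal CUDA kernel name, not part of the public API:
// at::native::gelu_backward_out_cuda(...);
// public, dispatcher-facing op instead:
auto grad_input = at::gelu_backward(grad_output, input);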

Best regards

Thomas

@tom Hi Thomas, thanks for your reply!

I am using at::native::. For example,

at::native::gelu_backward_out_cuda(grad_output, input);

However, this results in the error above with PyTorch 1.10. With PyTorch 1.9 and earlier versions, I could call at::native::gelu_backward_cuda(grad_output, input); successfully.

Kind regards.

Yes, but the fact that this worked was more by accident than by design. at::native is an internal namespace of PyTorch, and the official API is at::… without native. Does it work for you without native?

Best regards

Thomas

Hi @tom, thanks for your quick reply. Unfortunately, using at::gelu_backward_out_cuda doesn’t work for me either. However, I just found that at::gelu_backward and at::_softmax_backward_data do work. Thank you for the very helpful tips.
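For reference, a simplified version of what I am calling now (I am not sure the _softmax_backward_data signature is stable across releases, so this is just what works for me on 1.10):

#include <torch/torch.h>

// GELU backward through the public API; this dispatches to the CUDA
// kernel because the tensors live on the GPU.
torch::Tensor input = torch::randn({2, 16}, torch::kCUDA);
torch::Tensor grad_output = torch::ones_like(input);
torch::Tensor grad_input = at::gelu_backward(grad_output, input);

// Softmax backward: on 1.10 the last argument is the input tensor;
// later releases changed it, so check native_functions.yaml for your version.
int64_t dim = 1;
torch::Tensor output = at::_softmax(input, dim, /*half_to_float=*/false);
torch::Tensor grad_softmax = at::_softmax_backward_data(grad_output, output, dim, input);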

Cheers.


Glad you solved it! There should also be an out variant; the cpu/cuda selection is likely handled by the dispatcher rather than by the function name.
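Roughly, something like this (untested sketch) should pick the right kernel without naming the backend:

// The same public call runs the CPU or the CUDA kernel depending on
// where the tensors live; no need to select the backend by name.
auto x_cpu  = torch::randn({4, 8});
auto g_cpu  = torch::ones_like(x_cpu);
auto dx_cpu = at::gelu_backward(g_cpu, x_cpu);    // dispatches to the CPU kernel

auto x_cuda  = x_cpu.to(torch::kCUDA);
auto g_cuda  = g_cpu.to(torch::kCUDA);
auto dx_cuda = at::gelu_backward(g_cuda, x_cuda); // dispatches to the CUDA kernel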
