I’m trying to add some functionality to the convolution code, but I only want it to run if AMP is enabled. I attempted to use is_enabled() from ATen/autocast_mode.h, but this flag doesn’t change whether or not I use a model with autocasting; it seems to always return false. Is there a way to query whether autocasting is enabled from within the C++ code in the ATen subcomponent?
aten::is_autocast_enabled() should work in libtorch.
Do you happen to know what the include statement is for that? I got these errors:

pytorch/aten/src/ATen/native/miopen/Conv_miopen.cpp:767:39: error: ‘aten’ has not been declared
767 | std::cout << "autocastEnabled: " << aten::is_autocast_enabled() << std::endl;

pytorch/aten/src/ATen/native/miopen/Conv_miopen.cpp:768:45: error: ‘is_autocast_enabled’ is not a member of ‘c10::aten’
768 | std::cout << "autocastEnabled: " << aten::is_autocast_enabled() << std::endl;

To clarify, I am trying to modify code within PyTorch itself.
I might be wrong; the posted method could be JIT-specific. Could you check at::autocast::is_enabled() instead? (I’m not in front of my workstation, otherwise I would confirm it myself.)
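For reference, a minimal sketch of what the include and call would look like, assuming a PyTorch source build where at::autocast::is_enabled() is declared in ATen/autocast_mode.h (not compiled here, since it needs the PyTorch tree):

```cpp
#include <iostream>

#include <ATen/autocast_mode.h>

// Somewhere inside the convolution implementation, e.g. Conv_miopen.cpp.
// Note the at::autocast namespace, rather than aten:: or c10::aten::,
// which is what produced the "has not been declared" errors above.
void log_autocast_state() {
  std::cout << "autocastEnabled: " << std::boolalpha
            << at::autocast::is_enabled() << std::endl;
}
```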
I tried that as well. While it did return an actual value, that value was false whether or not I used a model with autocasting.