I have some custom operations that need to be configured using environment variables. That is easy to do in the forward pass, but much harder in the backward pass, because the orchestration happens in the C code.
With myfunction.register_backward_hook(hook), the hook is called after the output gradients are computed, so it doesn’t work. I didn’t find anything similar to register_forward_pre_hook for the backward pass.
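For context, here is a minimal sketch of the timing workaround I’ve been experimenting with: a hook registered on a module’s output tensor fires when the incoming gradient reaches that tensor, i.e. before that module’s own backward computation runs, so it behaves a bit like a backward pre-hook. (The module and the MY_BACKEND_FLAG variable name are just placeholders for illustration.)

```python
import os
import torch
import torch.nn as nn

layer = nn.Linear(4, 4)

def attach_grad_hook(module, inputs, output):
    # Tensor hooks run before output.grad_fn's backward executes,
    # so the env var is set before this layer's backward kernel runs.
    def set_env(grad):
        os.environ["MY_BACKEND_FLAG"] = "backward"  # placeholder variable
        return grad
    output.register_hook(set_env)

layer.register_forward_hook(attach_grad_hook)

os.environ["MY_BACKEND_FLAG"] = "forward"
out = layer(torch.randn(2, 4, requires_grad=True))
out.sum().backward()
print(os.environ["MY_BACKEND_FLAG"])  # set during the backward pass
```

This only covers per-module timing, though, and I’m not sure it’s reliable for ops orchestrated entirely on the C side.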
I can’t use a custom torch.autograd.Function because cuDNN is not called inside it (https://github.com/pytorch/pytorch/issues/26537).
How can I set my environment variables in the backward pass?