Gradient propagation with torch.where

I want to use `z = torch.where(x > 0.1, x, 0)` in my custom activation function. Is this differentiable, so that the loss will backpropagate through it? If not, what would be the alternative?
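
For reference, here is a minimal, self-contained sketch of what I mean (the `ThresholdActivation` name is just a placeholder for my custom activation), with a quick backward pass to inspect which elements receive a gradient:

```python
import torch
import torch.nn as nn


class ThresholdActivation(nn.Module):
    """Passes x through where x > 0.1 and outputs 0 elsewhere."""

    def forward(self, x):
        # torch.zeros_like keeps dtype/device consistent with x;
        # recent PyTorch versions also accept a plain scalar 0 here.
        return torch.where(x > 0.1, x, torch.zeros_like(x))


if __name__ == "__main__":
    act = ThresholdActivation()
    x = torch.tensor([-0.5, 0.05, 0.2, 1.0], requires_grad=True)
    z = act(x)
    loss = z.sum()
    loss.backward()
    print(x.grad)  # which elements get a nonzero gradient?
```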