Gradient computation in the backward pass for a custom activation

We have defined a custom activation function, for example

y = relu(x / sigma) = max(0, x / sigma)

Here x is the output of a conv layer, and sigma's value depends on the weight of that same conv layer.
In the backward pass we need to update the weight of the conv layer, so in theory we have to compute the partial derivatives with respect to both x and sigma. Can PyTorch do this automatically?
Or do we need to define the derivative formula ourselves, and if so, where should we define it?
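For concreteness, here is a minimal sketch of the setup described above. The module name and the choice of deriving sigma from the weight's standard deviation are only illustrative assumptions, not part of the original question:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvWithScaledReLU(nn.Module):
    """Illustrative sketch: conv layer followed by y = relu(x / sigma),
    where sigma is computed from the conv weight itself."""
    def __init__(self, in_ch=3, out_ch=8):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)

    def forward(self, inp):
        x = self.conv(inp)                     # x: output of the conv layer
        # sigma derived from the conv weight (std() is just one possible choice)
        sigma = self.conv.weight.std() + 1e-6  # small epsilon to avoid division by zero
        return F.relu(x / sigma)               # y = max(0, x / sigma)
```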

PyTorch can do this automatically, out of the box. As long as y is built from differentiable tensor operations, autograd records how the output depends on both x and sigma and backpropagates the gradient into the conv weight through both paths; you do not need to write the derivative formula yourself.
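As a quick check, assuming the sketch module above, calling backward() populates the conv weight's gradient with no manual derivative:

```python
model = ConvWithScaledReLU()
inp = torch.randn(1, 3, 16, 16)

out = model(inp)
loss = out.sum()
loss.backward()

# autograd has propagated through both x and sigma,
# so the conv weight already has its full gradient
print(model.conv.weight.grad.shape)  # torch.Size([8, 3, 3, 3])
```

You would only need to define a backward yourself (via a custom torch.autograd.Function) if the forward pass used operations autograd cannot differentiate, which is not the case here.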