Backpropagation through a module without updating its parameters

I would like to forward a tensor through a module so that gradients backpropagate through the module to its input, but without accumulating gradients on the module's parameters.
The parameters of a module can be frozen by setting `requires_grad = False` on each parameter, but that is global state on the module: I need other computations to keep updating these parameters concurrently, so freezing them is not an option.
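To make the problem concrete, here is a minimal sketch of the freezing approach described above (the `nn.Linear` module and tensor shapes are placeholders, not my actual model). It gives the gradient behavior I want for this one forward pass, but because the flag stays set on the parameters, it would also block every other computation from training them:

```python
import torch
import torch.nn as nn

# Placeholder module; assume any nn.Module behaves the same way here.
module = nn.Linear(4, 4)

# Freezing: stop gradient accumulation on the parameters. The drawback is
# that this is persistent state on the module, so it also affects any
# other computation that runs while the flag is set.
for p in module.parameters():
    p.requires_grad_(False)

x = torch.randn(2, 4, requires_grad=True)
module(x).sum().backward()

print(x.grad is not None)                                # gradient reaches the input
print(all(p.grad is None for p in module.parameters()))  # parameters get no gradient
```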
How can this be implemented?