Applying a function to a tensor without including it in autograd

Let’s say I have an n-layer neural network. After running l layers, I want to apply some transformation to the l-th layer’s output, without including that transformation in backpropagation.

For example:

output_layer_n = self.LinearLayer(output_layer_prev)
# apply some transformation to output_layer_n,
# but I don't want autograd to track this transformation; it has no parameters
output_layer_n.data = TransformationFunction(output_layer_n.data)

So how should I go about implementing this? What I want is for the gradient computation not to account for TransformationFunction() in my code.

You can simply wrap the call in a no_grad() block like this:

with torch.no_grad():
    output_layer_n.copy_(TransformationFunction(output_layer_n))

Note that you need to copy into the existing tensor.
If you rebind the Python variable output_layer_n instead, you will get a new tensor that does not require gradients and that is unrelated to the old output_layer_n (since ops inside the no_grad() block are not tracked).
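For completeness, here is a minimal self-contained sketch of that pattern (the clamp standing in for TransformationFunction and the layer sizes are placeholders, not taken from the original code): the copy happens in place under no_grad(), so the later backward() still reaches the linear layer's parameters but never sees the transformation.

import torch
import torch.nn as nn

# Placeholder parameter-free transformation; any function of the values would do.
def TransformationFunction(t):
    return torch.clamp(t, 0.0, 1.0)

linear = nn.Linear(4, 3)   # stand-in for self.LinearLayer
x = torch.randn(2, 4)      # stand-in for output_layer_prev

output_layer_n = linear(x)

# Apply the transformation without recording it in the autograd graph;
# copy the result back into the same tensor instead of rebinding the name.
with torch.no_grad():
    output_layer_n.copy_(TransformationFunction(output_layer_n))

# Downstream ops still backpropagate into the linear layer's parameters,
# while the transformation itself is treated as if it were the identity.
loss = output_layer_n.sum()
loss.backward()
print(linear.weight.grad is not None)  # prints: True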