Let’s say I have an n-layer neural network. After running l layers, I want to apply some transformation to the l-th layer’s output, without including that transformation in backpropagation.
For example:
```python
output_layer_n = self.LinearLayer(output_layer_prev)
# apply some transformation to output_layer_n, but don't want autograd
# taken w.r.t. this transformation; it has no parameters
output_layer_n.data = TransformationFunction(output_layer_n.data)
```
So how should I go about implementing this? What I want is for the gradient computation not to account for TransformationFunction() in my code.
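For concreteness, here is a minimal, self-contained sketch of the behavior I'm after (the `clamp` call is just a stand-in for my parameter-free TransformationFunction, and the shapes are made up): mutating `.data` changes the tensor's values but leaves the recorded graph untouched, so backward acts as if the transformation were never applied.

```python
import torch

x = torch.randn(3, requires_grad=True)
y = x * 2  # stand-in for the l-th layer's output

# apply a parameter-free transform to the values only; autograd still
# sees y as 2*x, so the transform contributes nothing to gradients
y.data = torch.clamp(y.data, -1.0, 1.0)  # stand-in for TransformationFunction

y.sum().backward()
# gradient of sum(2*x) w.r.t. x is 2 for every element,
# unaffected by the clamp applied to y's values
assert torch.allclose(x.grad, torch.full_like(x, 2.0))
```

Is this `.data` pattern the right way to do it, or is there a cleaner mechanism (e.g. `torch.no_grad()` / `detach()`) for keeping such an op out of autograd?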