I have an L2 normalisation in my forward pass, and I have a doubt about what happens during the backward pass.

inp = self.encode(inp)
inp = (7 ** 0.5) * (inp / inp.norm(p=2, dim=-1)[:, None])
inp = self.decode(inp)

So here's my understanding of how it should work: since I'm scaling the input (dividing by its norm and multiplying by a constant), the gradients flowing back should be scaled accordingly, right? Does autograd take care of that automatically, or should I write my own forward and backward pass by defining this as a new layer?
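For context, here is a minimal standalone sketch of what I mean (the toy shapes and the gradcheck call are just my attempt at checking it, not part of my actual model), comparing autograd's analytical gradients against numerical ones for the same normalisation:

import torch

# toy input: batch of 4 vectors of size 7, double precision for gradcheck
x = torch.randn(4, 7, dtype=torch.double, requires_grad=True)

def l2_scale(inp):
    # same operation as in my forward; keepdim=True is equivalent to [:, None] for a 2-D input
    return (7 ** 0.5) * (inp / inp.norm(p=2, dim=-1, keepdim=True))

# gradcheck returns True if autograd's gradients match the numerical ones
print(torch.autograd.gradcheck(l2_scale, (x,)))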

Thanks