How to make sure gradients are correctly back-propagated after modifying an intermediate tensor

I am going to use a pretrained model that was trained on ImageNet, which expects its input in BGR channel order with the ImageNet means subtracted.
So I do this in forward:
x = input.clone()                       # clone so the original input is not modified
bgr = torch.split(x, 1, 1)              # split into single-channel views along dim 1
# reorder to BGR and subtract the ImageNet means in place
bgr = torch.cat((bgr[2].sub_(103.939), bgr[1].sub_(116.779), bgr[0].sub_(123.68)), 1)
…(bgr)
Will these in-place operations cause any problems with back-propagation?
And where can I learn more about this kind of issue?
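In case the in-place version is problematic, this is an out-of-place variant I am considering (a minimal sketch; the name to_bgr_minus_mean and the constant IMAGENET_BGR_MEANS are just my own, not from the pretrained model's code):

import torch

# Per-channel BGR means, same values as in my snippet above, shaped for broadcasting
IMAGENET_BGR_MEANS = torch.tensor([103.939, 116.779, 123.68]).view(1, 3, 1, 1)

def to_bgr_minus_mean(x):
    # Reorder RGB -> BGR by indexing the channel dimension; advanced indexing
    # returns a new tensor, so nothing is modified in place.
    bgr = x[:, [2, 1, 0], :, :]
    # Subtract the means out of place (broadcast over batch, height, width).
    return bgr - IMAGENET_BGR_MEANS.to(x.device, x.dtype)

# usage inside forward(), e.g.: out = self.pretrained(to_bgr_minus_mean(input))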
Thanks!