Is it possible to change a tensor's learning rate without changing its value?

I'm currently working on a problem that requires per-sample weights within a batch. If the weights are applied to the loss value, the network's output is unaffected, so that case can be handled with the existing approach, e.g. as sketched below.
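For context, here is a minimal sketch of the loss-weighting case I mean (the model, weights, and shapes are just placeholders):

```python
import torch
import torch.nn as nn

# Per-sample weighting applied to the loss: keep the per-sample losses
# with reduction='none', scale them, then reduce.
criterion = nn.CrossEntropyLoss(reduction='none')

logits = torch.randn(4, 10, requires_grad=True)   # hypothetical batch of logits
targets = torch.randint(0, 10, (4,))
weights = torch.tensor([0.5, 1.0, 2.0, 0.0])      # hypothetical sample weights

loss = (criterion(logits, targets) * weights).mean()
loss.backward()   # gradients are scaled per sample; forward outputs untouched
```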

But I want to weight the inner nodes of the network, where the output does matter. Is there a way to define a mechanism so that the gradients of those nodes are multiplied by a weight without changing the forward output?

Please let me know, thanks in advance.

Looks like some quick math solves my problem: adding the (detached) difference between the old and new values restores the original forward value while still ensuring the gradient is multiplied by the weight.
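A minimal sketch of what I mean, assuming PyTorch (the function name is mine): the detached difference cancels the scaling in the forward pass, while only the scaled term carries gradient.

```python
import torch

def scale_grad(x, w):
    # Forward: w*x + (x - w*x) == x, so the value is unchanged.
    # Backward: the detached term has no gradient, so dy/dx == w.
    return w * x + (x - w * x).detach()

# Usage sketch
x = torch.randn(4, requires_grad=True)
w = torch.tensor([0.5, 1.0, 2.0, 0.0])   # per-element gradient weights
y = scale_grad(x, w)

print(torch.allclose(y, x))   # True: output value is unchanged
y.sum().backward()
print(x.grad)                 # equals w: gradient scaled element-wise
```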