Hi,

First, a simple pre-question before the main one:

`In train mode (a model with dropout, batch norm, etc.), is it true that if y = model(x), then model((x,x)) = (y, y') with y = y', everything else also being identical (dropout masks, the backward path, etc.)?`
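To make the pre-question concrete, here is a toy check of what I mean by feeding `(x, x)` as one batch (the small two-layer model and sizes are made up, not my actual network):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(8, 256), nn.Dropout(p=0.5))
model.train()  # dropout active

x = torch.randn(1, 8)
out = model(torch.cat([x, x], dim=0))  # batch of two identical inputs
y, y2 = out[0], out[1]

# Dropout draws an independent mask per sample in the batch,
# so in train mode y and y2 generally differ:
print(torch.equal(y, y2))

model.eval()  # dropout disabled: identical inputs now give identical outputs
y_eval, y2_eval = model(torch.cat([x, x], dim=0))
print(torch.equal(y_eval, y2_eval))  # → True
```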

**So to the main question:**

I have the following forward scheme; the light-blue and blue paths are both parts of a single forward pass of my GAIN model (see the Guided Attention Inference Network paper for details, which are not necessary for my question):

Note that the second input image is produced, differentiably, by the first blue path.

Now I want the magnitude of the gradients along the light-blue path to be 1/10 of the original, while the regular blue path stays at the usual 1.
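Just to pin down what I mean by "magnitude 1/10": for a single tensor this kind of scaling could also be done with a gradient hook (toy scalars here, unrelated to my model):

```python
import torch

x = torch.tensor(2.0, requires_grad=True)
h = x * 3                           # stand-in for the light-blue path
h.register_hook(lambda g: g * 0.1)  # scale gradients flowing back through h
loss = h * 5
loss.backward()
print(x.grad.item())                # 5 * 0.1 * 3 ≈ 1.5
```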

So I do the following:

```python
(y, y2) = model((x, x))           # y == y2 by the pre-question assumption above

y, y2 = y, y2.detach()

light_blue_loss = loss_fn(y2, _)  # _ wildcard for something, irrelevant

# backward() expects a tensor for `gradient`; keep the graph for the second backward
light_blue_loss.backward(gradient=torch.tensor(-0.9), retain_graph=True)

total_loss = loss_fn(y, _)        # _ wildcard for something, irrelevant

total_loss.backward()             # same as total_loss.backward(gradient=torch.tensor(1.))
```

Now, is it true that for the light-blue path (on the detached input, per the scheme above) the gradients are first accumulated multiplied by the constant -0.9, so that after accumulating the gradients from `total_loss.backward()` with the default gradient of 1, the light-blue path ends up influenced by gradients of magnitude 0.1 (i.e. 1 + (-0.9)), while the rest of the path gets gradients of magnitude 1?
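The accumulation arithmetic I have in mind, on a toy scalar (made-up numbers, just the two-backward pattern from above):

```python
import torch

x = torch.tensor(2.0, requires_grad=True)
loss = x * 3                       # d loss / d x = 3

# first backward with gradient = -0.9 (keep the graph for the second pass)
loss.backward(gradient=torch.tensor(-0.9), retain_graph=True)
# second backward with the default gradient = 1
loss.backward()

print(x.grad.item())               # 3 * (-0.9) + 3 * 1 = 3 * 0.1 ≈ 0.3
```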

I'd be grateful for your answer, thank you.