Suppose my model consists of three parts:
h = f1(x)
h' = f2(h)
y = f3(h')
f1 is a pre-trained model (I want to keep it frozen during training),
f2 is not differentiable and doesn’t have any learnable parameters, and
f3 is the only part that I want to learn.
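For concreteness, here is a minimal sketch of what I mean; the actual modules are different, and the argmax/one-hot in f2 is just a stand-in for my non-differentiable step:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# f1: pre-trained model, to be kept frozen (placeholder module)
f1 = nn.Linear(10, 5)
for p in f1.parameters():
    p.requires_grad = False

# f2: non-differentiable step with no learnable parameters
# (placeholder: hard argmax turned back into a one-hot float tensor)
def f2(h):
    idx = h.argmax(dim=-1)
    return F.one_hot(idx, num_classes=h.size(-1)).float()

# f3: the only part I actually want to train (placeholder module)
f3 = nn.Linear(5, 2)

x = torch.randn(3, 10)
h = f1(x)        # h  = f1(x)
h_prime = f2(h)  # h' = f2(h)
y = f3(h_prime)  # y  = f3(h')
```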
When I run backward() on the loss, will autograd try to backprop gradients all the way back to x? How can I ensure that backprop stops at h' (the input to f3)?
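In case it helps, this is roughly the training step I have in mind (continuing the sketch above; the MSE loss, dummy target, and SGD optimizer are just placeholders):

```python
optimizer = torch.optim.SGD(f3.parameters(), lr=0.1)
loss_fn = nn.MSELoss()
target = torch.randn(3, 2)  # dummy target

optimizer.zero_grad()
y = f3(f2(f1(x)))
loss = loss_fn(y, target)
loss.backward()   # <- this is the call I am asking about
optimizer.step()
```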
I know how to do this in Torch, but I am new to PyTorch. Thanks in advance!