I have a situation that looks something like this:
```
z = f(x_1, ..., x_n, y)
w = g(z)
z.backward(all_trainable_variables_except_y())
w.backward(y)
```
I can't detach y, because I still need the gradient of w with respect to y to flow through z.
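Concretely, here is a minimal stand-in for the setup (the shapes and the bodies of f and g are made up):

```python
import torch

# x_1..x_n and y are the trainable tensors; f and g below are
# placeholders for the real computation.
xs = [torch.randn(3, requires_grad=True) for _ in range(4)]
y = torch.randn(3, requires_grad=True)

z = (torch.stack(xs).sum(dim=0) * y).sum()  # z = f(x_1, ..., x_n, y)
w = z ** 2                                  # w = g(z)
```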
In TensorFlow, I would create the variables under different scopes and then do something like
```python
optimizer.minimize(z, var_list=tf.trainable_variables(x_scope))
optimizer.minimize(w, var_list=tf.trainable_variables(y_scope))
```
How would I handle this in torch?
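Is something like this, continuing the sketch above with `torch.autograd.grad`, the intended way to do it?

```python
# Gradients of z w.r.t. the x's only; y stays in the graph but
# receives no gradient here. retain_graph keeps the graph alive
# so w can still backprop through z afterwards.
x_grads = torch.autograd.grad(z, xs, retain_graph=True)

# Gradient of w w.r.t. y only.
(y_grad,) = torch.autograd.grad(w, [y])
```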