I’m not sure how reusing parameters across different training runs would work. In the end, each training run would update the model parameters and they would diverge, so I don’t think it’s possible.
If I understand the docs correctly, variable_scope would create new variables under the specified names (scopes).
If you set reuse=True, TF would check whether a variable with this name was already created in that scope and return it instead of creating a new one (or raise a ValueError if it doesn’t exist yet):
import tensorflow as tf

tf.compat.v1.disable_eager_execution()  # run in graph mode, i.e. the TF1-style behavior discussed here

with tf.compat.v1.variable_scope("foo"):
    v = tf.compat.v1.get_variable("v", [1])
with tf.compat.v1.variable_scope("foo", reuse=True):
    v1 = tf.compat.v1.get_variable("v", [1])
assert v1 is v  # get_variable returns the same Variable object
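If reuse=True is set but the variable was never created in that scope, get_variable fails instead of silently creating a new one. A small sketch of that failure case (the scope name "bar" is just made up here):

# requesting reuse of a variable that was never created raises a ValueError
try:
    with tf.compat.v1.variable_scope("bar", reuse=True):
        w = tf.compat.v1.get_variable("w", [1])
except ValueError as err:
    print(err)  # roughly: "Variable bar/w does not exist, or was not created with tf.get_variable() ..."
# the opposite also fails: creating "foo/v" again in a non-reusing scope raises "Variable foo/v already exists"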
In PyTorch you don’t define tensors inside a graph with scopes etc., but can directly create the tensor and reuse it.
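A minimal sketch of that (the two nn.Linear modules are just for illustration):

import torch
import torch.nn as nn

# create the parameter directly; no graph, scope, or get_variable is needed
v = nn.Parameter(torch.ones(1))
v1 = v            # "reuse" is just referencing the same Python object
assert v1 is v

# the same idea lets two modules share one weight tensor
lin1 = nn.Linear(4, 4)
lin2 = nn.Linear(4, 4)
lin2.weight = lin1.weight   # both modules now hold the identical Parameter
assert lin2.weight is lin1.weight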