How to reuse model variables?

I want to duplicate multiple models and train each with a different dataloader.

Simply cloning the model for each dataloader would waste a correspondingly large amount of memory.

So I’m looking for a way to use memory efficiently.

I know that TensorFlow has a reuse option, e.g. variable_scope(reuse=True).

Is there any way to do something similar in PyTorch?

Thank you.

I’m not sure how reusing parameters in different training runs would work.
In the end, each training loop would update the model parameters, and they would diverge, so I don’t think it’s possible.
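As a minimal sketch of why that is (the layer sizes below are arbitrary): if two modules share the same nn.Parameter, an optimizer step taken through one of them is immediately visible in the other, so they cannot learn different things from different dataloaders.

import torch
import torch.nn as nn

# Two linear layers that share the same weight tensor (sizes are arbitrary).
layer_a = nn.Linear(4, 2, bias=False)
layer_b = nn.Linear(4, 2, bias=False)
layer_b.weight = layer_a.weight  # both modules now hold the same nn.Parameter

opt_a = torch.optim.SGD(layer_a.parameters(), lr=0.1)

loss = layer_a(torch.randn(8, 4)).sum()
loss.backward()
opt_a.step()  # updates the shared weight

# layer_b "sees" the update made through layer_a's optimizer
print(layer_a.weight is layer_b.weight)  # True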

Thank you for your answer.

Then I will change the question a little bit.

Is there a function in PyTorch that works similarly to TensorFlow's variable_scope reuse option?

If I understand the docs correctly, variable_scope would create new variables under the specified names (scopes).
If you set reuse=True, TF would check if a variable with this name was already created and return it instead of creating a new one (or raise an error?).

with tf.compat.v1.variable_scope("foo"):
    v = tf.compat.v1.get_variable("v", [1])
with tf.compat.v1.variable_scope("foo", reuse=True):
    v1 = tf.compat.v1.get_variable("v", [1])
assert v1 == v

In PyTorch you don’t define tensor creation in a graph with scopes etc.; you can directly create the tensor and reuse it:

x = torch.randn(1)
y = x + 1
# reuse x
z = x + 2

Thank you for the answer!

By the way, be careful with in-place operators such as nn.ReLU(inplace=True), div_, and mul_ :wink:
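For example (just a small sketch reusing the snippet above): an in-place op overwrites the reused tensor, so everything computed from it afterwards sees the modified values.

import torch

x = torch.randn(1)
y = x + 1
x.mul_(2)   # in-place: x itself is overwritten
z = x + 2   # z is computed from the doubled x, not from the original values

# The same applies to layers like nn.ReLU(inplace=True): they overwrite their
# input tensor, and autograd will complain if that tensor is still needed for
# the backward pass.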

I’m sorry, but I’ll ask one more question.

In the example above, is the memory of reused x shared?

Yes, x is the same tensor, so the same memory and all its attributes will be reused.
The outputs y and z are of course different tensors.
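If you want to verify that, one way (just a quick check) is to compare data_ptr() values: both additions read from the same x buffer, while y and z get their own storage.

import torch

x = torch.randn(1)
y = x + 1
z = x + 2

print(x.data_ptr())                   # the same buffer is read by both additions
print(y.data_ptr() == x.data_ptr())   # False: y has its own memory
print(z.data_ptr() == x.data_ptr())   # False: z has its own memory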