How to share weights between two nets?

In Torch, we can share parameters between two nets like this:

local r1 = nn.Recurrent(
        opt.embeddingSize,
        cnn,
        h2h,
        nn.Identity(),
        opt.sampleSeqLength)
-- Cloning with parameter names makes r2's cnn and h2h share their
-- weight/bias tensors (and gradient buffers) with r1's originals.
local r2 = nn.Recurrent(
        opt.embeddingSize,
        cnn:clone('weight','bias','gradWeight','gradBias'),
        h2h:clone('weight','bias','gradWeight','gradBias'),
        nn.Identity(),
        opt.sampleSeqLength)

So, how can we share weights between two nets in PyTorch?

I think you can refer to this topic:

You can simply use the same net on different outputs. In Torch, each net is tied to a particular output; in PyTorch it is not.
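
For example, here is a minimal sketch (the layer sizes and variable names are made up for illustration): one module applied to two different inputs computes both outputs with the same parameters, and backward accumulates gradients from both uses.

import torch
import torch.nn as nn

shared = nn.Linear(10, 5)  # one module, one set of parameters

x1 = torch.randn(3, 10)
x2 = torch.randn(3, 10)

# The same module is applied to two different inputs, so both
# outputs are computed with the exact same weight and bias.
out = shared(x1) + shared(x2)
out.sum().backward()

# Gradients from both uses accumulate into the shared parameters.
print(shared.weight.grad.shape)  # torch.Size([5, 10])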

Thanks a lot! I got it.

Hi, can you explain it? Thanks

We can implement weight sharing among the innermost layers by simply reusing the same Module multiple times when defining the forward pass. Please see https://pytorch.org/tutorials/beginner/examples_nn/dynamic_net.html#pytorch-control-flow-weight-sharing
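
Along the lines of that tutorial, here is a rough sketch (the class and attribute names are my own, not from the tutorial): the forward pass applies the same middle layer a random number of times, so every application goes through one set of weights.

import random
import torch
import torch.nn as nn

class DynamicNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.input_linear = nn.Linear(10, 10)
        self.middle_linear = nn.Linear(10, 10)
        self.output_linear = nn.Linear(10, 1)

    def forward(self, x):
        h = torch.relu(self.input_linear(x))
        # Reusing self.middle_linear here means every iteration
        # runs through the same weight and bias tensors.
        for _ in range(random.randint(0, 3)):
            h = torch.relu(self.middle_linear(h))
        return self.output_linear(h)

net = DynamicNet()
y = net(torch.randn(4, 10))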

Yes, you're right; this is how PyTorch creates a model with shared weights.

Do you use Torch instead of PyTorch?

No, I use PyTorch instead of Torch.