Lasagne to PyTorch porting

I am trying to port my model from Lasagne to PyTorch. It has conv layers with tied weights. It looks like the only way to share weights in PyTorch is to use the functional conv interface (torch.nn.functional) and pass the weights in as a parameter, but that shares both the weights and the bias. Also, the PyTorch version does not produce results similar to the Lasagne one. Can anyone suggest possible reasons for this?
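For reference, this is roughly what my current functional version looks like (a minimal sketch, not my actual model; the channel counts and names are placeholders):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TiedConvNet(nn.Module):
    """Two branches driven by one weight/bias pair via F.conv2d."""
    def __init__(self, in_ch=3, out_ch=16, k=3):
        super().__init__()
        # Single weight and bias, registered once and used by both calls below.
        self.weight = nn.Parameter(torch.empty(out_ch, in_ch, k, k))
        self.bias = nn.Parameter(torch.zeros(out_ch))
        nn.init.kaiming_uniform_(self.weight)

    def forward(self, x1, x2):
        # Both functional calls see the same parameters, so the convs stay tied,
        # but weight AND bias are shared.
        y1 = F.conv2d(x1, self.weight, self.bias, padding=1)
        y2 = F.conv2d(x2, self.weight, self.bias, padding=1)
        return y1, y2
```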

You can also simply reuse the same conv layer again and again on different inputs (instead of trying to create two separate "tied-weight" conv layers).
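Roughly like this (a minimal sketch; the module and shapes are just placeholders):

```python
import torch.nn as nn

class SharedConvNet(nn.Module):
    """One nn.Conv2d instance applied to several inputs."""
    def __init__(self, in_ch=3, out_ch=16):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)

    def forward(self, x1, x2):
        # Calling the same module twice reuses its weight and bias;
        # gradients from both branches accumulate into the same parameters.
        return self.conv(x1), self.conv(x2)
```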

Thanks for the reply. Using the same conv layer on different inputs shares not only the weights but also the bias. To share only the weights, is disabling the bias in the conv layers and adding a separate learnable bias per branch the right way (see the sketch below)? Also, does the batch norm implementation in PyTorch differ from the one in Lasagne?
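Concretely, this is what I have in mind (a minimal sketch; the per-branch bias parameters added after a bias-free conv are my assumption about how to do it):

```python
import torch
import torch.nn as nn

class SharedWeightSeparateBias(nn.Module):
    """Tied conv weights, but an independent learnable bias per branch."""
    def __init__(self, in_ch=3, out_ch=16, k=3):
        super().__init__()
        # bias=False so the shared conv contributes only the weights.
        self.conv = nn.Conv2d(in_ch, out_ch, k, padding=1, bias=False)
        # One learnable bias per branch, broadcast over N, H, W.
        self.bias1 = nn.Parameter(torch.zeros(out_ch))
        self.bias2 = nn.Parameter(torch.zeros(out_ch))

    def forward(self, x1, x2):
        y1 = self.conv(x1) + self.bias1.view(1, -1, 1, 1)
        y2 = self.conv(x2) + self.bias2.view(1, -1, 1, 1)
        return y1, y2
```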