How to sync params between 2 modules?

So, I want to make self.submodule1 refer to the same weights as self.submodule0, and both modules have the same variable names.
How can I do that?

import torch
import torch.nn as nn

class SomeSubmodule(nn.Module):
	def __init__(self):
		super().__init__()
		self.layers = torch.nn.ModuleList([])
		for _ in range(5):
			self.layers.append(torch.nn.Linear(512, 512))

	def forward(self, x):
		for layer in self.layers:
			x = layer(x)
		return x

class AnotherSubmodule(nn.Module):
	def __init__(self):
		super().__init__()
		self.layers = torch.nn.ModuleList([])
		for _ in range(5):
			self.layers.append(torch.nn.Linear(512, 512))

	def forward(self, x):
		for layer in self.layers:
			x = layer(x)
		return x

class Net(nn.Module):
	def __init__(self):
		super().__init__()

		self.submodule0 = SomeSubmodule()
		self.submodule1 = AnotherSubmodule()

	def forward(self, x):
		# bla bla bla
		return x

You could initialize the linear layers in Net (or outside of it) and pass them to SomeSubmodule and AnotherSubmodule.
However, since these submodules only iterate over the linear layers, you might also just be able to reuse the linear layers directly in Net.
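
For example, something like this rough sketch of the first option (SharedSubmodule and SharedNet are just hypothetical names for variants of your modules that accept the layers as an argument):

import torch
import torch.nn as nn

class SharedSubmodule(nn.Module):
	# hypothetical variant of SomeSubmodule / AnotherSubmodule that
	# receives its layers from outside instead of creating them
	def __init__(self, layers):
		super().__init__()
		self.layers = layers

	def forward(self, x):
		for layer in self.layers:
			x = layer(x)
		return x

class SharedNet(nn.Module):
	def __init__(self):
		super().__init__()
		# create the linear layers once ...
		shared = nn.ModuleList([nn.Linear(512, 512) for _ in range(5)])
		# ... and pass the very same ModuleList to both submodules,
		# so they hold identical Parameter objects
		self.submodule0 = SharedSubmodule(shared)
		self.submodule1 = SharedSubmodule(shared)

	def forward(self, x):
		return self.submodule1(self.submodule0(x))

net = SharedNet()
print(net.submodule0.layers[0].weight is net.submodule1.layers[0].weight)  # True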

Well, it's not that easy.
My real example looks like this:

self.transformer_decoder = TransformerDecoder(
	[TransformerDecoderLayer(
		self_attention=AttentionLayer(FullAttention(), hidden, nhead),
		cross_attention=AttentionLayer(FullAttention(), hidden, nhead),
		d_model=hidden) for _ in range(1)])

and

self.transformer_decoder = RecurrentTransformerDecoder(
	[RecurrentTransformerDecoderLayer(
		self_attention=RecurrentAttentionLayer(RecurrentFullAttention(), hidden, nhead),
		cross_attention=RecurrentAttentionLayer(RecurrentCrossFullAttention(), hidden, nhead),
		d_model=hidden) for _ in range(1)])

I don’t want to change the layers’ definitions.
Both decoders have the same variable names, just different math.
So I guess I can initialize both modules, get a name-to-parameter dict from the first one, iterate over the second module, and replace its parameters with the parameters of the first one.
But I'm not sure if that's the best way.
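
Something like this is what I have in mind, as a rough sketch (tie_parameters is just a helper name I'm using here, and it assumes the parameter names of both decoders really do match):

import torch.nn as nn

def tie_parameters(src: nn.Module, dst: nn.Module) -> None:
	# make every parameter in `dst` refer to the Parameter of the same name in `src`
	src_params = dict(src.named_parameters())
	for name, _ in list(dst.named_parameters()):
		# split e.g. "layers.0.weight" into the module path "layers.0"
		# and the attribute name "weight"
		*path, attr = name.split(".")
		owner = dst
		for part in path:
			owner = getattr(owner, part)
		# re-register the source Parameter on the destination module,
		# so both decoders now hold the very same Parameter object
		setattr(owner, attr, src_params[name])

# hypothetical usage with the two decoders above:
# tie_parameters(plain_model.transformer_decoder, recurrent_model.transformer_decoder)

If I only needed the values copied once instead of kept shared, dst.load_state_dict(src.state_dict()) should also work, since the names and shapes match.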