About weight sharing between modules

I am working on a multi-agent reinforcement learning setup in which each agent has its own Q-network. However, the network consists of several layers whose parameters are shared among agents. I have read a lot of discussions about weight sharing and am confused now. I wonder if the code below will work? Thanks.

PS: the embedding layer and the attention layer are the layers whose weights I want to share among agents.

    import torch.nn as nn


    class AgentQNetwork(nn.Module):
        def __init__(self, layer1, layer2, input_dim, output_dim):
            super().__init__()
            # layer1 (embedding) and layer2 (attention) are passed in from
            # outside, so every agent constructed with the same instances
            # shares their parameters
            self.layer1 = layer1
            self.layer2 = layer2
            # per-agent output head, not shared
            self.linear1 = nn.Linear(input_dim, output_dim)
            self.relu = nn.ReLU()

        def forward(self, x, neighbors):
            x_1 = self.layer1(x)
            # embed every neighbor with the same shared layer
            neighbors_1 = [self.layer1(neighbor) for neighbor in neighbors]
            out = self.layer2(x_1, neighbors_1)
            out = self.linear1(out)
            out = self.relu(out)
            return out

Weights are being shared here. The layer defined in __init__() as self.layer1 is a single module instance, so every call to it, including the call on each tensor in the neighbors list, reuses the same parameters.
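
To extend that to sharing across agents, you can build the embedding and attention layers once and pass the same instances into every agent's network. Here is a minimal sketch, assuming hypothetical SharedEmbedding and SharedAttention stand-ins for your real layers and the AgentQNetwork class from your snippet; it checks that the shared layers' parameters are literally the same tensors across agents, while each agent keeps its own linear1 head:

    import torch
    import torch.nn as nn

    # Hypothetical stand-ins for the real shared layers, just so the
    # sketch runs end to end.
    class SharedEmbedding(nn.Module):
        def __init__(self, in_dim, emb_dim):
            super().__init__()
            self.proj = nn.Linear(in_dim, emb_dim)

        def forward(self, x):
            return torch.relu(self.proj(x))

    class SharedAttention(nn.Module):
        def __init__(self, emb_dim):
            super().__init__()
            self.score = nn.Linear(emb_dim, 1)

        def forward(self, x_emb, neighbor_embs):
            # toy additive attention over the neighbor embeddings
            stacked = torch.stack(neighbor_embs, dim=1)          # (B, N, E)
            weights = torch.softmax(self.score(stacked), dim=1)  # (B, N, 1)
            return x_emb + (weights * stacked).sum(dim=1)        # (B, E)

    # Build the shared layers ONCE ...
    embedding = SharedEmbedding(in_dim=8, emb_dim=16)
    attention = SharedAttention(emb_dim=16)

    # ... and hand the same instances to every agent's Q-network.
    agents = [AgentQNetwork(embedding, attention, input_dim=16, output_dim=4)
              for _ in range(3)]

    # Shared layers: identical parameter tensors across agents.
    print(agents[0].layer1.proj.weight is agents[1].layer1.proj.weight)  # True
    # Per-agent head: separate parameters.
    print(agents[0].linear1.weight is agents[1].linear1.weight)          # False

    # Gradients from every agent's loss accumulate into the shared weights.
    x = torch.randn(2, 8)
    neighbors = [torch.randn(2, 8) for _ in range(4)]
    loss = sum(agent(x, neighbors).sum() for agent in agents)
    loss.backward()

One thing to keep in mind: because the shared modules are assigned as attributes, they appear in every agent's .parameters(), so giving each agent its own optimizer would step the shared weights once per agent. Collecting the parameters once (or deduplicating them) avoids that.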

Thanks for your answers. I will give it a try :)