Creating a network with fixed weights in higher layers, copied from an initial layer

Hello,

I want to define a network with multiple layers where a few of the final layers have fixed weights (not trained) that are essentially copies of some initial layers.
e.g.,

input -> layer1 -> activation -> layer2 -> activation -> layer3 -> activation -> layer4 (fixed, same weights as layer2) -> output

i.e., in each training step I want to update the weights of layers 1, 2, and 3 but not layer 4; layer 4 should automatically copy layer 2's weights after the update step, roughly as sketched below.
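For concreteness, the copy I have in mind after each optimizer step would look roughly like this (a sketch; `model` is an instance of the module defined below):

```python
# after optimizer.step(): overwrite layer4 with layer2's updated weights
with torch.no_grad():
    model.hidden4.weight.copy_(model.hidden2.weight)
    model.hidden4.bias.copy_(model.hidden2.bias)
```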

My current attempt (imports and example layer sizes added for completeness):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.hidden1 = nn.Linear(100, 50)  # example sizes
        self.hidden2 = nn.Linear(50, 50)
        self.hidden3 = nn.Linear(50, 50)
        self.hidden4 = nn.Linear(50, 50)
        self.output = nn.Linear(50, 10)
        # naive attempt: point layer4 at layer2's weight Parameter
        self.hidden4.weight = self.hidden2.weight

    def forward(self, x):
        x = F.relu(self.hidden1(x))
        x = F.relu(self.hidden2(x))
        x = F.relu(self.hidden3(x))
        x = F.relu(self.hidden4(x))
        return F.log_softmax(self.output(x), dim=1)
```

The above procedure doesn't seem to work; the computational graph seems to ignore the path where layer2's weights are used a second time.

Any help on how to accomplish this in PyTorch?
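One idea I've been considering is to drop `hidden4` entirely and reuse `hidden2`'s weights in `forward` via `F.linear`, detaching them so the second use is treated as a constant (a sketch, not tested):

```python
def forward(self, x):
    x = F.relu(self.hidden1(x))
    x = F.relu(self.hidden2(x))
    x = F.relu(self.hidden3(x))
    # second use of layer2's current weights, detached so no gradient
    # flows back to layer2 through this path
    x = F.relu(F.linear(x, self.hidden2.weight.detach(), self.hidden2.bias.detach()))
    return F.log_softmax(self.output(x), dim=1)
```

Would something along these lines be the right way to do it?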