Adding an input layer after a hidden layer

I’m pretty new to this and I’m not sure if/how this would be possible. The architecture I’m planning to use feeds one set of inputs into a hidden layer, then joins a second input with that hidden layer’s outputs, and feeds the combination into a second hidden layer.
Is there a way for me to do this, or do I need a workaround where I use two networks?

Here’s a picture of the architecture:

You could slice the input tensor, feed the first part through the first linear layer, and concatenate its output with the remaining part before the next layer. Here is a small example:

import torch
import torch.nn as nn
import torch.nn.functional as F

class MyModel(nn.Module):
    def __init__(self):
        super(MyModel, self).__init__()
        self.fc1 = nn.Linear(8, 8)
        self.fc2 = nn.Linear(12, 12)
        self.fc3 = nn.Linear(12, 20)
        
    def forward(self, x):
        # Pass the first 8 features of x through the first hidden layer
        x1 = F.relu(self.fc1(x[:, :8]))
        # Concatenate the hidden output with the remaining 4 features of x
        x = torch.cat((x1, x[:, 8:]), dim=1)
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return x

model = MyModel()
x = torch.randn(1, 12)
output = model(x)
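If the second input arrives as its own tensor rather than as extra columns of x, the same idea works with a forward that takes two arguments. This is just a minimal sketch; the name TwoInputModel and the feature sizes (8 for the first input, 4 for the second) are placeholders for your actual dimensions:

import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoInputModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(8, 8)    # first hidden layer for the first input
        self.fc2 = nn.Linear(12, 12)  # second hidden layer sees 8 + 4 features
        self.fc3 = nn.Linear(12, 20)

    def forward(self, x1, x2):
        # Transform the first input with the first hidden layer
        h = F.relu(self.fc1(x1))
        # Join the hidden activations with the second (raw) input
        h = torch.cat((h, x2), dim=1)
        h = F.relu(self.fc2(h))
        return self.fc3(h)

model = TwoInputModel()
out = model(torch.randn(1, 8), torch.randn(1, 4))

torch.cat just merges the two streams along the feature dimension, so the second hidden layer sees both the transformed first input and the untouched second input, which matches the architecture you described.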