What is nn.Identity() used for?

@ptrblck does that mean its best use is as a placeholder, e.g. when we connect a previous layer's output to the output of a new layer? For example, in a residual connection we sometimes connect the output of a previous layer to the output of a new layer. If this is what nn.Identity() is designed for, then we can achieve the same thing as in my following code.

I think I need to read more about nn.Identity(); it may take me some time to understand its underlying mechanism.
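From what I understand so far, nn.Identity() is simply a module whose forward pass returns its input unchanged. A minimal sketch (the layer names here are just examples, not from any particular model):

```python
import torch
import torch.nn as nn

# nn.Identity() is a no-op module: forward(x) returns x unchanged.
identity = nn.Identity()
x = torch.randn(4, 10)
out = identity(x)
print(torch.equal(out, x))  # True

# A common use: swap a layer for a no-op without touching forward().
model = nn.Sequential(nn.Linear(10, 10), nn.ReLU())
model[1] = nn.Identity()  # drop the activation, keep the module structure
```

So on its own it does nothing to the data; its value is structural, e.g. as a placeholder shortcut branch.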

However, could you please confirm that the following code has skip connections res and res1, which I use to avoid vanishing gradients? I join the output of layer 1 (res) and the output of layer 2 (res1) with the output of layer 6 via F.relu(torch.cat([x, res, res1], 1)). Is this the sort of way we join layers, skipping from one to another, in order to avoid vanishing gradients?

import torch
import torch.nn as nn
import torch.nn.functional as F

class multiNetA(nn.Module):
    def __init__(self, num_feature, num_class):
        super().__init__()

        self.lin1 = nn.Linear(num_feature,50)
        self.lin2 = nn.Linear(50, 30)
        self.lin6 = nn.Linear(30, 20)
        self.lin7 = nn.Linear(100, num_class)

        self.bn0 = nn.BatchNorm1d(num_feature)
        self.bn1 = nn.BatchNorm1d(50)
        self.bn2 = nn.BatchNorm1d(30)
        self.bn6 = nn.BatchNorm1d(20)
        self.bn7 = nn.BatchNorm1d(9)  # note: currently unused in forward()

    def forward(self, x):
        x = self.bn0(x)
        x = self.lin1(x)
        x = self.bn1(x)
        x = F.relu(x)
        res = x  # skip connection

        x = self.lin2(x)
        x = self.bn2(x)
        x = F.relu(x)
        res1 = x  # another skip connection

        x = self.lin6(x)
        x = self.bn6(x)
        # join res and res1 with the output of layer 6: 20 + 50 + 30 = 100 features
        x = F.relu(torch.cat([x, res, res1], 1))
        x = self.lin7(x)  # output layer
        return x
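For comparison, my understanding is that the classic residual connection (as in ResNet) adds the shortcut to the branch output instead of concatenating, and nn.Identity() can serve as the shortcut when the shapes already match. A minimal sketch under that assumption (the block and its sizes are hypothetical, not from the code above):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResidualBlock(nn.Module):
    # Minimal additive residual block: out = relu(f(x) + shortcut(x)).
    def __init__(self, dim):
        super().__init__()
        self.lin = nn.Linear(dim, dim)
        # nn.Identity() works as the shortcut because input and output
        # dims match; with mismatched dims it would be a projection layer.
        self.shortcut = nn.Identity()

    def forward(self, x):
        return F.relu(self.lin(x) + self.shortcut(x))

block = ResidualBlock(8)
out = block(torch.randn(2, 8))
print(out.shape)  # torch.Size([2, 8])
```

Both the additive form and the torch.cat version give gradients a shorter path back to earlier layers; the concat version grows the feature dimension (hence the 100-input lin7 above), while the additive version keeps it fixed.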