For example, there are 32 nodes in the fc layer.
How do I apply a different activation function to the last two nodes?
You could slice the output and apply the desired activations separately. With 32 output nodes, the last two are indices 30 and 31:
out = linear(input)                   # shape: [batch_size, 32]
out1 = F.relu(out[:, :30])            # ReLU on the first 30 nodes
out2 = torch.sigmoid(out[:, 30:])     # sigmoid on the last two nodes
out = torch.cat((out1, out2), dim=1)  # concatenate back to [batch_size, 32]
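Put together, here is a minimal runnable sketch. The layer sizes and input are placeholders (an `nn.Linear(16, 32)` fed a random batch), just to show the slicing pattern end to end:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

linear = nn.Linear(16, 32)   # hypothetical fc layer with 32 output nodes
x = torch.randn(4, 16)       # random batch of 4 samples

out = linear(x)                        # shape: [4, 32]
out1 = F.relu(out[:, :30])             # ReLU on the first 30 nodes
out2 = torch.sigmoid(out[:, 30:])      # sigmoid on the last two nodes
out = torch.cat((out1, out2), dim=1)   # back to shape [4, 32]

print(out.shape)
print(bool((out[:, :30] >= 0).all()))                        # ReLU outputs are non-negative
print(bool(((out[:, 30:] > 0) & (out[:, 30:] < 1)).all()))   # sigmoid outputs lie in (0, 1)
```

Note that autograd handles the slicing and concatenation transparently, so gradients flow back through both branches during training.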
Would that work for you?