Mismatched number of neurons in classification layer and number of classes

import torch
import torch.nn as nn

class NeuralNetwork(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 20, stride=1, kernel_size=5)  # makes 20 maps of 24x24
        self.pool = nn.MaxPool2d(2, 2)                          # makes 20 maps of 12x12
        self.fc1 = nn.Linear(20 * 12 * 12, 100)  # <--- 100 outputs, but MNIST has 10 classes

    def forward(self, x):
        x = self.pool(torch.sigmoid(self.conv1(x)))
        x = torch.flatten(x, 1)  # flatten all dimensions except batch
        x = self.fc1(x)
        return x

I’m training this network on the MNIST dataset, and I do not get any tensor size mismatch error!
The last layer has 100 neurons instead of 10. In my case, accuracy was about 1% worse with 100 output neurons than with 10, and I see this issue has been reported before. My concern is: why was the size mismatch not flagged? How is autograd dealing with this situation? Any insights that will shed light on this will be appreciated.
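For reference, here is a minimal sketch of what I suspect is going on, assuming the training loop uses nn.CrossEntropyLoss (my assumption; the original loop isn't shown). Since the loss takes class indices rather than one-hot targets, a (batch, 100) logit tensor is shape-compatible even when the labels only ever reach 9:

```python
import torch
import torch.nn as nn

criterion = nn.CrossEntropyLoss()

# 100 output neurons, as in my model; MNIST labels are class indices 0..9
logits = torch.randn(8, 100, requires_grad=True)
targets = torch.randint(0, 10, (8,))

# No shape error: the only requirement is num_classes > max target index,
# so the extra 90 logit columns are simply never the "correct" class.
loss = criterion(logits, targets)
loss.backward()  # autograd backpropagates through all 100 columns
print(loss.item())
```

The unused columns still receive gradient (they appear in the softmax denominator), which would explain why training runs without complaint.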