Why is there no upper limit on the number of tensor dimensions for nn.Linear?

When I run the following code with an input that has an extra dimension, it reports this error:
RuntimeError: Expected 4-dimensional input for 4-dimensional weight [64, 16, 3, 3], but got 5-dimensional input of size [8, 7, 16, 80, 80] instead

import torch
import torch.nn as nn

class ConvInputTest(nn.Module):
    def __init__(self):
        super(ConvInputTest, self).__init__()
        self.conv = nn.Conv2d(
            in_channels=16,
            out_channels=64,
            kernel_size=3,
            padding=1,
            stride=1)

    def forward(self, x):
        return self.conv(x)

net = ConvInputTest()
x = torch.randn(8, 7, 16, 80, 80)  # 5-D input instead of the expected (N, C, H, W)
y = net(x)  # raises the RuntimeError above

However, when I test a similar case with nn.Linear, the code runs without error:

import torch
import torch.nn as nn

class LinearInputTest(nn.Module):
    def __init__(self):
        super(LinearInputTest, self).__init__()
        self.linear = nn.Linear(
            in_features=48,
            out_features=96)

    def forward(self, x):
        return self.linear(x)

net = LinearInputTest()
x = torch.randn(8, 7, 16, 80, 48)  # 5-D input; only the last dim matches in_features
y = net(x)
print(y.size())  # torch.Size([8, 7, 16, 80, 96])

Why do nn.Conv2d, nn.Conv3d, and nn.Conv1d restrict the number of input dimensions (to 4, 5, and 3 respectively), while nn.Linear has no such limitation?

When you run a tensor through a Linear layer, as long as the final dim matches in_features, it only applies the weight matrix along that last dim and treats all the other dims like separate batches, so an input of shape (*, in_features) simply comes out as (*, out_features). A Conv2d, on the other hand, has to know which dim is the channel dim and which two are the spatial dims its kernel slides over, so it insists on exactly (N, C, H, W) input.
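
If that reading is right, here is a minimal sketch (reusing the shapes from the question; the variable names are just for illustration) showing that passing a 5-D tensor through nn.Linear behaves the same as flattening the leading dims into one batch dim, applying the layer, and reshaping back:

import torch
import torch.nn as nn

linear = nn.Linear(in_features=48, out_features=96)
x = torch.randn(8, 7, 16, 80, 48)

# nn.Linear only touches the last dimension: (*, 48) -> (*, 96)
y1 = linear(x)

# Equivalent view: collapse all leading dims into one batch dim, apply, restore
y2 = linear(x.reshape(-1, 48)).reshape(8, 7, 16, 80, 96)

print(y1.shape)                # torch.Size([8, 7, 16, 80, 96])
print(torch.allclose(y1, y2))  # True

For the Conv2d case, the usual workaround is to do that flattening yourself, since the conv layer will not do it for you:

conv = nn.Conv2d(in_channels=16, out_channels=64, kernel_size=3, padding=1, stride=1)
x = torch.randn(8, 7, 16, 80, 80)

# Merge the extra leading dim into the batch dim, run the conv, then split it back out
y = conv(x.reshape(8 * 7, 16, 80, 80)).reshape(8, 7, 64, 80, 80)
print(y.shape)  # torch.Size([8, 7, 64, 80, 80])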