Size mismatch error while calculating loss in FCN

Hello all, this might be a quite trivial issue, but I am not able to find the error. I am experimenting with TCNs, and instead of using a fully connected layer at the output, I am trying to make it a fully convolutional network. I am not able to understand what shape should be passed to the loss function. The model class is the following -


import torch
import torch.nn as nn
from torch.nn.utils import weight_norm


class Chomp1d(nn.Module):
    def __init__(self, chomp_size):
        super(Chomp1d, self).__init__()
        self.chomp_size = chomp_size

    def forward(self, x):
        # remove the trailing padding so the convolutions stay causal
        return x[:, :, :-self.chomp_size].contiguous()


class TemporalBlock(nn.Module):
    def __init__(self, n_inputs, n_outputs, kernel_size, stride, dilation, padding, dropout=0.2):
        super(TemporalBlock, self).__init__()
        self.conv1 = weight_norm(nn.Conv1d(n_inputs, n_outputs, kernel_size,
                                           stride=stride, padding=padding, dilation=dilation))
        self.chomp1 = Chomp1d(padding)
        self.relu1 = nn.ReLU()
        self.dropout1 = nn.Dropout(dropout)

        self.conv2 = weight_norm(nn.Conv1d(n_outputs, n_outputs, kernel_size,
                                           stride=stride, padding=padding, dilation=dilation))
        self.chomp2 = Chomp1d(padding)
        self.relu2 = nn.ReLU()
        self.dropout2 = nn.Dropout(dropout)

        self.net = nn.Sequential(self.conv1, self.chomp1, self.relu1, self.dropout1,
                                 self.conv2, self.chomp2, self.relu2, self.dropout2)
        self.downsample = nn.Conv1d(n_inputs, n_outputs, 1) if n_inputs != n_outputs else None
        self.relu = nn.ReLU()
        self.init_weights()

    def init_weights(self):
        self.conv1.weight.data.normal_(0, 0.01)
        self.conv2.weight.data.normal_(0, 0.01)
        if self.downsample is not None:
            self.downsample.weight.data.normal_(0, 0.01)

    def forward(self, x):
        out = self.net(x)
        res = x if self.downsample is None else self.downsample(x)
        return self.relu(out + res)  # residual connection


class TemporalConvNet(nn.Module):
    def __init__(self, num_inputs, num_channels, kernel_size=2, dropout=0.2):
        super(TemporalConvNet, self).__init__()
        layers = []
        num_levels = len(num_channels)
        for i in range(num_levels):
            dilation_size = 2 ** i
            in_channels = num_inputs if i == 0 else num_channels[i-1]
            out_channels = num_channels[i]
            layers += [TemporalBlock(in_channels, out_channels, kernel_size, stride=1, dilation=dilation_size,
                                     padding=(kernel_size-1) * dilation_size, dropout=dropout)]

        self.feature_map = nn.Conv1d(22, 512, 1)  # 1x1 conv: hard-coded 22 input channels -> 512
        self.network = nn.Sequential(*layers)
        self.classify = nn.Conv1d(in_channels=512, out_channels=4, kernel_size=1)

    def forward(self, x):
        out = self.network(x)                 # [batch, num_channels[-1], length]; must be 512 channels for the add below
        out1 = self.feature_map(x)            # [batch, 512, length]
        out = nn.functional.relu(out + out1)  # residual connection around the TCN stack
        out = self.classify(out)              # per-timestep class scores: [batch, 4, length]
        return out
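
For reference, a hypothetical instantiation that satisfies the hard-coded layers above (the intermediate channel sizes here are assumptions; the model needs 22 input channels, and the last TCN level must have 512 channels for the residual addition to work):

model = TemporalConvNet(num_inputs=22, num_channels=[64, 128, 256, 512])
x = torch.randn(5, 22, 512)  # [batch, channels, length]
out = model(x)
print(out.shape)  # torch.Size([5, 4, 512]); the chomped convolutions preserve the length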

The input shape is [22, 512] (channels, length). It is time series data, the batch size is 5, and I have 4 output classes.
The error I get while calculating the loss is:

Expected target size (5, 313), got torch.Size([5])
TIA

If you are dealing with a multi-class classification use case, your target should contain the class index for each sample, which seems to be the case here.
However, the output of your model still seems to have an additional dimension (probably the sequence length), which needs to be reduced.
You could apply an average pooling layer, or any other operation that creates an output of shape [batch_size, nb_classes].
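
For example, a minimal sketch (nn.CrossEntropyLoss is assumed; the batch size 5, the 4 classes, and the sequence length 313 are taken from your error message):

import torch
import torch.nn as nn

output = torch.randn(5, 4, 313)     # model output: [batch_size, nb_classes, seq_len]
target = torch.randint(0, 4, (5,))  # [batch_size] class indices

output = output.mean(dim=2)         # global average pooling over the temporal dim -> [5, 4]
loss = nn.CrossEntropyLoss()(output, target)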

@ptrblck even after pooling, wouldn’t the size be [batch, classes, 1], whereas the loss function would be expecting [batch, classes]?
I’m not able to understand how to remove that last dimension altogether.

I tried pooling, and the output size is now [5, 4, 1]. While calculating the loss, I get the error: expected target size to be [5, 1], got [5].

You can just squeeze the last dimension via output = output.squeeze(2).
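
Putting it together, a minimal sketch of the pooling-plus-squeeze combination (shapes assumed from the thread):

import torch
import torch.nn as nn

output = torch.randn(5, 4, 313)     # model output: [batch, classes, seq_len]
target = torch.randint(0, 4, (5,))  # [batch] class indices

pool = nn.AdaptiveAvgPool1d(1)      # pool the temporal dimension down to length 1
output = pool(output)               # -> [5, 4, 1]
output = output.squeeze(2)          # -> [5, 4]

loss = nn.CrossEntropyLoss()(output, target)  # target of shape [5] now matches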