Question about the dimension of the MSE loss

Hello everyone.
I have a question about the dimension of the loss returned by the MSE loss function.
I'm working with a simple autoencoder:

import torch

class autoencoder(torch.nn.Module):
    def __init__(self, D_in):
        super(autoencoder, self).__init__()
        # Encoder: D_in -> 4 -> 2
        self.lin1 = torch.nn.Linear(D_in, 4)
        self.lin2 = torch.nn.Linear(4, 2)
        # Decoder: 2 -> 4 -> D_in
        self.lin3 = torch.nn.Linear(2, 4)
        self.lin4 = torch.nn.Linear(4, D_in)
        self.transfer = torch.nn.ReLU()

    def forward(self, x):
        x = self.transfer(self.lin1(x))
        x = self.transfer(self.lin2(x))
        x = self.transfer(self.lin3(x))
        x = self.lin4(x)  # no activation on the reconstruction
        return x

I define the following loss function:

loss_fn = torch.nn.MSELoss()

Now I generate a tensor of random numbers, feed it through the autoencoder, and compute the loss:

npoints, D_in = 100, 6
x = torch.rand([npoints, D_in])

model = autoencoder(D_in)
pred = model(x)
loss = loss_fn(pred, x)  # MSELoss expects (input, target)

print(x.size())
print(pred.size())
print(loss.size())

Output: torch.Size([100, 6])
        torch.Size([100, 6])
        torch.Size([])

Shouldn't the size of the loss be 6, one value for each output neuron?

Thank you very much.

The loss is reduced by default: MSELoss averages the squared errors over all elements and returns a single scalar, which is why loss.size() is torch.Size([]). See https://pytorch.org/docs/stable/nn.html?highlight=loss#torch.nn.MSELoss. You can pass reduction='none' to get the unreduced element-wise loss and then perform whatever reduction you want.
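
For example, here is a minimal sketch of how to recover the per-neuron loss of size 6 you were expecting, assuming you want the mean over the batch dimension:

loss_fn_none = torch.nn.MSELoss(reduction='none')
elementwise = loss_fn_none(pred, x)   # shape [100, 6], squared error per element
per_neuron = elementwise.mean(dim=0)  # shape [6], mean over the batch dimension
scalar = elementwise.mean()           # shape [], same as the default reduction='mean'

print(per_neuron.size())  # torch.Size([6])
print(scalar.size())      # torch.Size([])

Averaging the unreduced loss over all elements reproduces the scalar you saw, so the default behavior is just this reduction applied for you.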