Custom loss function raises "Found dtype Double but expected Float" even when it just wraps nn.MSELoss. How is that possible?

I am currently trying to get another loss function to work, but I always get the same error:

Found dtype Double but expected Float

But how is that possible?

class MSLELoss(nn.Module):
    def __init__(self):
        super().__init__()
        self.mse = nn.MSELoss()

    def forward(self, pred, actual):
        return self.mse(pred, actual)

criterion = nn.MSELoss() works fine, while
criterion = MSLELoss() does not…

What am I missing?

Thanks in advance!

I cannot reproduce the issue and both approaches work as expected:

import torch
import torch.nn as nn

criterion = nn.MSELoss()

x = torch.randn(10, 10)
y = torch.randn(10, 10).double()

loss = criterion(x, y)

class MSLELoss(nn.Module):
    def __init__(self):
        super().__init__()
        self.mse = nn.MSELoss()
        
    def forward(self, pred, actual):
        return self.mse(pred,actual)
      
my_criterion = MSLELoss()
loss = my_criterion(x, y)
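In case it helps with tracking down where the Double tensor comes from: a very common source is NumPy, since NumPy arrays default to float64 and torch.from_numpy preserves that dtype. A quick check (array name here is just illustrative):

```python
import numpy as np
import torch

# NumPy defaults to float64; from_numpy keeps the dtype
target = torch.from_numpy(np.zeros((10, 10)))
print(target.dtype)  # torch.float64, i.e. "Double"
```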

Oh, that is strange.
I will investigate and report back, thanks for checking!

Yeah, that was totally my bad.
I changed MSELoss to L1Loss at the same time as I removed a .float() somewhere.
L1Loss does not seem to care about the dtype mismatch, but MSELoss does.
On top of that, I tried to switch to MSLELoss at the same time…
I really should go to sleep instead of debugging at 5 am.
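For anyone hitting the same error: restoring the cast on the target resolves it. A minimal sketch (shapes and names here are made up, not from my actual code):

```python
import torch
import torch.nn as nn

criterion = nn.MSELoss()

pred = torch.randn(8, 1, requires_grad=True)      # model output: Float
target = torch.randn(8, 1, dtype=torch.float64)   # e.g. loaded via NumPy: Double

# Casting the target to the prediction's dtype avoids
# "Found dtype Double but expected Float" during backward()
loss = criterion(pred, target.float())
loss.backward()
```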

Thanks for showing me the obvious!

Guys: go to sleep and try again the next day before making forum posts :stuck_out_tongue: