Liquidmasl
December 15, 2021, 4:34am
I am currently trying to get another loss function to work, but I always get the same error:
Found dtype Double but expected Float
But how is that possible?
class MSLELoss(nn.Module):
    def __init__(self):
        super().__init__()
        self.mse = nn.MSELoss()

    def forward(self, pred, actual):
        return self.mse(pred, actual)
criterion = nn.MSELoss()
works fine while
criterion = MSLELoss()
does not…
What am I missing?
Thanks in advance!
ptrblck
December 16, 2021, 7:32am
I cannot reproduce the issue and both approaches work as expected:
import torch
import torch.nn as nn

criterion = nn.MSELoss()
x = torch.randn(10, 10)
y = torch.randn(10, 10).double()
loss = criterion(x, y)

class MSLELoss(nn.Module):
    def __init__(self):
        super().__init__()
        self.mse = nn.MSELoss()

    def forward(self, pred, actual):
        return self.mse(pred, actual)

my_criterion = MSLELoss()
loss = my_criterion(x, y)
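One guess at the difference: the forward pass alone promotes mixed dtypes, so a snippet without a backward pass can run fine, and the error often only surfaces once backward() is called on a float model with a double target. A minimal sketch (assuming your real setup includes a training loop with a backward pass):

```python
import torch
import torch.nn as nn

criterion = nn.MSELoss()
x = torch.randn(10, 10, requires_grad=True)  # float32, like a model output
y = torch.randn(10, 10).double()             # float64 target

loss = criterion(x, y)  # forward pass runs; dtypes are promoted
try:
    loss.backward()      # a dtype mismatch can surface here
except RuntimeError as e:
    print(e)             # e.g. "Found dtype Double but expected Float"
```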
Liquidmasl
December 18, 2021, 5:21pm
Oh, that is strange.
I will investigate and report back. Thanks for checking!
Liquidmasl
December 18, 2021, 6:10pm
Yeah, that was totally my bad.
I changed MSELoss to L1Loss at the same time as I removed a .float() somewhere.
L1Loss does not seem to care about the dtype mismatch, but MSELoss does.
Also at the same time I tried to use MSLELoss…
Yeah, I should go to sleep at 5am.
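For anyone else landing here with the same message, a sketch of the fix in my case (assuming a float32 model with a float64 target, e.g. one coming from numpy): cast the target back before computing the loss.

```python
import torch
import torch.nn as nn

criterion = nn.MSELoss()
pred = torch.randn(8, 1, requires_grad=True)  # float32 model output
target = torch.randn(8, 1).double()           # float64 target

loss = criterion(pred, target.float())  # cast the target to match the prediction
loss.backward()                         # no dtype error
```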
thanks for showing me the obvious!
Guys: go to sleep and try again the next day before making forum posts