Hi both,
thank you for your replies. After the remark from @KFrank I updated my PyTorch version to 1.10.2+cu102; the problem with the Linear layer now seems to be fixed, but it has moved to the loss function I use. This does not surprise me, since I use the following function:
import torch
import torch.nn as nn

def categorical_crossentropy(pred: torch.Tensor, true: torch.Tensor) -> torch.Tensor:
    """Calculates the categorical crossentropy loss."""
    return nn.NLLLoss()(torch.log(pred), true)
To be clear, I’m using this function because my goal is to port this model from Keras to PyTorch, and the original model used the categorical crossentropy loss function. When making the port, I followed the suggestion proposed in this thread. Of course, I’m not entirely sure that this is correct at this point.
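For context, my understanding is that passing log-probabilities to NLLLoss is equivalent to passing raw logits to CrossEntropyLoss, which is why the thread suggests this construction. A quick sanity check of that equivalence (random tensors, just for illustration):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

logits = torch.randn(4, 3)            # raw network outputs, batch of 4, 3 classes
targets = torch.tensor([0, 2, 1, 0])  # integer class indices

# CrossEntropyLoss on logits == NLLLoss on log_softmax of the same logits
a = nn.CrossEntropyLoss()(logits, targets)
b = nn.NLLLoss()(F.log_softmax(logits, dim=1), targets)
assert torch.allclose(a, b)
```

So taking torch.log of softmax probabilities before NLLLoss should be mathematically fine (numerical stability aside).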
EDIT: this is what the traceback now looks like:
Traceback (most recent call last):
File "IrisBrevitas.py", line 132, in <module>
train(iris_model, train_in, train_out, categorical_crossentropy)
File "IrisBrevitas.py", line 94, in train
loss = loss_fn(pred, y)
File "IrisBrevitas.py", line 77, in categorical_crossentropy
return nn.NLLLoss()(torch.log(pred), true)
File "/home/jacopo/.local/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "/home/jacopo/.local/lib/python3.8/site-packages/torch/nn/modules/loss.py", line 211, in forward
return F.nll_loss(input, target, weight=self.weight, ignore_index=self.ignore_index, reduction=self.reduction)
File "/home/jacopo/.local/lib/python3.8/site-packages/torch/nn/functional.py", line 2532, in nll_loss
return torch._C._nn.nll_loss_nd(input, target, weight, _Reduction.get_enum(reduction), ignore_index)
RuntimeError: expected scalar type Long but found Float
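If I read the error right, NLLLoss expects the targets as Long class indices, while my data (coming from Keras' categorical_crossentropy) is presumably one-hot encoded floats. A minimal sketch of the conversion I'm considering (tensors made up for illustration):

```python
import torch
import torch.nn as nn

# predicted probabilities for a batch of 2 samples, 3 classes
pred = torch.tensor([[0.7, 0.2, 0.1],
                     [0.1, 0.8, 0.1]])

# Keras-style one-hot float targets -- passing these directly to NLLLoss
# raises "expected scalar type Long but found Float"
one_hot = torch.tensor([[1.0, 0.0, 0.0],
                        [0.0, 1.0, 0.0]])

# NLLLoss instead wants integer class indices (dtype int64)
true = one_hot.argmax(dim=1)

loss = nn.NLLLoss()(torch.log(pred), true)
```

Does converting the targets with argmax like this look like the right approach?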