I'm trying to port some code from Keras to PyTorch and I'm having trouble reproducing the same loss logic. I want to get the same result as tf.keras.losses.CategoricalCrossentropy. PyTorch's CrossEntropyLoss expects targets as integer class indices (one per sample), while Keras works with one-hot encoded targets. To get around this I tried BCEWithLogitsLoss, but I still don't get matching values.
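To make the difference in target formats concrete, this is just a sketch of what I mean (the tensors are made up for illustration):
import torch

logits = torch.tensor([[2.0, 2.0], [4.0, 4.0]])
one_hot = torch.eye(2)              # one-hot targets, the format Keras expects
indices = one_hot.argmax(dim=1)     # integer class indices, the format CrossEntropyLoss expects

ce = torch.nn.CrossEntropyLoss(reduction="sum")
print(ce(logits, indices))          # should print tensor(1.3863), i.e. -2*log(0.5)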
I have this code where I test the three loss functions (BCEWithLogitsLoss, tf.keras.losses.CategoricalCrossentropy and CrossEntropyLoss), but I can't get the same value from them:
import torch
import tensorflow as tf
import numpy as np
query = np.array([[2.0, 2.0], [4.0, 4.0]])

query_torch = torch.tensor(query)
eye_torch = torch.eye(2, 2)
torch_loss = torch.nn.BCEWithLogitsLoss(reduction="sum")
print(f"Loss with torch BCE {torch_loss(query_torch, eye_torch)}")

query_tf = tf.convert_to_tensor(query)
eye_tf = tf.eye(2, 2)
tf_loss = tf.keras.losses.CategoricalCrossentropy(
    from_logits=True, reduction=tf.keras.losses.Reduction.SUM)
print(f"Loss with tf {tf_loss(query_tf, eye_tf)}")

query_torch = torch.tensor(query)
eye_torch = torch.tensor([0, 1])
torch_loss = torch.nn.CrossEntropyLoss(reduction="sum")
print(f"Loss with torch crossEntropyLoss {torch_loss(query_torch, eye_torch)}")
That prints:
Loss with torch BCE 6.29015588760376
Loss with tf 9.759140014648438
Loss with torch crossEntropyLoss 1.3862943611198906
Any idea how to get the same value from all three?