Basic implementation of log likelihood loss

I’m new to ML and PyTorch and trying to implement some basic algorithms. I’ve been trying to write a simple log loss function, but the results don’t match what I’d expect from computing the gradients by hand.

out = torch.dot(w, z)  # scalar logit
loss = -y * torch.log(torch.sigmoid(out)) - (1 - y) * torch.log(torch.sigmoid(-out))

The problem I’m seeing is that if y = 1 and sigmoid(out) = 0 (or if y = 0 and sigmoid(-out) = 0), log(0) evaluates to -inf, and the loss (and its gradients) come out as inf/NaN. But I feel like this should be a relatively simple function to implement.
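For example, here is a minimal repro of what I mean (the values of w, z, and y are just made up to push the logit far into the tail):

import torch

# made-up inputs that drive the logit far negative
w = torch.tensor([10.0, 10.0], requires_grad=True)
z = torch.tensor([-50.0, -50.0])
y = 1.0

out = torch.dot(w, z)  # out = -1000, so sigmoid(out) underflows to 0
loss = -y * torch.log(torch.sigmoid(out)) - (1 - y) * torch.log(torch.sigmoid(-out))
print(loss)    # tensor(inf, grad_fn=...)
loss.backward()
print(w.grad)  # tensor([nan, nan])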

(I know I could use the functions in the nn module, but I’m trying to write them myself first.)

Any help would be appreciated.

The sigmoid function only reaches 0 at -inf, so, excluding numerical issues, you will never actually hit that special case.

Apparently torch.sigmoid(z) = 0 for z < -700
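For example:

import torch

x = torch.tensor(-800.0, dtype=torch.float64)
print(torch.sigmoid(x))             # tensor(0., dtype=torch.float64)
print(torch.log(torch.sigmoid(x)))  # tensor(-inf, dtype=torch.float64)

(That’s double precision; in float32 it underflows to 0 much sooner than -700.)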

That is why I said excluding numerical issues. Mathematically, it is only 0 at -inf.

In reality, you will need numerically stable functions/methods to avoid this.
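For example, a standard trick is to use the identity log(sigmoid(x)) = -softplus(-x), which PyTorch exposes directly as torch.nn.functional.logsigmoid. A sketch of a stable version of your loss (the function name is just mine):

import torch
import torch.nn.functional as F

def stable_log_loss(out, y):
    # logsigmoid computes log(sigmoid(x)) without ever materializing
    # sigmoid(x) itself, so it stays finite even for extreme logits.
    return -y * F.logsigmoid(out) - (1 - y) * F.logsigmoid(-out)

w = torch.tensor([10.0, 10.0], requires_grad=True)
z = torch.tensor([-50.0, -50.0])
loss = stable_log_loss(torch.dot(w, z), 1.0)  # logit = -1000
print(loss)    # tensor(1000., grad_fn=...) -- finite
loss.backward()
print(w.grad)  # tensor([50., 50.]) -- finite gradients

This is essentially what torch.nn.functional.binary_cross_entropy_with_logits does internally.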