Binary cross entropy error

I am implementing a variational autoencoder. The input has both positive and negative numbers and is NOT between 0 and 1. I applied a log2 transform before training the model. I was wondering if binary_cross_entropy is a good choice in this case, because after the first iteration it gives me the following error:

RuntimeError: Assertion `x >= 0. && x <= 1.' failed. input value should be between 0~1, but got nan at /Users/distiller/project/conda/conda-bld/pytorch_1556653492823/work/aten/src/THNN/generic/BCECriterion.c:60
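For reference, a minimal sketch (with made-up tensor values) that triggers the same check: binary_cross_entropy validates that its input lies in [0, 1], and a NaN fails that validation. The exact error text varies by PyTorch version.

```python
import torch
import torch.nn.functional as F

# Hypothetical values: a NaN in the prediction fails BCE's input check,
# since NaN is not within [0, 1].
pred = torch.tensor([float('nan'), 0.5])
target = torch.tensor([0.0, 1.0])
F.binary_cross_entropy(pred, target)  # raises an error like the one above
```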

I’m not sure if your transformation returns outputs in [0, 1], but the error points to a NaN value.
Could you try to check why the output contains a NaN?
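A minimal sketch of such a check, with hypothetical names (`model` is your VAE and `x` a transformed input batch; adapt to your forward signature):

```python
import torch

recon = model(x)
print(torch.isnan(x).any())      # NaNs already in the (log2-transformed) input?
print(torch.isnan(recon).any())  # NaNs produced by the model itself?
```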

Thanks for the help!

I made some modifications to the code and I’m not getting the error anymore.

I was able to get rid of it by adding this to the decoder’s sigmoid output:

torch.clamp(F.sigmoid(self.fc6(h)), 0, 1)

But I still wanted to check whether that is the right thing to do.

That’s strange, since F.sigmoid should only yield values in the range [0, 1], so the clamp operation shouldn’t change anything.
Are you sure the clamp fixed this issue?
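A quick sketch to check that claim:

```python
import torch

x = torch.tensor([-100.0, 0.0, 100.0])
s = torch.sigmoid(x)
# Sigmoid maps every finite input into (0, 1), so clamping is a no-op:
print(torch.equal(s, torch.clamp(s, 0, 1)))  # True

# A NaN, however, passes through both sigmoid and clamp unchanged,
# so clamp alone could not have fixed a NaN:
print(torch.sigmoid(torch.tensor(float('nan'))))        # nan
print(torch.clamp(torch.tensor(float('nan')), 0., 1.))  # nan
```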

So I noticed that the problem was that I was not using sigmoid at the end, and you are right, it works fine now without needing clamp 🙂
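For reference, a minimal sketch of the pattern that resolved it; the layer names and sizes (`fc5`, `fc6`, the dimensions) are assumptions, not the original code:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Decoder(nn.Module):
    def __init__(self, latent_dim=20, hidden_dim=400, out_dim=784):
        super().__init__()
        self.fc5 = nn.Linear(latent_dim, hidden_dim)
        self.fc6 = nn.Linear(hidden_dim, out_dim)

    def forward(self, z):
        h = F.relu(self.fc5(z))
        # The final sigmoid keeps the reconstruction in [0, 1],
        # which is what F.binary_cross_entropy requires of its input.
        return torch.sigmoid(self.fc6(h))
```

Note that binary_cross_entropy also expects the *target* to lie in [0, 1], so if the log2-transformed data is not in that range, a reconstruction loss such as F.mse_loss may be the better fit for the original question.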
