Prevent NLL loss reduction operation with v2.0

Hi there,

I want to compute the NLL loss, but I want to prevent its output from being reduced. I have seen that in the master version a parameter called reduce has been added, which I think may be useful in my case. However, I don’t want to use an unstable version on my machines. Do you have any suggestions for how I can easily implement this without compiling the latest version of PyTorch from scratch?



Master should be relatively stable. If you don’t want to build from source, you could write your own in python, something like:

and modify it to work with Variables. Alternatively, you could wait for the next release.