I work in Python 2.7. I noticed that when summing logical values as a byte tensor, torch.sum() sometimes gives wrong values, which is resolved when the logical tensor is cast to int. Can somebody give me a reason for this? Thank you.
This is a byte overflow. A common way to avoid it is to cast: count = flag.long().sum().
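To illustrate the wraparound without depending on a particular PyTorch version, here is a minimal NumPy sketch of the same effect (the thread compares against numpy anyway): summing 300 one-bytes in an 8-bit accumulator wraps modulo 256, while widening first gives the correct count. The array name `flags` is just an illustration.

```python
import numpy as np

# 300 True-like flags stored as unsigned bytes
flags = np.ones(300, dtype=np.uint8)

# Default sum promotes to a wider integer, so the count is correct:
print(flags.sum())                 # 300

# Forcing an 8-bit accumulator reproduces the overflow (300 % 256 = 44):
print(flags.sum(dtype=np.uint8))   # 44
```

The PyTorch fix above works the same way: flag.long().sum() widens to 64-bit before accumulating, so the count cannot wrap.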
Best regards
Thomas
Oh, thank you very much, I never encountered this in numpy. Maybe torch.nonzero() is an efficient option?
If you don’t need the count, you can use .any(). (There is .all(), too.)
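For illustration, the same predicates exist in NumPy with matching semantics; a minimal sketch (the `flags` array is hypothetical):

```python
import numpy as np

flags = np.array([False, True, False])

print(flags.any())        # True  - is at least one flag set?
print(flags.all())        # False - are all flags set?
print(np.nonzero(flags))  # indices of the set flags, here position 1
```

Since .any() and .all() short-circuit conceptually over the whole tensor without building a count, they avoid the overflow question entirely.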
Best regards
Thomas
Thank you for your kind clarification.