Can tensor values go outside [0,1] range?

Hello!

I’m taking my first steps in machine learning and have recently been experimenting with CNNs and pooling.
I’m trying to implement the fuzzy pooling method described in the paper by Diamantis et al. (https://is-innovation.eu/iakovidis/wp-content/uploads/2020/09/IEEE-TFS-Fuzzy-Pooling.pdf)
I’ve implemented most of it, but I’m stuck on the membership functions, which usually end up producing tensors of all zeros or all ones, throwing off the overall results.

On page 3 of the paper’s PDF there’s a schematic of a sample patch with values well outside the [0, 1] range. How is that possible in PyTorch, given that all my tensor values end up in the [0, 1] range?
The input patches are categorized using membership functions built around a maximum value of 6 (rmax), but can a tensor value even go that high, or above 1 in general?

Sorry if this is too much of a “newbie” question, but it’s been driving me nuts for days!

Hi @useasdf_4444,

Yes, in torch the values of a tensor can go outside [0, 1] but remember that if you’re using ToTensor somewhere it automatically normalize the values between 0 and 1.