Why does Transforms.ToTensor convert the range to 0..1?

What’s the reason behind Transforms.ToTensor converting the value range to 0 to 1? If the input image is on a 0–255 scale (or any other scale, for that matter), why not just leave it as is and allow the user to transform the data later in whatever way they want?


Because in 99% of cases the convention when training networks is to use the [0, 1] image range, possibly followed by normalization to make the data zero mean and unit standard deviation. You can write your own transform that does something else if you want to.

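To make that concrete, here is a minimal sketch (not from the original posts) of the usual pipeline, plus one way to opt out of the scaling. The mean/std values are the common ImageNet statistics and the dummy image is purely for illustration; `PILToTensor` is torchvision's transform that keeps the raw integer range.

```python
# ToTensor scales a uint8 PIL image from [0, 255] to a float32 tensor in
# [0.0, 1.0]; Normalize then shifts it toward zero mean / unit std.
from PIL import Image
from torchvision import transforms

preprocess = transforms.Compose([
    transforms.ToTensor(),                      # uint8 [0, 255] -> float32 [0.0, 1.0]
    transforms.Normalize(mean=[0.485, 0.456, 0.406],   # ImageNet stats, illustrative
                         std=[0.229, 0.224, 0.225]),
])

# If you want to keep the raw 0-255 values, skip ToTensor and do your own thing:
keep_raw = transforms.Compose([
    transforms.PILToTensor(),                   # uint8 tensor, values stay in [0, 255]
    transforms.Lambda(lambda t: t.float()),     # cast without rescaling
])

img = Image.new("RGB", (32, 32), color=(128, 64, 255))  # dummy image for the example
print(preprocess(img).mean())   # roughly zero-centred values
print(keep_raw(img).max())      # 255.0
```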

Neural networks process inputs using small weight values, and inputs with large integer values can disrupt or slow down the learning process. As such, it is good practice to normalize the pixel values so that each pixel has a value between 0 and 1.

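As a quick sanity check (my own sketch, not from the reply above): for 8-bit input, ToTensor's scaling is just a divide by 255, so you can reproduce it by hand.

```python
# Compare ToTensor against manual division by 255 on a fake uint8 HWC image.
import numpy as np
import torch
from torchvision import transforms

raw = np.random.randint(0, 256, size=(28, 28, 3), dtype=np.uint8)

scaled = transforms.ToTensor()(raw)                               # float32, CHW, in [0.0, 1.0]
manual = torch.from_numpy(raw).permute(2, 0, 1).float() / 255.0   # same thing by hand

print(scaled.min().item(), scaled.max().item())  # both within [0.0, 1.0]
print(torch.allclose(scaled, manual))            # True
```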