Torch tensors work with range [0, 1]

So here I am tumbling through the many tutorials, blogs and web pages related to DL as part of my morning routine, when I stumble across this statement in one of PyTorch's official tutorials:

An important detail to note is that neural networks from the torch library are trained with tensor values ranging from 0 to 1. If you try to feed the networks with 0 to 255 tensor images, then the activated feature maps will be unable to sense the intended content and style. However, pre-trained networks from the Caffe library are trained with 0 to 255 tensor images.

That makes me wonder:

  • Does the rescaling into the unit interval happen automatically?
  • What changes if I rescale the input manually beforehand?
  • What if my input has negative values?

Hey there! …again :wink:

  • From the docs, yes, if it's a PIL image or a NumPy array with dtype uint8; otherwise it isn't. Read here for more info.
  • It depends on how you rescale. The values may end up scaled twice if your manual scaling produces something that still fits the criteria in the bullet point above.
  • The pretrained networks in PyTorch expect values in the range 0 to 1, as you quoted in the OP. If you feed them any other range (negative values, twice-scaled inputs, and so on), they will simply perform poorly. That doesn't mean you are forced to work with the 0 to 1 range, but you would have to retrain the network if a different scale is desired.
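To make the first two bullets concrete, here is a minimal sketch in plain torch of what torchvision's `ToTensor` effectively does for uint8 inputs (cast to float, divide by 255), and of the double-scaling pitfall from the second bullet. The tensor values here are made up for illustration:

```python
import torch

# A fake 2x2 grayscale "image" stored as uint8 in [0, 255],
# the format PIL images and many datasets arrive in.
img_uint8 = torch.tensor([[0, 64], [128, 255]], dtype=torch.uint8)

# What ToTensor effectively does for uint8 inputs: cast to float
# and divide by 255, giving values in the unit interval [0, 1].
img_scaled = img_uint8.float() / 255.0
print(img_scaled.min().item(), img_scaled.max().item())  # 0.0 1.0

# Pitfall: if the data was already rescaled to [0, 1] and gets
# divided by 255 a second time, everything collapses toward zero,
# which is why double scaling makes a network perform poorly.
img_double = img_scaled / 255.0
print(img_double.max().item())
```

Note that a float input is left as-is by `ToTensor`, which is exactly how manual pre-scaling can either avoid or cause the second division depending on the dtype you hand over.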

:D! GL

hi Olof ; )

thank you for your reply!

So if I train a model from scratch, this issue shouldn't matter in the first place?

Exactly :slight_smile: