Proper scaling of .npy images in pytorch for training

I am doing a classification task and my images are in .npy format with 1 channel. After loading them with numpy I realized the images have float values ranging from 0 to some maximum value, nowhere close to 255. However, I am aware that for PyTorch image classification models, images that come as uint8 in [0, 255] are usually rescaled to float in [0, 1], or the rescaling can be skipped if the images are already in that range.

In my case, how do I properly scale the images to be used with, say, a ResNet model? I have provided an image for your reference.

That’s not the case, as PyTorch does not require you to perform any rescaling or normalization. Of course your training will most likely benefit from normalized inputs, but it’s not a requirement.

With that being said, also note that the usual Normalize transformation creates zero-mean, unit-variance inputs, which do have values outside of [0, 1].

For your use case it depends on what these samples represent, how they were created, etc. You might still be able to compute their mean and std and apply Normalize to them.