How to avoid scaling images in torchvision.transforms?

I have a few million 16-bit TIFF images that I want to load with a DataLoader and train on.
I want to apply the following torchvision transforms:
Compose([RandomRotation, RandomCrop, RandomHorizontalFlip, Rescale, ToTensor, Normalize])

I would like these transformations to be applied while also scaling the images from the range [0, 65535] to the range [0, 1]. The problem is that something odd is happening in the way the images are being scaled.
To clarify a little: before any transforms, the max pixel value of every image (from np.max) is usually around 5000. However, right before the ToTensor transform, every image suddenly has a max value of 255. Then, because ToTensor scales [0, 255] images to [0, 1], the images end up in [0, 1]. But I think this scaling is incorrect, because the images look much brighter than they should.
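For reference, ToTensor's rescaling depends on the input dtype: per its documentation, it divides by 255 only for uint8 (and similar PIL) inputs, and leaves float arrays unscaled. A minimal sketch of that part (the values here are made up):

import numpy as np
import torchvision.transforms as T

to_tensor = T.ToTensor()

# uint8 input: ToTensor divides by 255, so values land in [0, 1]
img_u8 = np.full((64, 64, 1), 200, dtype=np.uint8)
print(to_tensor(img_u8).max())   # tensor(0.7843)

# float32 input: ToTensor only converts and permutes, no rescaling
img_f32 = np.full((64, 64, 1), 5000.0, dtype=np.float32)
print(to_tensor(img_f32).max())  # tensor(5000.)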

How can I make it so that this scaling doesn’t occur (so I can just manually divide by 65535 at the end)?

Can you find out which transform causes this?
Maybe try passing one example through one transform at a time to find the root cause?
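For example, something along these lines (a sketch; raw_dataset and the transform instances stand in for whatever your dataset actually produces):

import numpy as np

# Feed one raw sample through the pipeline one transform at a time and
# print dtype/min/max after each step, to see where the values collapse
# to [0, 255].
sample = raw_dataset[0]  # hypothetical: one sample with no transforms applied

for t in [rotation, crop, flip, rescale]:  # your transform instances
    sample = t(sample)
    arr = np.asarray(sample)
    print(type(t).__name__, arr.dtype, arr.min(), arr.max())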

Thanks. I did that. All the augmenting transforms (RandomRotation, RandomCrop, RandomHorizontalFlip, Rescale) seem OK. To circumvent ToTensor's scaling to [0, 1], I convert the image to a tensor manually with torch.from_numpy(image).unsqueeze(0) inside the dataset's __getitem__, after all the previously mentioned transformations are done. Then I can just plug it into the Normalize transform. If no issues arise there, I can manually divide by 65535 and it should be good to go.
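A minimal sketch of what I mean (self.augment is a stand-in for the rotation/crop/flip/rescale steps; self.transform is the Normalize transform defined below):

import numpy as np
import torch
from skimage import io

def __getitem__(self, idx):
    image = io.imread(self.samples[idx])          # uint16 numpy array
    image = self.augment(image)                   # stand-in: rotation/crop/flip/rescale
    image = image.astype(np.float32) / 65535.0    # manual scaling to [0, 1]
    image = torch.from_numpy(image).unsqueeze(0)  # add channel dim -> (1, H, W)
    return self.transform(image)                  # Normalize(mean, std)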
However, I am running into a problem getting the Normalize transform to work. I keep getting this error: TypeError: 'NoneType' object is not callable
Below is the snippet of my code:

def __getitem__(self, idx):
    image_path = self.samples[idx]

    image = io.imread(image_path)
    image = image.astype(np.float32)
    image = torch.from_numpy(image).unsqueeze(0)
    image = self.transform(image)   # <<< error occurs here
    return image

My definition of the transform:

norm_transform = torchvision.transforms.Normalize(mean=data_mean, std=data_std)

My definition of the dataset and dataloader:

dataset = One_Image_Dataset(dataloader_path, transform=norm_transform)
transform_dataloader = DataLoader(dataset, batch_size=1, shuffle=False)

Below is the stack trace (triggered by "for i in dataloader:"):

  File "/home/multiomyx/.local/share/virtualenvs/PyTorch-rjjGfeb_/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 345, in __next__
    data = self._next_data()
  File "/home/multiomyx/.local/share/virtualenvs/PyTorch-rjjGfeb_/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 385, in _next_data
    data = self._dataset_fetcher.fetch(index)  # may raise StopIteration
  File "/home/multiomyx/.local/share/virtualenvs/PyTorch-rjjGfeb_/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py", line 44, in fetch
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/home/multiomyx/.local/share/virtualenvs/PyTorch-rjjGfeb_/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py", line 44, in <listcomp>
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "Inspect Image.py", line 34, in __getitem__
    image = self.transform(image)
TypeError: 'NoneType' object is not callable

Would you happen to know what could be causing this error? Is it something internal? I don't think the problem is what I am passing in, since image is a torch.Tensor. Thank you.

Question: Are you sure self.transform is not None?
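If the dataset that raised this was constructed without a transform (transform=None), then self.transform(image) fails with exactly that error. A quick guard, sketched with the same imports as your snippet:

import numpy as np
import torch
from skimage import io

def __getitem__(self, idx):
    image = io.imread(self.samples[idx]).astype(np.float32)
    image = torch.from_numpy(image).unsqueeze(0)
    # Only apply the transform if one was actually passed in
    if self.transform is not None:
        image = self.transform(image)
    return image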

Thanks. The error was on another dataloader. It works now.