I have a dataset of int16 medical images. I have tried to calculate the mean and std to use with Normalize, like this:

```python
mean_tr = train_x.float().mean() / 10209
std_tr = train_x.float().std() / 10209

mean_te = test_x.float().mean() / 11108
std_te = test_x.float().std() / 11108
```

where 10209 is the largest pixel value in the training set, and 11108 in the test set.
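(Note that dividing the raw mean and std by the max this way gives the same stats as computing them on data already scaled into [0, 1], since both are linear in the data. A quick check with made-up pixel values, since the real `train_x` is not shown:)

```python
import torch

# Toy stand-in for train_x (hypothetical values, not the real data).
train_x = torch.tensor([0, 5000, 10209], dtype=torch.int16)
max_val = 10209.0

# mean(x) / max == mean(x / max), and likewise for std,
# so these stats describe the data *after* scaling to [0, 1].
mean_a = train_x.float().mean() / max_val
mean_b = (train_x.float() / max_val).mean()
std_a = train_x.float().std() / max_val
std_b = (train_x.float() / max_val).std()
```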

I get something like this:

```
mean_tr = tensor(0.0439)  std_tr = tensor(0.0616)
mean_te = tensor(0.0425)  std_te = tensor(0.0586)
```

I then used these values to Normalize the data with a transform, and printed some values to see what happened. After this step I saw very large values, like 60000, and the smallest was -0.7. Why? Can you help me with some ideas on how to deal with these images?

Thanks

Do you use 3D medical images?

Hi,

Did you perform the transformation `transform.functional.to_tensor()` first? It will scale your data into the range [0, 1], and then you can perform normalization on it. You could give it a try.
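A minimal sketch of that order of operations in plain torch, using the stats from the question (the raw pixel values here are made up; also note that `to_tensor` only does the [0, 1] scaling for uint8 inputs, so for int16 slices the division has to be done manually):

```python
import torch

mean_tr, std_tr = 0.0439, 0.0616   # stats from the question
max_tr = 10209.0                   # max pixel value in the train set

x = torch.tensor([0.0, 5000.0, 10209.0])  # hypothetical raw int16 pixel values

# Normalizing the *raw* values with stats that describe the *scaled*
# data explodes the range -- this reproduces the very large numbers.
bad = (x - mean_tr) / std_tr

# Scaling to [0, 1] first keeps the result in a sensible range
# (and (0 - 0.0439) / 0.0616 is the -0.7 seen in the question).
good = (x / max_tr - mean_tr) / std_tr
```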

These images are 2D slices from 3D medical images; from what I understand, they are T1-weighted.

The documentation says it converts PIL Images or numpy arrays that are in the range [0, 255].

Does no one have an answer?

Hi @David_Jitca,

Maybe you can use the preprocessing transforms in TorchIO. Also, sharing a minimal working example would help us help you.
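For instance, TorchIO's `ZNormalization` transform standardizes each image to zero mean and unit std, which sidesteps the int16 intensity range entirely. The idea can be sketched in plain torch (this helper is illustrative, not TorchIO's actual code):

```python
import torch

def z_normalize(image: torch.Tensor) -> torch.Tensor:
    # Per-image standardization: zero mean, unit std, independent of
    # the int16 intensity range (the idea behind TorchIO's ZNormalization).
    image = image.float()
    return (image - image.mean()) / image.std()

# Hypothetical int16 slice standing in for one of the 2D T1 slices.
slice_2d = torch.randint(0, 10209, (1, 256, 256), dtype=torch.int16)
out = z_normalize(slice_2d)
```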