Is it possible to extend/apply transforms.Normalize to normalize a multidimensional tensor in a custom PyTorch dataset class? I have a tensor with shape (S x C x A x W)
and I want to normalize along the C dimension, where S: sequence length, C: feature channel, A: feature attributes, W: window length of each sub-sequence.
You could apply the normalization manually to avoid the dimension check:
import torch

x = torch.randn(2, 3, 4, 5)
mean = torch.tensor([0.5, 0.5, 0.5])
std = torch.tensor([0.5, 0.5, 0.5])
# broadcast mean and std over dim 1 (the channel dimension)
x.sub_(mean[None, :, None, None]).div_(std[None, :, None, None])
torchvision.transforms.functional.normalize
uses similar code, as seen here.
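For the (S x C x A x W) layout from the question, the same broadcasting idea could look like this. Note this is a sketch: the per-channel statistics are computed from the tensor itself here, whereas in practice you would typically compute them once over the training set.

```python
import torch

# (S, C, A, W): sequence length, channels, attributes, window length
x = torch.randn(2, 3, 4, 5)

# per-channel mean/std, reduced over all non-channel dimensions
mean = x.mean(dim=(0, 2, 3))
std = x.std(dim=(0, 2, 3))

# broadcast the statistics over dim 1 (the channel dimension)
x_norm = (x - mean[None, :, None, None]) / std[None, :, None, None]
```

After this, each channel of x_norm has zero mean and unit variance.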
Thanks a lot for the quick reply and the concise solution.
I have a naive confusion: I think this approach doesn't guarantee that the data will be in the [-1, +1] range, and we have to set the mean and std manually. Am I right?
Is there a similarly quick and concise approach to normalize a multidimensional tensor to a normal distribution?
I found another solution:
import torch.nn.functional as f
f.normalize(input, p=2, dim=2)
But I am not sure whether this approach will normalize data on a normal distribution.
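One way to check what f.normalize actually does (a quick sketch): it divides each slice along the given dim by its Lp norm, so every slice ends up with unit L2 norm. It does not produce zero mean or unit variance.

```python
import torch
import torch.nn.functional as F

x = torch.randn(2, 3, 4, 5)

# each slice along dim 2 is divided by its L2 norm
y = F.normalize(x, p=2, dim=2)

# the L2 norm along dim 2 is now 1 everywhere
norms = y.norm(p=2, dim=2)
```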
My code snippet will standardize the data (also known as z-scores), so that it will have zero mean and unit variance. I'm not sure I understand your use case completely. Would you like to pass any distribution to the normalization method and get a normal distribution as the output?
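To illustrate the earlier point about ranges: standardization gives zero mean and unit variance, but it does not bound the values to [-1, +1]. A quick check:

```python
import torch

torch.manual_seed(0)
x = torch.randn(1000) * 3 + 7   # arbitrary mean/scale

# z-score standardization
z = (x - x.mean()) / x.std()

# mean is ~0 and variance ~1, but min/max fall well outside [-1, 1]
print(z.min().item(), z.max().item())
```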