Normalizing/Un-normalizing image tensors

From my dataset I have three numpy arrays/channels, which can be treated as R, G, and B respectively. I converted them into tensors using torch.from_numpy(u) and concatenated them into an image tensor:
HR_data = torch.cat((u_tensor, v_tensor, w_tensor), dim=1)
Then I normalized like this: HR_data_norm = (HR_data - HR_data.mean()) / HR_data.std()
and used TensorDataset to put it into a DataLoader.
My question is: should I normalize the numpy arrays before converting them into tensors, or is my method OK?
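For completeness, here is a minimal sketch of the pipeline I described (the shapes are my assumption: each of u, v, w is (N, 1, H, W), so dim=1 is the channel dimension):

import numpy as np
import torch
from torch.utils.data import TensorDataset, DataLoader

# Hypothetical channel arrays of shape (N, 1, H, W) -- assumed shapes
u = np.random.rand(8, 1, 64, 64).astype(np.float32)
v = np.random.rand(8, 1, 64, 64).astype(np.float32)
w = np.random.rand(8, 1, 64, 64).astype(np.float32)

u_tensor = torch.from_numpy(u)
v_tensor = torch.from_numpy(v)
w_tensor = torch.from_numpy(w)

# Concatenate along the channel dimension: (N, 3, H, W)
HR_data = torch.cat((u_tensor, v_tensor, w_tensor), dim=1)

# Normalize with global mean/std over the whole tensor
HR_data_norm = (HR_data - HR_data.mean()) / HR_data.std()

dataset = TensorDataset(HR_data_norm)
loader = DataLoader(dataset, batch_size=4, shuffle=True)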

You shouldn’t see a difference between the numpy and the PyTorch approach; the arithmetic is the same either way.
However, usually you would normalize the data using per-channel statistics, i.e.:

import torch

# Random fake image: 3 channels, 224x224, values in [0, 255]
x = torch.randint(0, 256, (3, 224, 224)).float()
# Subtract each channel's mean and divide by its std
y = (x - x.mean([1, 2], keepdim=True)) / x.std([1, 2], keepdim=True)
print(y.mean(), y.std())
> tensor(-4.6021e-08) tensor(1.0000)
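
Since the title also asks about un-normalizing: if you keep the per-channel statistics around, the transform is trivially invertible. A sketch building on the example above (assuming you saved mean and std from the forward pass):

# Keep the statistics used for normalization
mean = x.mean([1, 2], keepdim=True)
std = x.std([1, 2], keepdim=True)
y = (x - mean) / std

# Invert the transform to recover the original values
x_restored = y * std + mean
print(torch.allclose(x, x_restored))
> True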