How can I normalize my input data efficiently using a different mean & stddev for each input feature?

Before I used:

```
transform = transforms.Normalize((single_mean,), (single_std,))
input = transform(torch.tensor(inputs))
```

in my dataloader. If I change it to

```
input = np.divide(np.subtract(inputs, vector_mean), vector_stddev)
input = torch.tensor(input)
```

my dataloader becomes slower. Does anyone have a more efficient way to do this? Thanks!

**Edit**

The slowdown was caused by another change in the code. The line

```
torch.multiprocessing.set_start_method('spawn')
```

was added at the same time, which is what slowed the dataloader down.

By “each input feature” do you mean e.g. each channel?

If so, you could just pass multiple values to `transforms.Normalize`:

```
transform = transforms.Normalize((a, b, c), (d, e, f))
```

Thank you for the reply! By each feature I mean each pixel, as this would be beneficial in my application.

Thanks for clearing it up!

Could you try to use PyTorch methods instead of the numpy ones and see the timing difference?

@ptrblck I realized there is only a slight reduction in speed and edited the question accordingly. I therefore conclude that transforms.Normalize() cannot be used to normalize with per-pixel matrices, so using standard numpy or PyTorch operations is the way to go. Sorry for the confusion.
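For reference, a minimal sketch of the per-pixel normalization in pure PyTorch, using broadcasting instead of the numpy round trip (the shapes and the zero-mean/unit-std statistics here are hypothetical placeholders):

```python
import torch

# Hypothetical example: a batch of 1-channel 4x4 images, with a
# per-pixel mean and stddev assumed to come from the training set.
inputs = torch.randn(8, 1, 4, 4)      # (N, C, H, W)
pixel_mean = torch.zeros(1, 4, 4)     # (C, H, W); broadcasts over the batch dim
pixel_std = torch.ones(1, 4, 4)

# Elementwise normalization via broadcasting; staying on the tensor side
# avoids the extra copy from converting a numpy result with torch.tensor().
normalized = (inputs - pixel_mean) / pixel_std

assert normalized.shape == inputs.shape
```

Because the statistics broadcast over the batch dimension, the same two-line expression works inside a Dataset's `__getitem__` on a single `(C, H, W)` sample as well.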