 # Normalization [0,1] during training

Hi,
I need to perform a [0, 1] normalization over each channel of a tensor [shape(BxCxWxH)] as part of the model, and I wrote this code:

```python
import torch

def normalize_channels(_x, every=True):
    out = torch.ones_like(_x)
    for i in range(_x.shape[0]):          # loop over the batch
        if every:
            for j in range(_x.shape[1]):  # loop over the channels
                min_ = _x[i][j].min()
                max_ = _x[i][j].max()
                out[i][j].copy_((_x[i][j] - min_) * (1 / (max_ - min_)))
        else:
            min_ = _x[i].min()
            max_ = _x[i].max()
            out[i].copy_((_x[i] - min_) * (1 / (max_ - min_)))
    return out
```

but the computational time increases too much. Does anyone have ideas on how to improve the performance and remove the indexing?

Hello Maluma!

As a general rule, computations on tensors in pytorch run faster if
you use built-in tensor operations rather than looping over indices.

Try running torch.flatten() on the last two dimensions of your tensor
to get a tensor of shape(BxCxL), where L = W*H.

Then do your computations with things like torch.max(), using its
`dim` argument to specify that you want to take the `max` over your
tensor’s last dimension (W*H).
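For example, here is a minimal sketch of the idea (the shapes are just illustrative):

```python
import torch

x = torch.rand(2, 3, 4, 5)            # (B, C, W, H)
flat = torch.flatten(x, start_dim=2)  # (B, C, W*H)

# Note: min()/max() with a dim argument return (values, indices)
# namedtuples, so take .values for the actual minima / maxima.
mins = flat.min(dim=-1).values        # (B, C)
maxs = flat.max(dim=-1).values        # (B, C)
print(mins.shape)                     # torch.Size([2, 3])
```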

Give it a try, and if you have issues, post your code (working or
not) together with any errors.

(If you do get this working, it might be nice to post a follow-up with
before-and-after timings so we can see how much it helped.)

Good luck.

K. Frank

Thanks for the tip. I wrote this code:

```python
import torch

def normalize_channels(_x, inplace=False):
    # Flatten the spatial dims so min/max reduce over W*H in one call.
    tmp = torch.flatten(_x, start_dim=2)            # (B, C, W*H)
    # min()/max() with dim return (values, indices) namedtuples.
    _min = tmp.min(dim=-1).values[..., None, None]  # (B, C, 1, 1)
    _max = tmp.max(dim=-1).values[..., None, None]  # (B, C, 1, 1)
    if inplace:
        return _x.sub_(_min).div_(_max - _min)
    return (_x - _min) / (_max - _min)
```
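As a sanity check, here is a self-contained sketch comparing the looped and the vectorized per-channel normalization (shapes and names are illustrative):

```python
import torch

x = torch.rand(2, 3, 4, 5)  # (B, C, W, H)

# Loop version: normalize each (W, H) slice to [0, 1] independently.
loop_out = torch.empty_like(x)
for b in range(x.shape[0]):
    for c in range(x.shape[1]):
        mn, mx = x[b, c].min(), x[b, c].max()
        loop_out[b, c] = (x[b, c] - mn) / (mx - mn)

# Vectorized version: flatten the spatial dims, reduce over the last dim.
flat = torch.flatten(x, start_dim=2)
mn = flat.min(dim=-1).values[..., None, None]
mx = flat.max(dim=-1).values[..., None, None]
vec_out = (x - mn) / (mx - mn)

print(torch.allclose(loop_out, vec_out))  # True
```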