Hi everyone,
I have a 4-dimensional tensor of shape (T, C, H, W), where the last two dimensions are the spatial size. What is the easiest way to normalize each image (the last two dimensions) to the range 0 to 1?
If you want to normalize each image in isolation, this code should work:
import torch

N, C, H, W = 2, 3, 5, 5
x = torch.randn(N, C, H, W)

# Flatten the spatial dimensions so min/max are taken per image
tmp = x.view(N, C, -1)
min_vals = tmp.min(2, keepdim=True).values
tmp = tmp - min_vals  # shift each image so its minimum is 0
max_vals = tmp.max(2, keepdim=True).values
tmp = tmp / max_vals  # scale each image so its maximum is 1
x = tmp.view(x.size())

for n in range(N):
    for c in range(C):
        x_ = x[n, c]
        print(n, c, x_.shape, x_.min(), x_.max())
> 0 0 torch.Size([5, 5]) tensor(0.) tensor(1.)
0 1 torch.Size([5, 5]) tensor(0.) tensor(1.)
0 2 torch.Size([5, 5]) tensor(0.) tensor(1.)
1 0 torch.Size([5, 5]) tensor(0.) tensor(1.)
1 1 torch.Size([5, 5]) tensor(0.) tensor(1.)
1 2 torch.Size([5, 5]) tensor(0.) tensor(1.)
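As a side note (not part of the original answer), the same per-image normalization can be written without the intermediate view, since amin/amax accept a tuple of dims in recent PyTorch versions. A minimal sketch:

```python
import torch

x = torch.randn(2, 3, 5, 5)

# Reduce over the spatial dims (H, W) in one call; keepdim=True keeps
# the stats broadcastable against the original (N, C, H, W) tensor.
min_vals = x.amin(dim=(2, 3), keepdim=True)
max_vals = x.amax(dim=(2, 3), keepdim=True)
x_norm = (x - min_vals) / (max_vals - min_vals)

# Every (n, c) image now spans [0, 1]
print(x_norm.amin(dim=(2, 3)))
print(x_norm.amax(dim=(2, 3)))
```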
Thanks! What if I had several of these 4D tensors and wanted to normalize them with dataset-wide statistics instead of per image?
I'm not sure I understand the use case completely. If you want to get the mean and std of the complete data tensor, you could use x.mean(dim=[0, 1], keepdim=True).
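If the goal is still a 0-to-1 range but shared across the whole dataset, one option is to compute a single global min and max over all tensors and apply them to every image. A minimal sketch, assuming a hypothetical list `tensors` standing in for the dataset:

```python
import torch

# Hypothetical dataset: a list of 4D (N, C, H, W) tensors
tensors = [torch.randn(2, 3, 5, 5) for _ in range(4)]

# Shared statistics over the entire dataset rather than per image
global_min = min(t.min() for t in tensors)
global_max = max(t.max() for t in tensors)

# Normalize every tensor with the same dataset-wide min/max
normalized = [(t - global_min) / (global_max - global_min) for t in tensors]
```

With shared statistics the dataset as a whole spans [0, 1], but an individual image generally no longer spans the full range, unlike the per-image version above.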