I have a tensor of shape N×3×128×128 that holds a batch of images with values in [-1, 1]. Now I want to feed it to a pretrained VGG16 to extract features. However, VGG16 expects inputs in the range [0, 1], spatial size of at least 224×224, and normalization with mean=[0.485, 0.456, 0.406] and std=[0.229, 0.224, 0.225].

How can I do the above on a whole batch (especially the normalization with the mean and std)? Thanks!

```
import numpy as np

def rescale(x, max_range, min_range):
    # Linearly map the values of x into [min_range, max_range]
    max_val = np.max(x)
    min_val = np.min(x)
    return (max_range - min_range) / (max_val - min_val) * (x - max_val) + max_range
```

x = (rescale(x, 1, 0) - mean) / std, but do it per channel.
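A minimal NumPy sketch of that one-liner, assuming `x` is an N×3×H×W array in [-1, 1] (the batch size 4 here is just for illustration); reshaping the mean and std to (1, 3, 1, 1) makes them broadcast per channel:

```python
import numpy as np

def rescale(x, max_range, min_range):
    # Linearly map the values of x into [min_range, max_range]
    max_val = np.max(x)
    min_val = np.min(x)
    return (max_range - min_range) / (max_val - min_val) * (x - max_val) + max_range

# Hypothetical batch: 4 images, 3 channels, 128x128, values in [-1, 1]
x = np.random.uniform(-1.0, 1.0, size=(4, 3, 128, 128))

# ImageNet statistics, reshaped so they broadcast over N, H, W
mean = np.array([0.485, 0.456, 0.406]).reshape(1, 3, 1, 1)
std = np.array([0.229, 0.224, 0.225]).reshape(1, 3, 1, 1)

# Rescale to [0, 1], then normalize each channel in one expression
x = (rescale(x, 1, 0) - mean) / std
```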

Shouldn’t it be (x - **min_val**)?

If you have an image of size 128×128 and want to resize it to 224×224, import `Image` and `ImageOps` from PIL:

img = Image.open(path).convert("RGB")

img = ImageOps.fit(img, (224, 224), Image.LANCZOS)

Assuming you have done all this and finally got the tensor to 3×224×224, before loading it as batches you could do:

(Tensor + 1.) / 2.

This should get the tensor in [0,1] range.
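A quick sanity check that the shift-and-halve `(x + 1.) / 2.` maps [-1, 1] onto [0, 1] (NumPy standing in for torch, since the arithmetic is elementwise either way):

```python
import numpy as np

# Values spanning the full [-1, 1] input range
x = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])

# Shift to [0, 2], then halve to land in [0, 1]
y = (x + 1.0) / 2.0
# -> [0.  0.25 0.5  0.75 1.  ]
```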

For each channel of the N×3×128×128 tensor, there is an N×128×128 slice. How can I normalize each channel with the corresponding mean=[0.485, 0.456, 0.406] and std=[0.229, 0.224, 0.225] without processing the images one by one? Thanks.

Thanks. The input tensor is an intermediate variable rather than an image read in by PIL. I want to resize and normalize it directly in its N×3×128×128 shape.
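One way to normalize the whole batch at once, sketched here in NumPy (in PyTorch the same broadcasting rules apply, and `torch.nn.functional.interpolate` can do the 128→224 resize on the tensor directly): giving `mean` and `std` shape (1, 3, 1, 1) lets a single expression cover every channel of every image, with no loop.

```python
import numpy as np

# Hypothetical intermediate batch, N x 3 x 128 x 128, values already in [0, 1]
batch = np.random.uniform(0.0, 1.0, size=(8, 3, 128, 128))

mean = np.array([0.485, 0.456, 0.406]).reshape(1, 3, 1, 1)
std = np.array([0.229, 0.224, 0.225]).reshape(1, 3, 1, 1)

# Broadcasting expands the (1, 3, 1, 1) arrays across N, H, W automatically,
# so every channel is shifted and scaled by its own statistics
normalized = (batch - mean) / std
```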

What I don't understand is why you are so specific about those mean and std values. I think the mean and std should come from your own images; they are used to bring the input to zero mean and unit standard deviation.