Hello, I am using OpenCV's capture to grab frames and pass them to a PyTorch model.
Currently I'm preprocessing each frame like this:
import cv2
import PIL.Image
from torchvision import transforms

data_transforms = transforms.Compose(
    [
        transforms.Resize(INPUT_SIZE),
        transforms.CenterCrop(INPUT_SIZE),
        transforms.ToTensor(),
        transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
    ]
)
def preprocess(image):
    image = PIL.Image.fromarray(image)
    image = data_transforms(image)
    image = image.float()
    image = image.unsqueeze(0)  # the ResNet-50 model seems to only accept a 4-D (batched) tensor
    return image
while cap.isOpened():
    ret, frame = cap.read()
    if not ret:  # stop if no frame was returned
        break
    img = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    image_data = preprocess(img).to(DEVICE)
But I'm getting negative values when printing image_data:
tensor([[[[ 0.1426, 0.1426, 0.1426, ..., 0.0227, 0.0227, 0.0227],
[ 0.1426, 0.1426, 0.1426, ..., 0.0227, 0.0227, 0.0227],
[ 0.1426, 0.1426, 0.1426, ..., 0.0227, 0.0227, 0.0227],
...,
[-2.1179, -2.1179, -2.1179, ..., -2.1179, -2.1179, -2.1179],
[-2.1179, -2.1179, -2.1179, ..., -2.1179, -2.1179, -2.1179],
[-2.1179, -2.1179, -2.1179, ..., -2.1179, -2.1179, -2.1179]]
Shouldn't the values of image_data be in the [0, 1] range? Am I missing something?
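For what it's worth, a quick sketch of what the pipeline above does to a single channel: ToTensor scales pixels to [0, 1], and Normalize then computes (x - mean) / std per channel, which can push values below zero. Using the red-channel stats from the Compose above (0.485 and 0.229), a black pixel maps to roughly -2.1179, which matches the value printed in the tensor:

```python
# Reproduce the Normalize arithmetic for the red channel
# (mean/std values taken from the transforms.Compose above).
mean, std = 0.485, 0.229

# ToTensor maps a black pixel to 0.0 and a white pixel to 1.0;
# Normalize then shifts and scales each value.
black = (0.0 - mean) / std
white = (1.0 - mean) / std

print(round(black, 4))  # -2.1179, the negative value seen in the output
print(round(white, 4))
```

So the negative values are expected once Normalize is in the pipeline; the tensor is only in [0, 1] immediately after ToTensor.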