TypeError: Expected input images to be of floating type (in range [0, 1]), but found type torch.uint8 instead
When I was attempting to do this:
import transforms as T

def get_transform(train):
    transform = []
    # converts the image, a PIL image, into a PyTorch Tensor
    transform.append([T.PILToTensor(), T.Normalize()])
    if train:
        # during training, randomly flip the training images
        # and ground-truth for data augmentation
        transform.append(T.RandomHorizontalFlip(0.5))
    return T.Compose(transform)

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights=True)

dataset = four_chs(root='/home/jn/Downloads/', transforms=get_transform(train=True))
data_loader = torch.utils.data.DataLoader(
    dataset, batch_size=1, shuffle=True, num_workers=2,
    collate_fn=utils.collate_fn)

# For Training
images, targets = next(iter(data_loader))
images = list(image for image in images)
targets = [{k: v for k, v in t.items()} for t in targets]
output = model(images, targets)  # Returns losses and detections
I searched around and found a suggestion to use a Normalize transform. I'm not sure how to string together multiple transforms, so I tried the code above, which returns this error:
AttributeError: module 'transforms' has no attribute 'Normalize'
The Normalize transform seems to be defined in torchvision, so I'm not sure why this error happens. How do I address this?
It seems you are importing a custom transforms module rather than torchvision.transforms, and that custom module doesn't define a Normalize transform.
Could you check where this module is defined, and why it is being used there?
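For what it's worth, here is a minimal sketch of get_transform that sidesteps both errors, assuming your transforms module is the one from torchvision's detection references (which provides Compose, PILToTensor, and RandomHorizontalFlip, and in recent versions ConvertImageDtype). Each transform is appended individually rather than as a nested list, and ConvertImageDtype produces float images in [0, 1] in place of Normalize:

import torch
import transforms as T  # detection reference transforms, not torchvision.transforms

def get_transform(train):
    transform = []
    # Append transforms one at a time so Compose receives a flat list of callables
    transform.append(T.PILToTensor())
    # Convert uint8 tensors to float in [0, 1]; assumes ConvertImageDtype is
    # available in this reference transforms module
    transform.append(T.ConvertImageDtype(torch.float))
    if train:
        # Randomly flip images and ground-truth boxes for augmentation
        transform.append(T.RandomHorizontalFlip(0.5))
    return T.Compose(transform)

A separate Normalize step shouldn't be needed here, since fasterrcnn_resnet50_fpn normalizes its inputs internally.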