I'm trying to resize my input images. They are pretty big (5312x2988), so I want to shrink them down.
This is my code. It’s based on the code in a tutorial:
import transforms as T

def get_transform(train):
    transforms = []
    # converts the image, a PIL image, into a PyTorch Tensor
    transforms.append(T.ToTensor())
    transforms.append(T.Resize((400*5312/2988, 400)))  # <-- this is where I added T.Resize()
    if train:
        # during training, randomly flip the training images
        # and ground-truth for data augmentation
        transforms.append(T.RandomHorizontalFlip(0.5))
    return T.Compose(transforms)
This is the error message:
AttributeError                            Traceback (most recent call last)
in ()
      1 model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)
----> 2 dataset = four_chs(root='/content/drive/MyDrive/four_chambers', transforms=get_transform(train=True))
      3 data_loader = torch.utils.data.DataLoader(
      4     dataset, batch_size=1, shuffle=True, num_workers=2,
      5     collate_fn=utils.collate_fn)

in get_transform(train)
      8     # converts the image, a PIL image, into a PyTorch Tensor
      9     transforms.append(T.ToTensor())
---> 10     transforms.append(T.Resize((360,360)))
     11     if train:
     12         # during training, randomly flip the training images

AttributeError: module 'transforms' has no attribute 'Resize'
I suppose I've coded transforms.Resize incorrectly if it's being read as a missing attribute? I wrote that line the way I did because it seems to follow the same format as T.ToTensor(), and I don't see why ToTensor works but Resize doesn't. How should I write the transforms chunk so that Resize gets applied to the tensors?
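In case it helps show what I'm after, here is a small standalone sketch I put together. Everything in it is just for illustration: the dummy tensor, the TV alias, and the target size (711, 400), which is roughly 400 * 5312 / 2988 rounded to an int. I'm also assuming a torchvision version recent enough that torchvision.transforms.Resize accepts tensors, and I'm using torchvision's own transforms module rather than the tutorial's transforms.py, since that file doesn't seem to define Resize.

import torch
import torchvision.transforms as TV  # torchvision's built-in transforms, not the tutorial's transforms.py

# dummy tensor shaped like one of my photos after ToTensor(): C x H x W
img = torch.rand(3, 5312, 2988)

# Resize takes (height, width) as ints; 400 * 5312 / 2988 is about 711
resize = TV.Resize((711, 400))
small = resize(img)
print(small.shape)  # expect torch.Size([3, 711, 400])

Is something like this the right direction? And if so, where does it belong, given that (as far as I understand) the tutorial's transforms are called with (image, target) pairs rather than the image alone?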