I tried the suggestions from previous discussions, but I need to transform both the keypoint values and the photo to the same size, like in the tutorial. After converting a sample dict, I cannot extract the data to examine it, and I cannot feed it into the model. Where exactly is the problem?
My transform code:

```python
transform = transforms.Compose([
    transforms.ToPILImage(),
    transforms.Resize(256),
    transforms.RandomCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])
```
```python
transformedDataset = keypointData(csv_path=keypoint_path_training,
                                  image_path=image_train,
                                  transform=transform)
print('Number of transformed images: ', len(transformedDataset))

# hold out 20% of the samples for validation
valid = int(2 * len(transformedDataset) / 10)
train_set, val_set = torch.utils.data.random_split(
    transformedDataset, [len(transformedDataset) - valid, valid])

train_set = torch.utils.data.DataLoader(train_set, batch_size=16,
                                        shuffle=True, num_workers=0)
val_set = torch.utils.data.DataLoader(val_set, batch_size=16,
                                      shuffle=True, num_workers=0)

inputsTrain = next(iter(train_set))
inputsValid = next(iter(val_set))
```
The error I get:

```
pic should be Tensor or ndarray. Got <class 'dict'>.
```
I also tried converting the sample to a NumPy array first, but that does not run either.