@ptrblck, below is the relevant part of my dataset class code:
```python
for i, f in enumerate(files):
    print("file_name", f)
    nib_file = nib.load(os.path.join(images_path, f))
    img = nib_file.get_fdata('unchanged', dtype=np.float32)
    lbl = nib.load(os.path.join(data_path, 'Silver-standard', self.folder,
                                f[:-7] + '_ss.nii.gz')).get_fdata('unchanged', dtype=np.float32)
    if self.scale:
        transformed = scaler.fit_transform(np.reshape(img, (-1, 1)))
        self.img = np.reshape(transformed, img.shape)
    if not self.sagittal:
        self.img = np.moveaxis(img, -1, 0)
    if self.rotate:
        self.img = np.rot90(img, axes=(1, 2))
    if img.shape[1] != img.shape[2]:
        self.img = self.pad_image(img)
    if not self.sagittal:
        self.lbl = np.moveaxis(lbl, -1, 0)
    if self.rotate:
        self.lbl = np.rot90(lbl, axes=(1, 2))
    if lbl.shape[1] != lbl.shape[2]:
        self.lbl = self.pad_image(lbl)
    spacing = [nib_file.header.get_zooms()] * img.shape[0]
    self.voxel_dim = spacing
    if i == 1:  # done only to understand the shapes
        break

def __getitem__(self, idx):
    data = torch.from_numpy(self.img[idx])
    labels = torch.from_numpy(self.lbl[idx])
    return data, labels
```
Now, to verify that the data is loaded as desired, I am trying to check the dimensions of the 3D image at the first index of train_data as follows:
```python
train_data = cc359_volume(config)
print("shape", train_data[0][1].shape)
```
I expected it to give (200, 256, 256), as nib_file_dim was (200, 256, 256), but it gives the shape:

```
shape torch.Size([256, 256])
```
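This seems consistent with how indexing along the first axis of a 3D array works in NumPy/PyTorch (a quick standalone check with a dummy array of the same shape, not my actual data):

```python
import numpy as np
import torch

# dummy volume with the shape nibabel reports for one file
vol = np.zeros((200, 256, 256), dtype=np.float32)

# indexing the first axis returns a single 2D slice, not the whole volume
print(vol[0].shape)                    # (256, 256)
print(torch.from_numpy(vol[0]).shape)  # torch.Size([256, 256])
```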
That means the dataset class is not returning the 3D volume as one entity; rather, it is giving back one slice as an individual entity. I also checked the dimensions in the loader as follows:
```python
train_features, train_labels = next(iter(train_loader))
print(f"Feature batch shape: {train_features.size()}")
print(f"Labels batch shape: {train_labels.size()}")
```
It gives the following:
```
Feature batch shape: torch.Size([2, 256, 256])
Labels batch shape: torch.Size([2, 256, 256])
```
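If I understand the DataLoader correctly, it just stacks batch_size items coming out of __getitem__, so two 2D slices give [2, 256, 256]; a toy check with dummy tensors (not my actual dataset) reproduces exactly what I see:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# dummy dataset whose items are single 2D slices, like my current __getitem__
slices = torch.zeros(200, 256, 256)
masks = torch.zeros(200, 256, 256)
loader = DataLoader(TensorDataset(slices, masks), batch_size=2)

feats, lbls = next(iter(loader))
print(feats.shape)  # torch.Size([2, 256, 256]) -- same as my output above
print(lbls.shape)   # torch.Size([2, 256, 256])
```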
How can I get the dataset to return the image as a whole?
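In other words, I'd like indexing to select a file/volume rather than a slice, roughly like this hypothetical sketch (dummy data, just to illustrate the shapes I'm after, not my actual class):

```python
import torch
from torch.utils.data import Dataset

class VolumeDatasetSketch(Dataset):
    """Hypothetical: stores one 3D volume per file and returns it whole."""
    def __init__(self):
        # pretend two files were loaded, each a (200, 256, 256) volume + mask
        self.volumes = [torch.zeros(200, 256, 256) for _ in range(2)]
        self.masks = [torch.zeros(200, 256, 256) for _ in range(2)]

    def __len__(self):
        return len(self.volumes)

    def __getitem__(self, idx):
        # idx picks a whole volume, not a slice
        return self.volumes[idx], self.masks[idx]

sketch = VolumeDatasetSketch()
print(sketch[0][1].shape)  # torch.Size([200, 256, 256]) -- what I expected
```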