Hi,
For my medical images, I have to normalize each image's channels one by one, so I wrote this function:
import cv2
import numpy as np

array_list = []

def normalizeImage(path):
    img = cv2.imread(path)
    normImg = np.zeros(img.shape)
    k = 0
    for i in range(img.shape[2]):
        # z-score normalize each channel, skipping constant channels
        if img[:, :, i].std() != 0:
            normImg[:, :, i] = (img[:, :, i] - img[:, :, i].mean()) / img[:, :, i].std()
            k += 1
    # keep the image only if all three channels could be normalized
    if k == 3:
        array_list.append(normImg)
    return normImg
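To sanity-check the per-channel z-score arithmetic, the same loop can be run on a small synthetic array instead of a real file (just a sketch, no cv2 or disk I/O involved):

```python
import numpy as np

# Synthetic 4x4 image with 3 channels standing in for a real file.
img = np.arange(48, dtype=np.float64).reshape(4, 4, 3)

normImg = np.zeros(img.shape)
for i in range(img.shape[2]):
    channel = img[:, :, i]
    # z-score normalize, skipping constant channels
    if channel.std() != 0:
        normImg[:, :, i] = (channel - channel.mean()) / channel.std()

# Each normalized channel should now have mean ~0 and std ~1.
print(normImg[:, :, 0].mean(), normImg[:, :, 0].std())
```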
Then I convert them to tensors:
tensor_x = torch.Tensor(np.array(array_list))
tensor_y = ....
my_dataset = TensorDataset(tensor_x, tensor_y)
my_dataloader = DataLoader(my_dataset, batch_size=16)
However, I couldn’t continue after that. I have three directories:
train: wound/ no_wound
test
val
I am open to applying new methods. How can I do that for the whole dataset?
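For context, here is a minimal sketch of the direction I was considering: a custom `Dataset` that applies the per-channel normalization on the fly (the class name `WoundDataset` is a placeholder, and I pass in-memory arrays instead of reading the directories, so the sketch runs without any files):

```python
import numpy as np
import torch
from torch.utils.data import Dataset, DataLoader

class WoundDataset(Dataset):
    """Placeholder sketch: in practice the (image, label) pairs would be
    gathered from a split folder (e.g. train/) whose subfolders
    (wound/, no_wound/) name the classes, via pathlib + cv2.imread."""

    def __init__(self, samples):
        # samples: list of (H x W x C numpy array, int label) pairs
        self.samples = samples

    def __len__(self):
        return len(self.samples)

    def __getitem__(self, idx):
        img, label = self.samples[idx]
        norm = np.zeros(img.shape, dtype=np.float32)
        for c in range(img.shape[2]):
            ch = img[:, :, c]
            # z-score normalize each channel, skipping constant ones
            if ch.std() != 0:
                norm[:, :, c] = (ch - ch.mean()) / ch.std()
        # HWC -> CHW, the layout PyTorch conv layers expect
        tensor = torch.from_numpy(norm).permute(2, 0, 1)
        return tensor, label

# Tiny synthetic example: two fake 8x8 RGB images with labels 0 and 1.
fake = [(np.random.rand(8, 8, 3), 0), (np.random.rand(8, 8, 3), 1)]
loader = DataLoader(WoundDataset(fake), batch_size=2)
batch_x, batch_y = next(iter(loader))
print(batch_x.shape)
```

I am not sure whether normalizing inside `__getitem__` like this is the right way, or whether everything should be precomputed before building the DataLoader.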