torchvision.transforms.ColorJitter reports "image has wrong mode"

bb = PIL.Image.fromarray(np.squeeze(temp))
torchvision.transforms.ColorJitter(brightness=0.4, saturation=0.4, contrast=0.4, hue=0.4)(bb)

The dtype and shape of temp are as follows:
temp.dtype, temp.shape
(dtype('float32'), (65, 65, 1))
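
For reference, a self-contained version of the snippet (the contents of temp below are a random stand-in for the real data):

import numpy as np
import PIL.Image
import torchvision

temp = np.random.rand(65, 65, 1).astype(np.float32)  # stand-in for the real data

bb = PIL.Image.fromarray(np.squeeze(temp))  # float32 array -> 'F' mode image
torchvision.transforms.ColorJitter(brightness=0.4, saturation=0.4, contrast=0.4, hue=0.4)(bb)  # raises the error below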

ValueError                                Traceback (most recent call last)
in <module>()
----> 1 torchvision.transforms.ColorJitter(brightness=0.4,saturation=0.4,contrast=0.4,hue=0.4)(bb)

~/anaconda3/envs/pytorch4/lib/python3.6/site-packages/torchvision-0.2.1-py3.6.egg/torchvision/transforms/transforms.py in __call__(self, img)
    754         transform = self.get_params(self.brightness, self.contrast,
    755                                     self.saturation, self.hue)
--> 756         return transform(img)
    757
    758     def __repr__(self):

~/anaconda3/envs/pytorch4/lib/python3.6/site-packages/torchvision-0.2.1-py3.6.egg/torchvision/transforms/transforms.py in __call__(self, img)
     47     def __call__(self, img):
     48         for t in self.transforms:
---> 49             img = t(img)
     50         return img
     51

~/anaconda3/envs/pytorch4/lib/python3.6/site-packages/torchvision-0.2.1-py3.6.egg/torchvision/transforms/transforms.py in __call__(self, img)
    281
    282     def __call__(self, img):
--> 283         return self.lambd(img)
    284
    285     def __repr__(self):

~/anaconda3/envs/pytorch4/lib/python3.6/site-packages/torchvision-0.2.1-py3.6.egg/torchvision/transforms/transforms.py in <lambda>(img)
    725     if brightness > 0:
    726         brightness_factor = random.uniform(max(0, 1 - brightness), 1 + brightness)
--> 727         transforms.append(Lambda(lambda img: F.adjust_brightness(img, brightness_factor)))
    728
    729     if contrast > 0:

~/anaconda3/envs/pytorch4/lib/python3.6/site-packages/torchvision-0.2.1-py3.6.egg/torchvision/transforms/functional.py in adjust_brightness(img, brightness_factor)
    448
    449     enhancer = ImageEnhance.Brightness(img)
--> 450     img = enhancer.enhance(brightness_factor)
    451     return img
    452

~/anaconda3/envs/pytorch4/lib/python3.6/site-packages/PIL/ImageEnhance.py in enhance(self, factor)
     35         :rtype: :py:class:`~PIL.Image.Image`
     36         """
---> 37         return Image.blend(self.degenerate, self.image, factor)
     38
     39

~/anaconda3/envs/pytorch4/lib/python3.6/site-packages/PIL/Image.py in blend(im1, im2, alpha)
   2629     im1.load()
   2630     im2.load()
-> 2631     return im1._new(core.blend(im1.im, im2.im, alpha))
   2632
   2633

ValueError: image has wrong mode

Currently you are passing an image in 'F' mode, i.e. as float32.
Unfortunately, it seems PIL.Image.blend doesn't work with this kind of image.
Could you convert your image to another format, e.g. L, or would this destroy your data?
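
A minimal sketch of such a conversion, assuming the float values can simply be rescaled to [0, 255] (the min-max scaling here is my own choice, not part of torchvision or PIL):

import numpy as np
import PIL.Image

temp = np.random.rand(65, 65, 1).astype(np.float32)  # stand-in for the real data
arr = np.squeeze(temp)
arr_u8 = ((arr - arr.min()) / (arr.max() - arr.min() + 1e-8) * 255).astype(np.uint8)
bb_l = PIL.Image.fromarray(arr_u8)  # uint8 array -> 'L' mode image, accepted by Image.blend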

Can ColorJitter process images with only one channel, i.e. n*n*1? I guess the error occurs because the image has only one channel, not three or four.

I thought the same, but apparently PIL has no problem with grayscale images.
It seems images in L mode (uint8 grayscale) work fine, while F mode raises the error in Image.blend.

I agree with you. So ColorJitter cannot be applied to images in F mode. My pixel values are decimals, not integers, so I cannot convert the image to 'L' mode.

In that case you could implement the contrast and brightness manipulation manually. Saturation and hue manipulations don't really make sense for a grayscale image.

The brightness should be manipulated additively, while the contrast should be changed multiplicatively. Your new image should therefore be calculated as:

image = image * contrast_factor + brightness_factor
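A minimal sketch of such a manual jitter, assuming the image stays a float32 NumPy array; the helper name and the sampling ranges below are my own choices, not torchvision API:

import random
import numpy as np

def random_brightness_contrast(img, brightness=0.4, contrast=0.4):
    # contrast is applied multiplicatively, brightness additively, as described above
    contrast_factor = random.uniform(max(0.0, 1.0 - contrast), 1.0 + contrast)
    brightness_factor = random.uniform(-brightness, brightness)
    return img * contrast_factor + brightness_factor

temp = np.random.rand(65, 65, 1).astype(np.float32)  # stand-in for the real data
jittered = random_brightness_contrast(np.squeeze(temp))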

Is it possible to change the brightness of all images with one fixed value?
I want to augment the whole dataset in __getitem__ of the dataloader, but ColorJitter works randomly.

You could use the functional API and call torchvision.transforms.functional.adjust_brightness on each image.
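
For example (the image below and the factor of 1.2 are just placeholders):

import numpy as np
import PIL.Image
import torchvision.transforms.functional as TF

img = PIL.Image.fromarray(np.full((65, 65), 128, dtype=np.uint8))  # stand-in 'L' mode image
img_aug = TF.adjust_brightness(img, 1.2)  # same fixed factor for every image, no randomness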

Thank you. Is there a way to apply the augmentation on the torch tensor itself?
I used:
im_orig_tosave = Image.fromarray(np.uint8(padding_data))
enhancer = ImageEnhance.Brightness(im_orig_tosave)
enhanced_im = enhancer.enhance(0.6)
enhanced_im = np.minimum(np.array(enhanced_im), 255.)  # back to a NumPy array, clipped
padding_data = enhanced_im
# PIL image to tensor!
I need to convert it back to a tensor afterwards,
so I asked if I can directly modify the tensor in order to change the brightness.

If I recall correctly, the brightness is changed by adding a constant to the image (and clipping, e.g. to the uint8 range).
If you have a tensor in the right range, image_tensor + offset should do it.
(The contrast should be manipulated with a scalar multiplication.)
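
A minimal sketch of that on a tensor, assuming values in the [0, 255] range (both factors are arbitrary example values):

import torch

image_tensor = torch.rand(1, 65, 65) * 255  # stand-in for the real data
offset = 30.0           # additive brightness change
contrast_factor = 1.2   # multiplicative contrast change
adjusted = (image_tensor * contrast_factor + offset).clamp(0, 255)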

I found that torchvision.transforms.ColorJitter only accepts RGB images and not single-channel (L mode) images.
Is there any other way to increase the contrast of a grayscale image?
Thanks.

If my previous assumption is correct, you could multiply the grayscale image with a scalar factor to change the brightness (and clip it afterwards, if needed).
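
For instance, assuming the grayscale image is a float array in [0, 1] (the factor of 1.5 is an arbitrary example):

import numpy as np

gray = np.random.rand(65, 65).astype(np.float32)  # stand-in grayscale image in [0, 1]
enhanced = np.clip(gray * 1.5, 0.0, 1.0)          # scale, then clip to stay in range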