Adjusting contrast doesn't seem to work

Hi All,

I’m new to PyTorch and trying to get into it, currently focusing on vision. I’ve been trying to add augmented images to my dataset, but I can’t get contrast adjustment to work properly. I struggled for a while with my grayscale images being completely changed by any contrast or brightness transformation, until I figured out that I had to apply those transformations before the other ones (why?).
I can now adjust the brightness using ColorJitter, but applying the contrast transformation seems to do nothing.
I’m also seeing a strange result: when I apply adjust_contrast directly to the image, I get the following error:
Dimension out of range (expected to be in range of [-2, 1], but got -3)
Adjust brightness works (but completely changes the image, as before), so I’m really confused about this.
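For context on that error: with negative indexing, a 2D tensor only has dimensions -2..1, so any operation that reduces over dim -3 (e.g. a per-channel mean over a [C, H, W] image) fails on a plain [H, W] input. A minimal NumPy sketch of the same indexing rule (NumPy here just to mirror the tensor semantics):

```python
import numpy as np

img_2d = np.zeros((256, 256))     # [H, W]: no channel dimension
img_3d = np.zeros((1, 256, 256))  # [C, H, W]

# Reducing over axis -3 only works when a channel axis exists.
print(img_3d.mean(axis=-3).shape)  # (256, 256)

try:
    img_2d.mean(axis=-3)  # only axes -2..1 are valid for a 2D array
except IndexError as e:   # NumPy's AxisError is a subclass of IndexError
    print("error:", e)
```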

Here’s some code to showcase the issue:

import torch
from torch.utils.data import DataLoader
from torchvision import datasets, transforms
import matplotlib.pyplot as plt

# data_path is a pathlib.Path pointing at the dataset root
mean = [0.5]
std = [0.2]
image_size = 256

data_transforms = transforms.Compose([transforms.Grayscale(num_output_channels=1),
                                      transforms.Resize([image_size, image_size]),
                                      transforms.ToTensor(),
                                      transforms.Normalize(mean, std)])
                                        
ds_no_aug = datasets.ImageFolder(data_path / 'val', data_transforms)
dl_no_aug = DataLoader(ds_no_aug, batch_size=1, shuffle=False, num_workers=4)
ds_aug = datasets.ImageFolder(data_path / 'val',
                              transforms.Compose([transforms.ColorJitter(contrast=0.7), data_transforms]))
dl_aug = DataLoader(ds_aug, batch_size=1, shuffle=False, num_workers=4)
example_no_aug = iter(dl_no_aug)
sample_no_aug,_ = next(example_no_aug)
example_aug = iter(dl_aug)
sample_aug, _ = next(example_aug)

im_no_aug = sample_no_aug[0][0]  # batch_size=1, so take the only sample's single channel
im_aug = sample_aug[0][0]

plt.subplot(1, 2, 1)
plt.imshow(im_no_aug, cmap='gray')

plt.subplot(1, 2, 2)
plt.imshow(im_aug, cmap='gray')
plt.show()

And here’s the output:
[output image: the two plots side by side]

Any ideas as to what I’m doing wrong will be very welcome!

Do you mean that literally nothing happens, or that the change does not appear to be visually significant? It does look like something is being altered in the example image, and I would expect the output values to be different. Additionally, since ColorJitter draws a random factor, the results will differ from iteration to iteration.
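To expand on the randomness: per the torchvision docs, ColorJitter(contrast=0.7) draws a new factor uniformly from [max(0, 1 - 0.7), 1 + 0.7] = [0.3, 1.7] on every call, and a draw near 1.0 leaves the image almost unchanged. A quick sketch of the sampling:

```python
import random

contrast = 0.7
# Documented ColorJitter behavior: factor ~ Uniform(max(0, 1 - contrast), 1 + contrast)
lo, hi = max(0.0, 1.0 - contrast), 1.0 + contrast

factors = [random.uniform(lo, hi) for _ in range(5)]
print(factors)  # a factor close to 1.0 produces almost no visible change
```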

However, the out-of-range dimension error does appear strange; I would double-check the shape of the tensor you are passing to the adjust_contrast function (it expects a 3D [C, H, W] tensor, and the error suggests yours may have a different number of dimensions).

Sorry, you’re right: I previously ran it with higher contrast values and saw no change, but I guess the RNG happened to draw factors that made very small changes. Also, the same amount of change seems to be less substantial for contrast than for brightness, so this value only makes a small visible difference.
The image is 3D, I checked that (and adjust_brightness does work on it).
Why is the brightness/contrast adjustment so different when applied to normalized versus non-normalized images?

If you are passing an image whose values are float (or any non-integral type), then many functions that assume a maximum value, including the contrast adjustment, will break if the image’s range is not normalized. Unlike e.g. 8-bit RGB, there is no inherent maximum value for float images; they are assumed to lie in [0, 1] (see vision/functional_tensor.py at 135a0f9ea9841b6324b4fe8974e2543cbb95709a · pytorch/vision · GitHub ), so an input containing values above the expected maximum of 1 (or below 0) will likely lead to unexpected results.
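As a sketch of why, using NumPy and assuming the usual blend-with-the-mean definition of contrast adjustment (result = factor * img + (1 - factor) * mean, clamped to [0, 1] for float images, which is a simplified model of the linked torchvision code, not the exact implementation):

```python
import numpy as np

def adjust_contrast(img, factor):
    # Blend the image with its mean, then clamp to the assumed float range [0, 1].
    blended = factor * img + (1.0 - factor) * img.mean()
    return np.clip(blended, 0.0, 1.0)

rng = np.random.default_rng(0)
raw = rng.uniform(0.0, 1.0, size=(4, 4))  # in-range float image
normed = (raw - 0.5) / 0.2                # after Normalize(0.5, 0.2): roughly [-2.5, 2.5]

print(adjust_contrast(raw, 1.5))     # stays inside [0, 1], contrast visibly increased
print(adjust_contrast(normed, 1.5))  # most pixels saturate at 0.0 or 1.0
```

The normalized image already sits far outside [0, 1], so the final clamp destroys most of the information, which matches the washed-out result you see when the jitter runs after Normalize.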

But the contrast and brightness adjustments break AFTER normalization, not before. That’s what I don’t understand.
Here’s what it looks like if I switch the order of the transforms so that the contrast comes last (the result is the same no matter what contrast value I use; this one is with contrast=0.00001):
[output image: the contrast-last result]

If the values are saturating, it might make sense to successively remove transformations and check if/when the values are being pushed outside of [0, 1].
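A sketch of that check, with NumPy stand-ins for the actual transforms (with your real pipeline you would put each torchvision transform in the list instead of these hypothetical lambdas):

```python
import numpy as np

def report_ranges(x, steps):
    """Apply each named step in turn and flag where values leave [0, 1]."""
    for name, fn in steps:
        x = fn(x)
        lo, hi = float(np.min(x)), float(np.max(x))
        flag = "" if (0.0 <= lo and hi <= 1.0) else "  <-- outside [0, 1]"
        print(f"{name:<10} min={lo:+.3f} max={hi:+.3f}{flag}")
    return x

img = np.random.default_rng(0).uniform(0.0, 1.0, size=(256, 256))
steps = [
    ("to_float", lambda x: x),                 # stand-in for ToTensor
    ("normalize", lambda x: (x - 0.5) / 0.2),  # stand-in for Normalize([0.5], [0.2])
]
result = report_ranges(img, steps)
```

With mean 0.5 and std 0.2, Normalize alone maps [0, 1] to [-2.5, 2.5], so the range check should flag that step immediately.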