Max value for contrast in PyTorch transform

Hi,
What is the maximum value for contrast in the transforms?

Based on the docs for transforms.functional.adjust_contrast:

  • contrast_factor (float) – How much to adjust the contrast. Can be any non negative number. 0 gives a solid gray image, 1 gives the original image while 2 increases the contrast by a factor of 2.
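
For illustration, here is a minimal sketch (not from the docs) of what those factor values mean, assuming a torchvision version whose functional transforms accept tensor images:

import torch
import torchvision.transforms.functional as TF

img = torch.rand(3, 224, 224)          # dummy RGB image with values in [0, 1]

gray = TF.adjust_contrast(img, 0.0)    # factor 0 -> solid gray image
same = TF.adjust_contrast(img, 1.0)    # factor 1 -> unchanged image
more = TF.adjust_contrast(img, 2.0)    # factor 2 -> contrast increased by 2x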

Hi, thanks for the response :pray: :pray:
I have a CNN for color image classification. The model's accuracy was very low, but after changing the contrast it improved.
The values I used for ColorJitter are:
transforms.ColorJitter(brightness=(1.5), contrast=(0.9), saturation=(1.5), hue=(-0.1,0.1))
Do the values have to be between 0 and 1?
Are the values I chose correct?
I do not fully understand the explanation in the documentation. Thanks for your guidance.

No, the values can be in the range [0, +Inf) (of course, a very high value will distort the image at some point).
To lower the contrast, use values in [0, 1]; to increase it, use values > 1.
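
As a side note (my own sketch, not part of the original reply): passing a single float v to ColorJitter is interpreted as the range [max(0, 1 - v), 1 + v], while a (min, max) tuple fixes the range explicitly; note that (1.5) in Python is just the float 1.5, not a tuple. Something like the following, with purely illustrative values:

from torchvision import transforms

jitter = transforms.ColorJitter(
    brightness=(0.5, 1.5),   # sample a brightness factor from [0.5, 1.5]
    contrast=(0.9, 1.1),     # factors < 1 lower contrast, > 1 raise it
    saturation=(0.5, 1.5),
    hue=(-0.1, 0.1),         # hue offsets must stay within [-0.5, 0.5]
)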

Thanks.
I have another problem: when I run the code on the validation data, it fails with the following error at the end of the first epoch.
/usr/local/lib/python3.7/dist-packages/ipykernel_launcher.py:166: UserWarning: Implicit dimension choice for softmax has been deprecated. Change the call to include dim=X as an argument.
Epoch [1/10], Step [50/220], Loss: 0.6466
Epoch [1/10], Step [100/220], Loss: 0.6466
Epoch [1/10], Step [150/220], Loss: 0.5845
Epoch [1/10], Step [200/220], Loss: 1.0444

RuntimeError Traceback (most recent call last)
in ()
21
22 # forward pass
---> 23 outputs = model(images)
24 loss = criterion(outputs, labels)
25

1 frames
in forward(self, x)
173 # print(main_out.shape)
174 # print(Attentiom_map.shape)
--> 175 out1 = torch.matmul(main_out, Attentiom_map)
176 max_pool1 = self.pool(torch.add(main_out, out1))
177 # print(max_pool1.shape)

RuntimeError: The size of tensor a (2) must match the size of tensor b (3) at non-singleton dimension 0

Based on the error message it seems that main_out and Attentiom_map have shapes that are incompatible for the torch.matmul operation.
In your code you've already added print statements for the shapes, so you could (re-)use them to make sure both tensors have the expected shape.
Here is a small example of your error:

import torch

# works: both tensors have the same batch size (2)
a, b = torch.randn(2, 10, 20), torch.randn(2, 20, 10)
c = torch.matmul(a, b)

# raises your error: the batch sizes differ (2 vs. 3)
a, b = torch.randn(2, 10, 20), torch.randn(3, 20, 10)
c = torch.matmul(a, b)
> RuntimeError: The size of tensor a (2) must match the size of tensor b (3) at non-singleton dimension 0

Yes, I printed the tensors. In one of the steps the first dimension (the batch) becomes 2, but I do not know why this happens.
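
A likely explanation (an assumption, since the DataLoader setup is not shown in the thread): if the number of validation samples is not divisible by the batch size, the final batch of a torch.utils.data.DataLoader is simply smaller. A minimal sketch of that behaviour:

import torch
from torch.utils.data import DataLoader, TensorDataset

# 11 samples with batch_size=3 -> three full batches and a final batch of 2
dataset = TensorDataset(torch.randn(11, 3, 8, 8))
loader = DataLoader(dataset, batch_size=3)
print([batch[0].size(0) for batch in loader])    # [3, 3, 3, 2]

# drop_last=True discards the incomplete final batch instead
loader = DataLoader(dataset, batch_size=3, drop_last=True)
print([batch[0].size(0) for batch in loader])    # [3, 3, 3]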

I made them the same size, but in the last step the batch of main_out becomes 2 while the batch of Attentiom_map stays 3.
I do not know how to resize them.

torch.Size([3, 64, 349, 349])
torch.Size([3, 64, 199, 349])
torch.Size([3, 64, 349, 349])
torch.Size([3, 64, 199, 349])
torch.Size([3, 64, 349, 349])
torch.Size([2, 64, 199, 349])
torch.Size([3, 64, 349, 349])

RuntimeError Traceback (most recent call last)
in ()
21
22 # forward pass
---> 23 outputs = model(images)
24 loss = criterion(outputs, labels)
25

1 frames
in forward(self, x)
173 print(main_out.shape)
174 print(Attentiom_map.shape)
--> 175 out1 = torch.matmul(main_out, Attentiom_map)
176 max_pool1 = self.pool(torch.add(main_out, out1))
177 # print(max_pool1.shape)

RuntimeError: The size of tensor a (2) must match the size of tensor b (3) at non-singleton dimension 0

main_out = self.pool1(F.relu(self.conv(x)))
Attentiom_map = Attentiom_map.repeat(3, 64, 1, 1)
Attentiom_map = F.interpolate(Attentiom_map, size=(main_out.data[0].shape[2], main_out.data[0].shape[2]), mode='bilinear', align_corners=False)
out1 = torch.matmul(main_out, Attentiom_map)
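
The hard-coded 3 in the repeat call is what pins Attentiom_map to a batch of 3. Here is a hedged sketch of one way around it (assuming, as the snippet suggests, that Attentiom_map is a single 2-D attention map before the repeat): derive the repeat counts and the interpolation size from main_out at runtime, so the smaller final batch still matches.

main_out = self.pool1(F.relu(self.conv(x)))

# repeat the attention map to the *current* batch and channel count
batch_size, channels = main_out.size(0), main_out.size(1)
Attentiom_map = Attentiom_map.repeat(batch_size, channels, 1, 1)

# resize its spatial dims to (W, W), mirroring the original (shape[2], shape[2]),
# so the inner dimensions of the matmul line up
Attentiom_map = F.interpolate(Attentiom_map,
                              size=(main_out.size(3), main_out.size(3)),
                              mode='bilinear', align_corners=False)
out1 = torch.matmul(main_out, Attentiom_map)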