I'm working on fringe pattern denoising, and my dataset contains fringe patterns like this:
but when I apply the ToTensor() transform
and then print the image, it looks so bad that my model isn't learning anything:
Any suggestions on what I can do?
I don't think it's possible to train my model without converting the images into tensors.
Could you share one of the images in your dataset, and also the method you use to visualize the transformed images?
It would also be great if you could combine your questions into a single topic, as they are highly correlated.
The top image is from the dataset, and the bottom image is the result after the transform mentioned in the question above.
I meant the original image file, so I can test a few things.
Also, I think the way you visualize your data is incorrect; that is why I am asking how you visualize your transformed data.
from PIL import Image

img = tensor.cpu().detach().numpy()  # tensor from ToTensor(), shape (1, 512, 512), float values in [0, 1]
img = img.reshape((512, 512))
# Problematic: the array is float and single-channel, but mode 'YCbCr'
# expects 3-channel uint8 data, so the displayed image comes out garbled.
img, cb, cr = Image.fromarray(img, 'YCbCr').split()
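For reference, here is a minimal sketch of one correct way to visualize such a tensor. It assumes the output of ToTensor() for a 512x512 grayscale image, i.e. a float array in [0, 1] with shape (1, 512, 512); a NumPy array stands in for the tensor (with a real torch.Tensor you would first call tensor.cpu().detach().numpy()). The key steps are rescaling to [0, 255], casting to uint8, and using the single-channel mode 'L' instead of 'YCbCr':

```python
import numpy as np
from PIL import Image

# Stand-in for a ToTensor() output: float values in [0, 1], shape (1, 512, 512).
arr = np.random.rand(1, 512, 512).astype(np.float32)

# Drop the channel dimension and rescale [0, 1] -> [0, 255] as uint8.
arr = (arr.squeeze(0) * 255.0).clip(0, 255).astype(np.uint8)

# Mode 'L' is single-channel 8-bit grayscale, which matches this data;
# 'YCbCr' would misinterpret it as packed 3-channel color.
img = Image.fromarray(arr, mode='L')
# img.show() or img.save('check.png') to inspect it.
```

Alternatively, torchvision's transforms.ToPILImage() performs this tensor-to-PIL conversion (including the scaling) in one call.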
For anyone who is looking for the answer, follow the link below!
Thank you, @Nikronic!