The largest of my images is 1226 * 600 * 224, where 224 is the number of spectral bands. To bring the images to 256 * 256 * 224, I used the patches command to cut them into 141 patches of 256 * 256 * 224. For the train/test split I used the sklearn library, which gave me 98 training patches and 43 test patches.
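For reference, my patch extraction is roughly equivalent to this sketch (the non-overlapping tiling and the `extract_patches` helper are simplifications on my part, not the exact command I ran; the data here is a made-up placeholder cube):

```python
import numpy as np
from sklearn.model_selection import train_test_split

def extract_patches(cube, patch=256):
    """Split a (H, W, bands) cube into non-overlapping (patch, patch, bands) tiles."""
    h, w, _ = cube.shape
    tiles = []
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            tiles.append(cube[i:i + patch, j:j + patch, :])
    return np.stack(tiles)

# Placeholder standing in for one 1226 x 600 x 224 hyperspectral cube
cube = np.zeros((1226, 600, 224), dtype=np.float32)
patches = extract_patches(cube)
print(patches.shape)  # (8, 256, 256, 224): 4 tiles along height x 2 along width

# Splitting patches (and per-patch labels) the way I did with sklearn;
# test_size ~ 0.3 is what produced my 98/43 split across all 141 patches
labels = np.zeros(len(patches))
X_train, X_test, y_train, y_test = train_test_split(
    patches, labels, test_size=0.3, random_state=42
)
```

Across all my source images this yields the 141 patches mentioned above; the placeholder cube here contributes only 8 of them.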
However, during the subsequent stages of my algorithm, the accuracy varies on every iteration and comes out as a 5-digit number rather than a fraction. Alternatively, if I include the following line in my code, the accuracy stays the same across all iterations, but it is expressed as a percentage:
running_corrects += torch.sum(preds == labels.data).float() / labels.nelement()
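To illustrate the two behaviours I am describing, here is a minimal, self-contained sketch (the batch tensors are made-up stand-ins for my DataLoader output) of accumulating raw correct counts and normalizing once at the end of the epoch instead of dividing per batch:

```python
import torch

# Made-up (preds, labels) pairs standing in for batches from a DataLoader
batches = [
    (torch.tensor([0, 1, 1, 0]), torch.tensor([0, 1, 0, 0])),  # 3 correct
    (torch.tensor([1, 1]), torch.tensor([1, 0])),              # 1 correct
]

running_corrects = 0
total = 0
for preds, labels in batches:
    # Accumulate raw correct counts (this is what produces a large integer
    # like my 5-digit number if it is never divided by the sample count)
    running_corrects += torch.sum(preds == labels).item()
    total += labels.nelement()

# Normalize once per epoch to get a fraction in [0, 1]
epoch_acc = running_corrects / total
print(epoch_acc)  # 4 correct out of 6 -> 0.666...
```

I am unsure whether this per-epoch normalization is what my line above should be doing instead of the per-batch division.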
I suspect that the issue may lie with the data, but I am unable to pinpoint a solution. Could you please suggest an alternative approach to patching the data? Also, I would appreciate it if you could confirm whether my current patching method is correct.