Problem getting correct accuracy on patched data

My largest image is 1226 * 600 * 224, where 224 is the number of spectral bands. To bring the images to 256 * 256 * 224, I split them into 141 patches of 256 * 256 * 224 using the patches command. I then split the patches into train and test sets with the Sklearn library, resulting in 98 training patches and 43 test patches.
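For reference, one common way to cut a hyperspectral cube into non-overlapping spatial patches and split them with scikit-learn looks roughly like this. This is only a sketch under assumptions: the array layout (height, width, bands) and the band count of 16 (reduced from 224 purely to keep the example small) are illustrative, not taken from the original code.

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Hypothetical cube: height x width x bands.
# The post uses 1226 x 600 x 224; 16 bands here just to keep memory small.
cube = np.zeros((1226, 600, 16), dtype=np.float32)

patch = 256
# Number of whole non-overlapping patches along each spatial axis
n_rows = cube.shape[0] // patch   # 1226 // 256 = 4
n_cols = cube.shape[1] // patch   # 600  // 256 = 2
patches = np.stack([
    cube[r * patch:(r + 1) * patch, c * patch:(c + 1) * patch, :]
    for r in range(n_rows)
    for c in range(n_cols)
])                                # shape: (8, 256, 256, 16)

# Split the patches into train/test sets (~70/30, similar to 98/43)
train, test = train_test_split(patches, test_size=0.3, random_state=0)
```

Note that integer division discards the border pixels (here the last 1226 - 4*256 rows and 600 - 2*256 columns), which is something to be aware of when counting patches.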
However, during the subsequent stages of my algorithm, I get a different accuracy on every iteration, and the reported values are five-digit numbers. Alternatively, if I include the following line in my code, the accuracy stays the same across all iterations, but it is expressed as a percentage:

running_corrects += torch.sum(preds == labels.data).float() / labels.nelement()

I suspect that the issue may lie with the data, but I am unable to pinpoint a solution. Could you please suggest an alternative approach to patching the data? Also, I would appreciate it if you could confirm whether my current patching method is correct.
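For comparison, the line above divides by `labels.nelement()` inside the batch loop, whereas the usual pattern is to accumulate raw correct counts and divide once per epoch. A minimal sketch of that pattern, with hypothetical random batches standing in for a real DataLoader and model:

```python
import torch

# Hypothetical (logits, labels) batches; in practice these come from a
# DataLoader and a model's forward pass
batches = [
    (torch.randn(8, 5), torch.randint(0, 5, (8,)))
    for _ in range(4)
]

running_corrects = 0
total = 0
for logits, labels in batches:
    preds = logits.argmax(dim=1)
    # Accumulate raw correct counts; divide once at the end of the epoch
    # rather than dividing by labels.nelement() on every batch
    running_corrects += torch.sum(preds == labels).item()
    total += labels.numel()

epoch_acc = running_corrects / total   # fraction in [0, 1]
```

Mixing the two conventions (per-batch fractions summed, then divided again by the dataset size) is a common way to end up with accuracy values on the wrong scale.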

This isn’t possible to debug without runnable code (e.g., your accuracy calculation). You can check the correctness of your accuracy calculation without a model: simply generate random predictions and verify that the result is reasonable for random guessing.

I did not understand your explanation. Can you help me more?

To verify the correctness of your accuracy calculation, you can try generating random predictions (or even always predict the same class) to check if the reported value makes sense in that case.