Hi guys, I have a problem with loading a model.
I am training a model with a batch size of 32 in the training DataLoader, then making predictions with a batch size of 1 in the testing DataLoader, and everything works fine. But if I save this model and then load it, I get an error when I run inference.
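Roughly, my setup looks like this (just a sketch; MyModel, train_dataset, and test_dataset are placeholder names):

```python
import torch
from torch.utils.data import DataLoader

# Placeholder names: MyModel, train_dataset, test_dataset.
model = MyModel()
train_loader = DataLoader(train_dataset, batch_size=32, shuffle=True)
test_loader = DataLoader(test_dataset, batch_size=1, shuffle=False)

# ... training over train_loader and predictions over test_loader work fine ...

# Save the trained weights, then restore them for inference later.
torch.save(model.state_dict(), "model.pth")

model = MyModel()
model.load_state_dict(torch.load("model.pth"))
# -> running predictions at this point is where the error shows up
```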
You gave me a hint, thank you so much. I already did it and there is no mistake.
There is another problem: the data is very big, and inference with this code takes too much time.
Do you have any idea how to speed up this code? I think the problem is in looping over and appending every batch. Thanks in advance.
Do you mean the error in the first post is gone with eval()?
I would think using multiple GPUs could help in general, but I personally cannot help with this part, sorry.
Since you are already running it in inference mode, you don’t have to worry about the expensive gradient computations.
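To illustrate what I mean, something along these lines (a sketch; model and test_loader are placeholder names):

```python
import torch

# Running the forward passes under inference_mode (or no_grad) skips
# autograd bookkeeping entirely, which saves time and memory.
with torch.inference_mode():
    for batch in test_loader:
        outputs = model(batch)
```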
Didn’t quite get you here, could you please elaborate?
Yes, with model.eval() everything started to work well.
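For reference, the change was just calling eval() after loading (same placeholder names as in my first post):

```python
import torch

model = MyModel()
model.load_state_dict(torch.load("model.pth"))
model.eval()  # switch Dropout/BatchNorm layers to evaluation behavior before inference
```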
That is an idea, I will check.
Well, as I understand this code, it takes every batch, makes a prediction (in my case the inference output is a tensor of 4 values), then takes the max of these 4 values and appends the result to a list. So I was thinking, is there any way to optimize this loop?
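For example, would something like this be the right direction? (A sketch with placeholder names: model, test_dataset; I am assuming a 4-class output per sample.)

```python
import torch
from torch.utils.data import DataLoader

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)
model.eval()

# A larger test batch size usually helps much more than batch size 1.
test_loader = DataLoader(test_dataset, batch_size=256, shuffle=False,
                         num_workers=4, pin_memory=True)

preds = []
with torch.inference_mode():
    for inputs in test_loader:  # assuming the dataset yields only inputs; unpack (inputs, targets) if needed
        inputs = inputs.to(device, non_blocking=True)
        logits = model(inputs)              # shape: (batch, 4)
        preds.append(logits.argmax(dim=1))  # index of the max value per sample, kept on the GPU

# One concatenation at the end instead of appending single values in Python.
all_preds = torch.cat(preds).cpu()
```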