Text classification unexpectedly slow

How can the number of input elements not be divisible by the batch size?

I mean that the computational graph is fixed. If you use the same input size at every step and your model has no control flow (for, while, if), the graph is likely identical every time, so `cudnn.benchmark` will help. Just try enabling it: if it runs faster, leave it on; otherwise turn it off.
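Enabling the autotuner is a one-line change. A minimal sketch (the model and shapes below are just placeholders, not anyone's actual setup):

```python
import torch
import torch.nn as nn

# cuDNN's autotuner benchmarks several convolution algorithms on the
# first forward pass for a given input shape and caches the fastest.
# This pays off only when every batch has the same shape and the model
# has no data-dependent control flow, as described above.
torch.backends.cudnn.benchmark = True

model = nn.Conv1d(in_channels=128, out_channels=128, kernel_size=3)
x = torch.randn(32, 128, 100)  # fixed (batch, channels, length) every step

# The first step triggers the algorithm search; later steps reuse the result.
for _ in range(3):
    y = model(x)
```

If input lengths vary between batches (common in text classification without padding to a fixed length), the search is re-run for each new shape and the flag can make things slower instead.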


I tried it. It is faster. Thanks!

Hi, I’m running into the exact same problem and have no idea how to swap the axes. How did you do it?

Hi Xia_Yandi, how did you solve the problem? I set torch.backends.cudnn.enabled = False and torch.backends.cudnn.benchmark = True, but it did not help. Can you share your solution? And how do I reshape the inputs as @kim.seonghyeon suggested?
Thank you!