I know model.train() is called at the start of every epoch, but can I call model.train() after every batch? My code works when I call it after every batch; I just wanted to know if there are any downsides to doing so.
Something like this:
for every epoch:
    for every batch:
        # some lines of training code
You could call it after each batch, but it isn't necessary unless you have called model.eval() between these calls. model.train() sets the internal training flag, which changes the behavior of some modules (e.g. dropout is disabled during evaluation, i.e. after calling model.eval()).
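A minimal sketch of that flag in action: a model containing dropout gives deterministic outputs after model.eval() and stochastic outputs after model.train() (the layer sizes and dropout probability here are arbitrary, just for illustration):

```python
import torch
import torch.nn as nn

# A tiny model with a dropout layer, whose behavior depends on the
# training flag toggled by model.train() / model.eval().
model = nn.Sequential(nn.Linear(4, 4), nn.Dropout(p=0.5))
x = torch.ones(1, 4)

model.eval()                     # disables dropout: forward pass is deterministic
out1 = model(x)
out2 = model(x)
print(torch.equal(out1, out2))   # identical outputs in eval mode

model.train()                    # re-enables dropout: units are randomly zeroed
print(model.training)            # the internal flag that train()/eval() toggles
```

So calling model.train() repeatedly is harmless, since it just (re)sets this flag to True; it only matters if model.eval() flipped it to False in between.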
Thanks. Yes, I did call model.eval() after each batch, which is why I asked. Thanks for the solution.