My program flow is this:
```python
model.train()
# ... train ...
model.eval()
x = Variable(input, volatile=True)
model(x)  # validation
optimizer.step()
```
Are gradients still accumulated in inference mode when the volatile flag is set to True? From what I've understood, volatile=True means that the computations (and therefore the gradients) and the hidden states computed from that input will be discarded. Is that right?
EDIT: forget I asked this. I recall now that gradients are accumulated only when I call backward(), and I reset them at the start of each epoch. You can delete this.
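For anyone who finds this later, here is a minimal sketch of the accumulation behavior described above. Note that volatile=True was removed in PyTorch 0.4; torch.no_grad() is its modern replacement, and the example uses that instead:

```python
import torch

# Gradients accumulate into .grad only when backward() is called;
# a plain forward pass (even in eval mode) never touches .grad.
w = torch.ones(3, requires_grad=True)

loss = (w * 2).sum()
loss.backward()              # .grad is now 2.0 per element
first = w.grad.clone()

loss = (w * 2).sum()
loss.backward()              # gradients accumulate: .grad is now 4.0
accumulated = w.grad.clone()

w.grad.zero_()               # reset, as one would at the start of an epoch

# A forward pass under no_grad builds no autograd graph at all,
# so nothing can be accumulated from it.
with torch.no_grad():
    y = w * 2
    assert not y.requires_grad
```

In a training loop this reset is usually done with optimizer.zero_grad() before each backward() call rather than once per epoch.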