Oscillations in the loss function given a fixed sample

With classic full-batch gradient descent, where the batch is the whole fixed sample, should the loss function be able to oscillate over epochs at all? Assume the following training loop:

for epoch in range(num_epochs):
    adam.zero_grad()
    loss = loss_fn(model(X), y)  # full-batch loss on the whole fixed sample (model, X, y are placeholders)
    loss.backward()
    adam.step()

I think it should just decrease monotonically given a fixed sample. If the sample changes randomly between epochs, then oscillation is possible. Does Adam have anything to do with it, since Adam isn't plain gradient descent?

Generally speaking, even in a convex optimisation problem the loss can oscillate. For instance, this can happen when you are close to the global optimum but your learning rate is large enough that you overshoot it and end up in a higher-loss region. To avoid that, you can use more sophisticated step-size rules such as backtracking line search or the Wolfe conditions (both have Wikipedia articles), which make sure that you don't overshoot and give you guarantees of monotonic convergence.
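To make that concrete, here is a minimal, self-contained sketch in plain Python (the toy quadratic loss f(x) = x**2 and the names f, grad, lr are just placeholders for this illustration, not anything from the thread). The first loop is full-batch gradient descent with a fixed learning rate that is too large, so every step overshoots the optimum and the loss grows over the epochs; the second loop adds a backtracking (Armijo) line search that halves the step until the loss actually decreases, giving monotone descent.

def f(x):
    # toy convex loss: f(x) = x^2, global optimum at x = 0
    return x ** 2

def grad(x):
    # gradient of f
    return 2.0 * x

# 1) Fixed step size that is too large: every step overshoots the optimum,
#    so the iterate flips sign and the loss actually increases over "epochs".
x, lr = 1.0, 1.1
for epoch in range(5):
    x = x - lr * grad(x)
    print(f"fixed lr    epoch {epoch}: loss = {f(x):.4f}")

# 2) Backtracking line search (Armijo condition): start from a large step and
#    halve it until the loss decreases enough, which guarantees monotone descent.
x = 1.0
for epoch in range(5):
    g, t = grad(x), 1.0
    while f(x - t * g) > f(x) - 0.5 * t * g * g:
        t *= 0.5
    x = x - t * g
    print(f"line search epoch {epoch}: loss = {f(x):.4f}")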


Thank you. Is there a way to degenerate Adam back to plain GD by tuning Adam's parameters?

I am not sure why you would want to do that, but if you set the weight decay and both betas to 0, then you would basically get an inefficient SGD…
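If you do want to try that, here is a minimal PyTorch sketch of both options (the linear model and the learning rate are placeholders for illustration):

import torch

model = torch.nn.Linear(10, 1)  # placeholder model, just for illustration

# The suggestion above: Adam with weight decay and both betas set to 0.
# Note that Adam still rescales each step by its second-moment estimate,
# so the updates are not literally those of vanilla gradient descent.
stripped_adam = torch.optim.Adam(
    model.parameters(), lr=1e-2, betas=(0.0, 0.0), weight_decay=0.0
)

# The direct way to get classic full-batch gradient descent: plain SGD with
# no momentum, fed the whole fixed sample at every step.
plain_gd = torch.optim.SGD(model.parameters(), lr=1e-2, momentum=0.0)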
