Difference between SparseAdam and Adam behavior

Is there a major difference between the Adam and SparseAdam implementations?

I’m using SparseAdam to optimize the embedding layer in my model, and I noticed that the model requires fewer epochs to converge if I instead use the Adam optimizer on the embedding layer with sparse gradients disabled.

Yes, there is. The documentation for SparseAdam says directly:

In this variant, only moments that show up in the gradient get updated, and only those portions of the gradient get applied to the parameters.
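In other words, with dense gradients Adam applies its moving-average moment updates (and weight decay, if enabled) to every row of the embedding matrix at every step, while SparseAdam only touches the rows that actually appear in the current batch. That difference in how many parameters get updated per step is a plausible reason for the difference in convergence speed you observe. Below is a minimal sketch of the usual two-optimizer setup; the sizes and module names (vocab_size, embed_dim, head) are just illustrative:

import torch
import torch.nn as nn

vocab_size, embed_dim = 10_000, 128

# Embedding with sparse=True produces sparse gradients, which SparseAdam expects;
# plain Adam would raise an error on them unless sparse gradients are disabled.
embedding = nn.Embedding(vocab_size, embed_dim, sparse=True)
head = nn.Linear(embed_dim, 1)

sparse_opt = torch.optim.SparseAdam(embedding.parameters(), lr=1e-3)
dense_opt = torch.optim.Adam(head.parameters(), lr=1e-3)

tokens = torch.randint(0, vocab_size, (32,))  # only these rows receive gradients
loss = head(embedding(tokens)).pow(2).mean()

sparse_opt.zero_grad()
dense_opt.zero_grad()
loss.backward()
sparse_opt.step()  # updates moments and weights only for the rows in `tokens`
dense_opt.step()   # regular dense Adam update for the rest of the model

So the two optimizers are not interchangeable drop-ins: SparseAdam trades the exact Adam update for one that skips all embedding rows absent from the batch, which usually means faster steps but can mean more epochs to converge.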