Question regarding the Adagrad and RMSprop optimizers

According to the implementations in torch/optim/adagrad.py and torch/optim/rmsprop.py, the two optimizers below should behave exactly the same:

torch.optim.Adagrad(model.parameters(), lr=0.01)
torch.optim.RMSprop(model.parameters(), lr=0.01, eps=1e-10, alpha=0.0, momentum=0.0)

However, the two behave completely differently. Why is that?

Well, actually they are different: Adagrad accumulates the sum of all past squared gradients, so its effective step size keeps shrinking, while RMSprop keeps an exponential moving average of squared gradients. With alpha=0.0 that average retains only the most recent squared gradient, so no history accumulates at all. Closed.
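A minimal sketch of the difference (not the actual torch source; the scalar update functions below are illustrative only). Adagrad's accumulator grows without bound, whereas RMSprop with alpha=0.0 discards its history every step:

```python
def adagrad_step(param, grad, state_sum, lr=0.01, eps=1e-10):
    # Adagrad: accumulate ALL past squared gradients.
    state_sum += grad ** 2
    param -= lr * grad / (state_sum ** 0.5 + eps)
    return param, state_sum

def rmsprop_step(param, grad, square_avg, lr=0.01, eps=1e-10, alpha=0.0):
    # RMSprop: exponential moving average of squared gradients.
    # With alpha=0.0, square_avg == grad**2 every step (no memory).
    square_avg = alpha * square_avg + (1 - alpha) * grad ** 2
    param -= lr * grad / (square_avg ** 0.5 + eps)
    return param, square_avg

p_a = p_r = 1.0   # parameter under each optimizer
s_a = s_r = 0.0   # optimizer state
for g in [0.5, 0.5, 0.5]:  # three identical gradients
    p_a, s_a = adagrad_step(p_a, g, s_a)
    p_r, s_r = rmsprop_step(p_r, g, s_r)

# Adagrad's state grows (0.25 -> 0.5 -> 0.75), so its steps shrink;
# RMSprop's state resets to 0.25 each step, so its step size is constant.
print(p_a, s_a)
print(p_r, s_r)
```

So the two only coincide on the very first step; after that, Adagrad's denominator keeps growing while RMSprop's does not.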