Is this how to use the learning rate scheduler?

I want to use AdamW in my EfficientDet:

    # optimizer = optim.AdamW(model.parameters(), lr=args.lr)
    optimizer = optim.AdamW(model.parameters(), lr=args.lr, betas=(0.9, 0.999),
                            eps=1e-08, weight_decay=0.02, amsgrad=False)
    scheduler = optim.lr_scheduler.ReduceLROnPlateau(
        optimizer, patience=3, verbose=True)
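
For reference, this is roughly the training loop I have in mind (a minimal sketch; model, train_loader, val_loader, and args are placeholders for my actual EfficientDet code, and I'm assuming the model returns (cls_loss, reg_loss) in training mode, like my logs show):

    import torch

    for epoch in range(args.num_epochs):
        model.train()
        for images, targets in train_loader:
            optimizer.zero_grad()
            cls_loss, reg_loss = model(images, targets)
            loss = cls_loss + reg_loss
            loss.backward()
            optimizer.step()

        # Validate once per epoch to get the metric the scheduler watches.
        model.eval()
        with torch.no_grad():
            val_loss = 0.0
            for images, targets in val_loader:
                cls_loss, reg_loss = model(images, targets)
                val_loss += (cls_loss + reg_loss).item()
            val_loss /= len(val_loader)

        # ReduceLROnPlateau only reduces the LR when step() is given the
        # metric to monitor; calling scheduler.step() with no metric won't work.
        scheduler.step(val_loss)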

I’m not sure this is the right way to use this optimizer with my model.
I’d also like to know: what do the betas do in this optimizer?

My loss for EfficientDet-D0 is going down very slowly:

=============================================================================
6 epoch: 	 start training....
4500 iteration: training ...
    epoch          : 6
    iteration      : 4500
    cls_loss       : 1.8731008768081665
    reg_loss       : 1.0041449069976807
    mean_loss      : 2.7572750695168025
4800 iteration: training ...
    epoch          : 6
    iteration      : 4800
    cls_loss       : 1.754671573638916
    reg_loss       : 0.9150880575180054
    mean_loss      : 2.7382697457438905
5100 iteration: training ...
    epoch          : 6
    iteration      : 5100
    cls_loss       : 1.5807626247406006
    reg_loss       : 0.8592696189880371
    mean_loss      : 2.7208735038097895
    time           : 332.5537312030792
    loss           : 2.720083791859683
epoch_7
=============================================================================
7 epoch: 	 start training....
5400 iteration: training ...
    epoch          : 7
    iteration      : 5400
    cls_loss       : 1.5356632471084595
    reg_loss       : 0.7544331550598145
    mean_loss      : 2.6795950939358284
5700 iteration: training ...
    epoch          : 7
    iteration      : 5700
    cls_loss       : 2.316020965576172
    reg_loss       : 1.078192949295044
    mean_loss      : 2.7014631390371915
    time           : 335.88806772232056
    loss           : 2.698524436832946
epoch_8
=============================================================================
8 epoch: 	 start training....
6000 iteration: training ...
    epoch          : 8
    iteration      : 6000
    cls_loss       : 1.8002285957336426
    reg_loss       : 0.8492593765258789
    mean_loss      : 2.634405281572115
6300 iteration: training ...
    epoch          : 8
    iteration      : 6300
    cls_loss       : 2.22102427482605
    reg_loss       : 0.9751409292221069
    mean_loss      : 2.6694228873293624