AttributeError: 'Adam' object has no attribute 'step'

Can anyone help me solve this error: AttributeError: 'Adam' object has no attribute 'step'? It occurs when training a BERT model for NER. I have not been able to solve it, perhaps because of a library change, since it stems from the parameter-update call optimizer.step().

AttributeError Traceback (most recent call last)
in
30 torch.nn.utils.clip_grad_norm_(parameters=model.parameters(), max_norm=max_grad_norm)
31 # update parameters
---> 32 optimizer.step()
33 model.zero_grad()
34 # print train loss per epoch

1 frames
/usr/local/lib/python3.8/dist-packages/keras/optimizers/optimizer_v2/optimizer_v2.py in __getattribute__(self, name)
    864     """Overridden to support hyperparameter access."""
    865     try:
--> 866       return super(OptimizerV2, self).__getattribute__(name)
    867     except AttributeError as e:
    868       # Needed to avoid infinite recursion with __setattr__.

AttributeError: 'Adam' object has no attribute 'step'

The codes are below:

from transformers import AdamW
from transformers import get_linear_schedule_with_warmup
from transformers import BertTokenizer, BertForSequenceClassification
from transformers import DataCollatorForTokenClassification
from transformers import AutoModelForTokenClassification

epochs = 5
max_grad_norm = 1.0

for _ in trange(epochs, desc="Epoch"):
    # TRAIN loop
    model.train()
    tr_loss = 0
    nb_tr_examples, nb_tr_steps = 0, 0
    for step, batch in enumerate(train_dataloader):
        # add batch to gpu
        batch = tuple(t.to(device) for t in batch)
        b_input_ids, b_input_mask, b_labels = batch
        # forward pass
        token_classifier_output = model(b_input_ids,
                                        token_type_ids=None,
                                        attention_mask=b_input_mask,
                                        labels=b_labels)
        token_classifier_output.loss.backward()
        # track train loss
        tr_loss += token_classifier_output.loss.item()
        nb_tr_examples += b_input_ids.size(0)
        nb_tr_steps += 1
        # gradient clipping
        torch.nn.utils.clip_grad_norm_(parameters=model.parameters(), max_norm=max_grad_norm)
        # update parameters
        optimizer.step()
        model.zero_grad()
    # print train loss per epoch
    print("Train loss: {}".format(tr_loss/nb_tr_steps))
    # VALIDATION on validation set
    model.eval()
    eval_loss, eval_accuracy = 0, 0
    nb_eval_steps, nb_eval_examples = 0, 0
    predictions, true_labels = [], []
    for batch in valid_dataloader:
        batch = tuple(t.to(device) for t in batch)
        b_input_ids, b_input_mask, b_labels = batch

        with torch.no_grad():
            tmp_eval_loss = model(b_input_ids,
                                  token_type_ids=None,
                                  attention_mask=b_input_mask,
                                  labels=b_labels)
            logits = model(b_input_ids, token_type_ids=None,
                           attention_mask=b_input_mask)
        logits = logits.detach().cpu().numpy()
        label_ids = b_labels.to('cpu').numpy()
        predictions.extend([list(p) for p in np.argmax(logits, axis=2)])
        true_labels.append(label_ids)

        tmp_eval_accuracy = flat_accuracy(logits, label_ids)

        eval_loss += tmp_eval_loss.mean().item()
        eval_accuracy += tmp_eval_accuracy

        nb_eval_examples += b_input_ids.size(0)
        nb_eval_steps += 1
    eval_loss = eval_loss/nb_eval_steps
    print("Validation loss: {}".format(eval_loss))
    print("Validation Accuracy: {}".format(eval_accuracy/nb_eval_steps))
    pred_tags = [tags_vals[p_i] for p in predictions for p_i in p]
    valid_tags = [tags_vals[l_ii] for l in true_labels for l_i in l for l_ii in l_i]
    print("F1-Score: {}".format(f1_score(pred_tags, valid_tags)))

Where have you defined the optimizer?

I.e.:

optimizer=Adam(model.parameters(), lr=lr, ...)

Thank you for your response. The optimizer is defined here:

FULL_FINETUNING = True
if FULL_FINETUNING:
    param_optimizer = list(model.named_parameters())
    no_decay = ['bias', 'gamma', 'beta']
    optimizer_grouped_parameters = [
        {'params': [p for n, p in param_optimizer if not any(nd in n for nd in no_decay)],
         'weight_decay_rate': 0.01},
        {'params': [p for n, p in param_optimizer if any(nd in n for nd in no_decay)],
         'weight_decay_rate': 0.0}
    ]
else:
    param_optimizer = list(model.classifier.named_parameters())
    optimizer_grouped_parameters = [{"params": [p for n, p in param_optimizer]}]
optimizer = Adam(optimizer_grouped_parameters, lr=3e-5)

It seems you are trying to use a Keras optimizer in PyTorch, which won’t work:

/usr/local/lib/python3.8/dist-packages/keras/optimizers/optimizer_v2/optimizer_v2.py in __getattribute__(self, name)
    864     """Overridden to support hyperparameter access."""
    865     try:
--> 866       return super(OptimizerV2, self).__getattribute__(name)
    867     except AttributeError as e:
    868       # Needed to avoid infinite recursion with __setattr__.

AttributeError: 'Adam' object has no attribute 'step'

(see the file path)

Use torch.optim.Adam and it should work.
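
As a minimal sketch of the fix, reusing the optimizer_grouped_parameters you defined above (note that torch.optim reads the key 'weight_decay' from parameter groups, not 'weight_decay_rate'):

import torch

# PyTorch's Adam implements step(), so the training loop above will work.
# If you want the decay values to take effect, rename 'weight_decay_rate'
# to 'weight_decay' in optimizer_grouped_parameters.
optimizer = torch.optim.Adam(optimizer_grouped_parameters, lr=3e-5)

# Alternatively, AdamW is already imported from transformers at the top of your
# script; it is also a torch.optim.Optimizer subclass and provides step():
# optimizer = AdamW(optimizer_grouped_parameters, lr=3e-5)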

Thank you, it works.