How to update item_embeddings once per epoch and user_embeddings once per batch? RuntimeError: Trying to backward through the graph a second time

I am working on a recommendation algorithm that needs user_embeddings and item_embeddings to calculate a score. In the algorithm, the user encoder (UserModel) produces user_embeddings and the item encoder (ItemModel) produces item_embeddings.

item_embeddings should be recomputed once per epoch, while user_embeddings should be recomputed for every batch within an epoch. How can I implement this correctly? The relevant part of my code is as follows:

import torch
import torch.nn as nn
import torch.optim as optim

user_model = UserModel()
item_model = ItemModel()
criterion = nn.BCEWithLogitsLoss().to(device)
optimizer = optim.Adam(list(user_model.parameters()) + list(item_model.parameters()), lr=LEARNING_RATE)

for epoch in range(num_epochs):
    user_model.train()
    item_model.train()
    
    item_embeddings = item_model(input)  # `input` holds the item features
    # computed once per epoch; the size of item_embeddings is [ITEMS_NUM, item_embed_dim]
    
    for idx, batch in enumerate(train_dataloader):
        data = batch["data"].to(device)
        label = batch["label"].to(device)
        optimizer.zero_grad()
        user_embeddings = user_model(data, item_embeddings)
        # recomputed every batch; the size of user_embeddings is [batch_size, item_embed_dim]

        score = torch.matmul(user_embeddings, item_embeddings.T)
        loss = criterion(score, label)
        loss.backward()
        optimizer.step()

When I run this, the second batch raises:

RuntimeError: Trying to backward through the graph a second time, but the saved intermediate results have already been freed. Specify retain_graph=True when calling .backward() or autograd.grad() the first time.

I suspect this happens because item_embeddings is computed only once per epoch, so the first loss.backward() frees the graph through item_model, and the backward pass for the next batch then tries to traverse that freed part of the graph again.
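For reference, here is a minimal, self-contained sketch that reproduces the error. The UserModel/ItemModel internals below are hypothetical stand-ins (simple linear encoders and random data); my real models are more complex, but the training-loop structure is the same:

import torch
import torch.nn as nn
import torch.optim as optim

ITEMS_NUM, ITEM_FEAT_DIM, ITEM_EMBED_DIM, BATCH_SIZE = 10, 16, 8, 4

class ItemModel(nn.Module):
    # toy stand-in: encodes item features into item embeddings
    def __init__(self):
        super().__init__()
        self.encoder = nn.Linear(ITEM_FEAT_DIM, ITEM_EMBED_DIM)

    def forward(self, x):
        return self.encoder(x)

class UserModel(nn.Module):
    # toy stand-in: builds user embeddings from interaction data and item embeddings
    def __init__(self):
        super().__init__()
        self.encoder = nn.Linear(ITEM_EMBED_DIM, ITEM_EMBED_DIM)

    def forward(self, data, item_embeddings):
        user_repr = torch.matmul(data, item_embeddings)  # [BATCH_SIZE, ITEM_EMBED_DIM]
        return self.encoder(user_repr)

user_model, item_model = UserModel(), ItemModel()
criterion = nn.BCEWithLogitsLoss()
optimizer = optim.Adam(list(user_model.parameters()) + list(item_model.parameters()), lr=1e-3)

item_features = torch.randn(ITEMS_NUM, ITEM_FEAT_DIM)

for epoch in range(2):
    item_embeddings = item_model(item_features)  # computed once per epoch
    for step in range(3):  # stand-in for train_dataloader
        data = torch.rand(BATCH_SIZE, ITEMS_NUM)
        label = torch.randint(0, 2, (BATCH_SIZE, ITEMS_NUM)).float()
        optimizer.zero_grad()
        user_embeddings = user_model(data, item_embeddings)  # recomputed every batch
        score = torch.matmul(user_embeddings, item_embeddings.T)
        loss = criterion(score, label)
        loss.backward()  # raises the RuntimeError on the second batch
        optimizer.step()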
Any ideas on how to fix this? Thanks in advance.