Running backward with different batch sizes

I use an instance of torch::nn::Sequential to build a model. During training I pass batches of different sizes to the model in different steps, which is common in on-policy RL algorithms of the actor-critic family. This works fine in PyTorch. Now I am implementing the same algorithm with libtorch. In the first iteration I pass a batch of size 24, and when I pass a batch of size 13 in the second iteration, I get:

terminate called after throwing an instance of 'std::runtime_error'
  what():  Trying to backward through the graph a second time, but the buffers have already been freed. Specify retain_graph=True when calling backward the first time.
Signal: SIGABRT (Aborted)

Process finished with exit code 1

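For reference, the training step has roughly the structure below (a simplified sketch with a placeholder network and made-up sizes rather than my actual actor-critic code; in this reduced form it may not reproduce the abort, but it shows where backward() is called):

    #include <torch/torch.h>

    int main() {
      // Placeholder network and layer sizes; my real model is a larger actor-critic head.
      torch::nn::Sequential model(
          torch::nn::Linear(8, 64),
          torch::nn::ReLU(),
          torch::nn::Linear(64, 1));
      torch::optim::Adam optimizer(model->parameters(), torch::optim::AdamOptions(1e-3));

      // The batch size differs between iterations, as in my on-policy updates.
      for (int64_t batch_size : {24, 13}) {
        auto states = torch::randn({batch_size, 8});  // stand-in for a rollout batch
        auto loss = model->forward(states).mean();    // stand-in for the actual loss

        optimizer.zero_grad();
        loss.backward();   // in my real code this aborts on the second iteration
        optimizer.step();
      }
      return 0;
    }
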
I checked the backward API and it does not have retain_graph (which the PyTorch API has), although it does have a bool keep_graph. However, I do not see how to use that either, since it is the second parameter and C++ does not allow setting the second argument without also providing the first one, the gradient tensor.
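The best workaround I can think of is to pass an empty tensor explicitly just so that keep_graph can be given positionally, something like the sketch below (this assumes the overload is backward(const Tensor& gradient, bool keep_graph, bool create_graph)):

    // Assuming the overload
    //   backward(const Tensor& gradient = {}, bool keep_graph = false, bool create_graph = false):
    // an empty (undefined) Tensor stands in for the implicit gradient of a scalar loss,
    // only so that keep_graph can be passed positionally.
    loss.backward(torch::Tensor(), /*keep_graph=*/true);
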
Besides, I am not sure whether setting keep_graph = true would solve this problem anyway.
I would appreciate any help or comments.