Loss for Modified LSTM Cell

I want to implement my own version of an LSTM, but I have a question.
I am using this code (a basic LSTM cell) as a starting point for modifying the LSTM cell: https://github.com/jihunchoi/recurrent-batch-normalization-pytorch/blob/master/bnlstm.py
My question is: if I use my own version of the LSTM and LSTM cell, will loss.backward() still update the weights? The same goes for opt.step().

Or do I have to write my own code for the loss to update the weights and for the optimizer functions?

At first glance, the code you link assembles the LSTM from typical NN functions like linear layers. Autograd will provide the gradients automatically, but it will be slower than a custom-made backward: in the PyTorch C++ extension tutorial, moving from autograd components to a custom backward for the LLTM model gives roughly a 20% performance boost (the tutorial advertises a 30% speedup for going from Python + automatic gradients to C++ + custom backward; in my experience roughly 10 percentage points come from moving to C++ and roughly 20 percentage points from the custom backward).
You wouldn't need to change the weight update or the optimizer steps.
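
To make this concrete, here is a minimal sketch (not the code you linked, just the same idea): an LSTM cell assembled from nn.Linear, with the usual training step left exactly as it would be for a built-in module.

```python
import torch
import torch.nn as nn

# Minimal sketch: an LSTM cell built from nn.Linear, so autograd
# can differentiate through it automatically.
class MyLSTMCell(nn.Module):
    def __init__(self, input_size, hidden_size):
        super().__init__()
        self.hidden_size = hidden_size
        # one linear layer per input produces all four gate pre-activations at once
        self.ih = nn.Linear(input_size, 4 * hidden_size)
        self.hh = nn.Linear(hidden_size, 4 * hidden_size)

    def forward(self, x, state):
        h, c = state
        gates = self.ih(x) + self.hh(h)
        i, f, g, o = gates.chunk(4, dim=1)
        i, f, o = torch.sigmoid(i), torch.sigmoid(f), torch.sigmoid(o)
        g = torch.tanh(g)
        c_new = f * c + i * g
        h_new = o * torch.tanh(c_new)
        return h_new, c_new

# The usual training step works unchanged: backward() fills .grad for the
# cell's parameters, and the optimizer updates them in step().
cell = MyLSTMCell(10, 20)
opt = torch.optim.SGD(cell.parameters(), lr=0.1)
x = torch.randn(3, 10)
h, c = torch.zeros(3, 20), torch.zeros(3, 20)
h, c = cell(x, (h, c))
loss = h.pow(2).mean()
opt.zero_grad()
loss.backward()
opt.step()
```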

Best regards

Thomas

Thank you for the answer.
I need to add another weight to the LSTM and change its gate functions, so this is the best code I could find to modify.
It may be slower, but at least it will work the way I want.
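
Roughly, the kind of change I have in mind looks like this (the name `w_extra` and the modified input gate are just placeholders for my actual change):

```python
import torch
import torch.nn as nn

class ModifiedLSTMCell(nn.Module):
    def __init__(self, input_size, hidden_size):
        super().__init__()
        self.ih = nn.Linear(input_size, 4 * hidden_size)
        self.hh = nn.Linear(hidden_size, 4 * hidden_size)
        # extra learnable weight; registered as nn.Parameter so it receives
        # gradients from loss.backward() and is updated by opt.step()
        # like every other parameter
        self.w_extra = nn.Parameter(torch.ones(hidden_size))

    def forward(self, x, state):
        h, c = state
        gates = self.ih(x) + self.hh(h)
        i, f, g, o = gates.chunk(4, dim=1)
        # modified input gate: the previous cell state, scaled by w_extra,
        # is mixed into the gate pre-activation (placeholder for my change)
        i = torch.sigmoid(i + self.w_extra * c)
        f, o = torch.sigmoid(f), torch.sigmoid(o)
        g = torch.tanh(g)
        c_new = f * c + i * g
        h_new = o * torch.tanh(c_new)
        return h_new, c_new
```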