Change tanh activation in LSTM to ReLU

The default non-linear activation function in the LSTM class is tanh. I wish to use ReLU for my project. Browsing through the documentation and other resources, I'm unable to find a simple way to do this. The only option I could find was to define my own custom LSTMCell, but here the author says that custom LSTMCells don't support GPU acceleration (or has that changed since the article was published?). I need CUDA to speed up my training. Any help would be appreciated.

Writing a custom LSTM cell means we can't use the fused LSTM kernels provided by cuDNN, but the cell's tensor operations still run on the GPU, so we keep standard CUDA acceleration; we only lose the extra speed of the fused kernel.
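A minimal sketch of such a cell, assuming the standard LSTM gate equations with `torch.relu` substituted for both `torch.tanh` non-linearities (the class name `ReLULSTMCell` and all sizes are illustrative, not part of any library API):

```python
import torch
import torch.nn as nn

class ReLULSTMCell(nn.Module):
    """LSTM cell with ReLU in place of the two tanh non-linearities.

    The gate math follows the standard LSTM equations; only the cell-input
    and cell-output activations are swapped from tanh to ReLU. All tensor
    ops run on whatever device the module is moved to, so .to('cuda')
    still gives GPU acceleration, just without cuDNN's fused kernel.
    """

    def __init__(self, input_size, hidden_size):
        super().__init__()
        self.hidden_size = hidden_size
        # One fused linear layer computes all four gates (i, f, g, o).
        self.linear = nn.Linear(input_size + hidden_size, 4 * hidden_size)

    def forward(self, x, state):
        h, c = state
        gates = self.linear(torch.cat([x, h], dim=1))
        i, f, g, o = gates.chunk(4, dim=1)
        i, f, o = torch.sigmoid(i), torch.sigmoid(f), torch.sigmoid(o)
        g = torch.relu(g)            # tanh -> ReLU on the cell input
        c = f * c + i * g
        h = o * torch.relu(c)        # tanh -> ReLU on the cell output
        return h, c


# Unroll the cell over a sequence, on GPU if one is available.
device = "cuda" if torch.cuda.is_available() else "cpu"
cell = ReLULSTMCell(input_size=8, hidden_size=16).to(device)
x = torch.randn(5, 3, 8, device=device)   # (seq_len, batch, input_size)
h = torch.zeros(3, 16, device=device)
c = torch.zeros(3, 16, device=device)
for t in range(x.size(0)):
    h, c = cell(x[t], (h, c))
print(h.shape)  # torch.Size([3, 16])
```

Note that ReLU is unbounded, unlike tanh, so the cell state can grow without limit; careful weight initialization or gradient clipping may be needed to keep training stable.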