Hello. I was using leaky_relu and found that it causes a GPU memory leak. Training runs for only a few iterations before hitting an out-of-memory error. When I switched to relu, training ran normally, using around 7 GB. Is this a bug in leaky_relu?
PS: it seems that the memory leak happens only on Windows. The same code with leaky_relu works fine on Linux.
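For reference, here is a minimal sketch of the kind of loop that triggers it for me (assuming PyTorch; the tensor sizes and the matmul "model" are placeholders, not my actual training code):

```python
import torch
import torch.nn.functional as F

# Falls back to CPU so the snippet is runnable anywhere;
# the leak only shows up for me on a Windows CUDA device.
device = "cuda" if torch.cuda.is_available() else "cpu"

x = torch.randn(64, 256, device=device, requires_grad=True)
w = torch.randn(256, 256, device=device, requires_grad=True)

for step in range(20):
    h = x @ w
    out = F.leaky_relu(h, negative_slope=0.01)  # swap for F.relu to compare
    loss = out.sum()
    loss.backward()
    x.grad = None
    w.grad = None
    if device == "cuda":
        # Allocated memory should stay flat across iterations;
        # steady growth here is the symptom I'm seeing.
        print(step, torch.cuda.memory_allocated())
```

With relu in place of leaky_relu, the reported allocated memory stays constant across iterations on my machine.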