CUDA error not terminating the program (and consequently not freeing memory)

I am running PyTorch 0.4.1 on a dual TITAN X setup. I am doing a grid search to see how deep and wide I can make my model before it hits this error. The problem is that the grid search is meant to run many combinations, so I left my code running for hours; when I came back, it had crashed on the 5th combination of parameters, but instead of moving on it froze there and never finished. When I press Ctrl+C, the failed run ends, the memory is freed, and the loop goes on to the next combination, but that means I have to kill the process manually every time. Is there a way to make the run end automatically whenever it hits a CUDA error? The sketch below shows roughly the behaviour I am after.
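To make the question concrete, here is a minimal sketch of what I would like to happen (`param_grid` and `train_model` are placeholders for my actual search space and training loop, not my real code): when one combination fails with a CUDA error, it should be abandoned, its GPU memory released, and the loop should continue without me pressing Ctrl+C.

```python
import itertools

import torch

# Placeholder search space -- my real grid covers model depth and width.
param_grid = itertools.product([2, 4, 8], [256, 512, 1024])


def train_model(depth, width):
    # Placeholder for my real code, which builds a model of the given
    # depth/width and runs the full training loop on the GPUs.
    pass


for depth, width in param_grid:
    try:
        train_model(depth, width)
    except RuntimeError as e:
        # In PyTorch 0.4.1 a CUDA out-of-memory error is raised as a RuntimeError.
        print('Combination depth={}, width={} failed: {}'.format(depth, width, e))
        # Try to release cached GPU memory before starting the next combination.
        torch.cuda.empty_cache()
```

In my current code the process never reaches the `except` branch on its own; it just hangs after the CUDA error until I interrupt it by hand.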