Hi all,
When I run .forward<>() on an AnyModule in LibTorch, if the call ends up using too much GPU memory it throws a c10::CUDAOutOfMemoryError followed by a c10::Error, as expected.
However, if I wrap that call in a try/catch block, the exception doesn't get caught. The program just crashes and prints the exceptions as arising in their respective LibTorch source files.
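To make the setup concrete, here is a minimal sketch of the pattern that crashes for me. The module construction and input shapes are just placeholders (a hypothetical oversized linear layer to force the OOM); the relevant part is the try/catch around the forward call, catching c10::Error (which derives from std::exception) by const reference:

```cpp
#include <iostream>
#include <torch/torch.h>

int main() {
    // Placeholder module; in my real code this is an AnyModule wrapping
    // a larger network, but any module reproduces the pattern.
    torch::nn::AnyModule model(torch::nn::Linear(1 << 14, 1 << 14));
    model.ptr()->to(torch::kCUDA);

    auto input = torch::randn({1 << 14, 1 << 14}, torch::kCUDA);

    try {
        // This is the call that raises c10::CUDAOutOfMemoryError
        // when the allocation exceeds available GPU memory.
        auto out = model.forward<torch::Tensor>(input);
    } catch (const c10::Error& e) {
        // Expected to land here, but the handler is never reached;
        // the program terminates instead.
        std::cerr << "Caught c10::Error: " << e.what() << '\n';
    } catch (const std::exception& e) {
        // Fallback handler; also never reached.
        std::cerr << "Caught std::exception: " << e.what() << '\n';
    }
    return 0;
}
```

I would expect one of the two handlers to run, since c10::Error ultimately inherits from std::exception, yet neither does.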
How can I catch this exception and prevent the program from crashing?