How to check whether model loading is successful in version 1.2

According to the release note here,

using torch::jit::script::Module;

std::shared_ptr<Module> m = torch::jit::load("my_model.py");
m->forward(...);

becomes

using torch::jit::script::Module;

Module m = torch::jit::load("my_model.py");
m.forward(...);

What I used to do was add an error check like the one below:

if (m == nullptr) std::cerr << "failed to load model" << std::endl;

How can I do this in version 1.2? There doesn't seem to be an m.empty() or anything like that, and I couldn't find any Doxygen-like documentation anywhere that has this information either. Please let me know. Thanks.
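For context, in the 1.2 API `torch::jit::load` returns the `Module` by value and signals failure by throwing a `c10::Error` exception rather than returning a null pointer, so the old null check would become a try/catch. A minimal sketch (the model file name here is just a placeholder):

```cpp
#include <torch/script.h>  // libtorch's TorchScript header
#include <iostream>

int main(int argc, char* argv[]) {
  if (argc < 2) {
    std::cerr << "usage: " << argv[0] << " <model-file>" << std::endl;
    return 1;
  }
  torch::jit::script::Module module;
  try {
    // In 1.2, load() returns a Module by value and throws on failure,
    // e.g. when the file is missing or is not a valid serialized model.
    module = torch::jit::load(argv[1]);
  } catch (const c10::Error& e) {
    std::cerr << "failed to load model: " << e.what() << std::endl;
    return 1;
  }
  std::cout << "model loaded" << std::endl;
  return 0;
}
```

With this pattern the "is the model valid?" question is answered by whether control reaches the line after the `try` block, instead of by comparing against `nullptr`.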

// Loading your model
const std::string s_model_name = argv[1]; // e.g. resnet50_model.pth (a serialized model, NOT a .py file)
torch::jit::script::Module module = torch::jit::load(s_model_name, torch::kCUDA);

I am running into the same issue. Have you found a way to check this?