I have trained a custom model and want to add it to my Android app. I saved the model with torch.jit, but when I run it on my Android device I get this error:
com.facebook.jni.CppException: empty not implemented for TensorTypeSet(VariableTensorId, CUDATensorId) (empty at aten/src/ATen/Functions.h:3999)
Any fixes?
It seems like you’re trying to use a CUDA tensor in your mobile app; that is not supported on mobile. You should make sure that your model is on the CPU before doing the export.
Any code advice on how to do that?
When I trained my model I moved it to the CPU and then exported it with this code:
example = torch.rand(1, 3, 224, 224)
traced_script_module = torch.jit.trace(model, example)
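For reference, a complete CPU export might look like the sketch below. `SmallNet` is a hypothetical stand-in for the actual trained model, and the output path is just an example:

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for the trained model; substitute your own module.
class SmallNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 8, kernel_size=3, padding=1)

    def forward(self, x):
        return self.conv(x)

model = SmallNet()

# Move the model to the CPU and put it in eval mode before tracing,
# so no CUDA tensors end up in the exported file.
model = model.cpu().eval()

example = torch.rand(1, 3, 224, 224)
traced_script_module = torch.jit.trace(model, example)

# Save the TorchScript file that the Android app will load.
traced_script_module.save("model.pt")
```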
That looks right. And when you use this model you see the error above?
How do you create the input to the model on the mobile side?
I’m using the sample PyTorchDemo app from PyTorch. I just changed the model’s name in the code, nothing else.
This is weird; maybe someone from the mobile team will have a better idea?
You can also double-check your model to make sure it does not contain any weights that are on the GPU.
Hmm, okay, thanks for your help.
Is there an easy way to check the location of the weights?
You can print the `.device` field of any tensor to check where it lives.
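A minimal sketch of that check, collecting the devices of all parameters and buffers (using a small `nn.Linear` as a stand-in for the real model):

```python
import torch
import torch.nn as nn

# Stand-in model; replace with your own trained model.
model = nn.Linear(4, 2)

# Every parameter and buffer reports the device it lives on.
devices = {p.device for p in model.parameters()}
devices |= {b.device for b in model.buffers()}
print(devices)  # e.g. {device(type='cpu')}

# Fails loudly if anything is still on the GPU.
assert all(d.type == "cpu" for d in devices)
```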
You can also try installing a CPU-only build of PyTorch and loading your model there. If there are CUDA weights in it, the load will fail as well, but with a friendlier error message.
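A sketch of that round-trip check: save a TorchScript file, load it back with `map_location="cpu"`, and inspect the devices. The small traced `nn.Linear` and the file name here are placeholders for the real exported model:

```python
import torch
import torch.nn as nn

# Create and save a small TorchScript module as a stand-in for the
# exported model file (replace the path with your own model's path).
traced = torch.jit.trace(nn.Linear(4, 2), torch.rand(1, 4))
traced.save("check_model.pt")

# In a CPU-only build, torch.jit.load raises an error if the file
# contains CUDA tensors; on success, inspect the parameter devices.
loaded = torch.jit.load("check_model.pt", map_location="cpu")
print({p.device for p in loaded.parameters()})
```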
Hi! I have the same problem, even with model.cpu().
Did you find a solution?