I am loading the full stored model rather than the state_dict. If you think this might be the root cause, I will look further into that.
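For reference, my understanding is that the state_dict route would look roughly like this; a minimal sketch with a hypothetical `TinyNet` standing in for the real detector class, and an in-memory buffer standing in for the `.pth` file:

```python
import io
import torch
import torch.nn as nn

# Hypothetical stand-in for the real detector class.
class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 8, kernel_size=3)

    def forward(self, x):
        return self.conv(x)

model = TinyNet()

# Save only the parameter tensors, not the pickled class definition.
buffer = io.BytesIO()
torch.save(model.state_dict(), buffer)
buffer.seek(0)

# Loading then re-instantiates the class from the *current* source and
# just fills in the weights, which sidesteps SourceChangeWarning-style
# mismatches between the pickled source and the installed torch version.
restored = TinyNet()
restored.load_state_dict(torch.load(buffer, map_location="cpu"))
restored.eval()
```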
The error actually occurs when I try to run the actual inference. When I have full access to the GPU it runs fine, but when I direct it to the CPU it throws an error saying it was expecting a CUDA device.
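If it is relevant, my understanding is that `torch.load` needs `map_location` to remap CUDA storages onto the CPU when the target machine has no GPU; a minimal sketch with a toy model standing in for the real EfficientDet and an in-memory buffer standing in for the checkpoint file:

```python
import io
import torch
import torch.nn as nn

# Toy model standing in for the real EfficientDet: save the whole
# module, then reload it with map_location so every storage is
# remapped to the CPU no matter where it was saved from.
model = nn.Sequential(nn.Conv2d(3, 8, kernel_size=3), nn.ReLU())

buffer = io.BytesIO()
torch.save(model, buffer)  # stands in for the .pth file on disk
buffer.seek(0)

# Without map_location, a checkpoint saved on a CUDA device tries to
# deserialize its tensors back onto CUDA, which fails on a CPU-only
# box. (weights_only=False is needed on newer PyTorch releases when
# unpickling a full module; very old versions do not take the kwarg.)
loaded = torch.load(buffer, map_location=torch.device("cpu"),
                    weights_only=False)
loaded = loaded.to("cpu").eval()

traced = torch.jit.trace(loaded, torch.rand(1, 3, 32, 32))
```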
To give some more context, I’m actually trying to use jit.trace to serialize the inference so I can deploy on mobile. Here is the code I have:
import torch

model_dir = "/home/benjamin/imagelift/Odom_reader/trained_char_det/signatrix_efficientdet_coco.pth"
gtf = torch.load(model_dir)
example_inputs = torch.rand(1, 3, 512, 512)
odom_detect = torch.jit.trace(gtf, example_inputs)
The error this returns refers to the jit.trace line:
/home/benjamin/.local/lib/python3.6/site-packages/torch/serialization.py:657: SourceChangeWarning: source code of class 'torch.nn.modules.conv.Conv2d' has changed. you can retrieve the original source code by accessing the object's source attribute or set torch.nn.Module.dump_patches = True and use the patch tool to revert the changes.
warnings.warn(msg, SourceChangeWarning)
/home/benjamin/AndroidStudioProjects/Odom_detect/app/src/main/python/src/model.py:251: TracerWarning: Converting a tensor to a Python index might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if len(inputs) == 2:
/home/benjamin/AndroidStudioProjects/Odom_detect/app/src/main/python/src/utils.py:84: TracerWarning: Converting a tensor to a Python integer might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
image_shape = np.array(image_shape)
/home/benjamin/AndroidStudioProjects/Odom_detect/app/src/main/python/src/utils.py:96: TracerWarning: torch.from_numpy results are registered as constants in the trace. You can safely ignore this warning if you use this function to create tensors out of constant variables that would be the same every time you call this function. In any other case, this might cause the trace to be incorrect.
anchors = torch.from_numpy(all_anchors.astype(np.float32))
/home/benjamin/AndroidStudioProjects/Odom_detect/app/src/main/python/src/model.py:282: TracerWarning: Converting a tensor to a Python boolean might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
if scores_over_thresh.sum() == 0:
Traceback (most recent call last):
File "/home/benjamin/.config/JetBrains/PyCharmCE2020.1/scratches/scratch_1.py", line 137, in <module>
infer_test()
File "/home/benjamin/.config/JetBrains/PyCharmCE2020.1/scratches/scratch_1.py", line 113, in infer_test
odom_detect = torch.jit.trace(gtf,example_inputs)
File "/home/benjamin/.local/lib/python3.6/site-packages/torch/jit/__init__.py", line 875, in trace
check_tolerance, _force_outplace, _module_class)
File "/home/benjamin/.local/lib/python3.6/site-packages/torch/jit/__init__.py", line 1027, in trace_module
module._c._create_method_from_trace(method_name, func, example_inputs, var_lookup_fn, _force_outplace)
RuntimeError: 0 INTERNAL ASSERT FAILED at /pytorch/torch/csrc/jit/ir/alias_analysis.cpp:318, please report a bug to PyTorch. We don't have an op for aten::to but it isn't a special case. Argument types: Tensor, None, int, Device, bool, bool, bool, int,
RuntimeError: 0 INTERNAL ASSERT FAILED at /pytorch/torch/csrc/jit/ir/alias_analysis.cpp:318, please report a bug to PyTorch. We don’t have an op for aten::to but it isn’t a special case. Argument types: Tensor, None, int, Device, bool, bool, bool, int,