I recently ran into an issue while deploying a D2Go FPN model in an Android application. Initially, I had a model trained and tested on a single class using the “faster_rcnn_fbnetv3a_C4.yaml” configuration, and it worked flawlessly on Android. That model was initialized with the pretrained weights from the “faster_rcnn_fbnetv3a_C4.yaml” checkpoint, with the configuration merged from the same model zoo config file.
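For context, the working C4 setup looked roughly like this (a minimal sketch using the D2Go model zoo and GeneralizedRCNNRunner API; dataset registration and the exact training options in my script are omitted, and the single class count is just my case):

```python
# Rough sketch of the working C4 setup (D2Go model zoo + GeneralizedRCNNRunner;
# dataset registration and training options omitted).
from d2go.model_zoo import model_zoo
from d2go.runner import GeneralizedRCNNRunner

runner = GeneralizedRCNNRunner()
cfg = runner.get_default_cfg()
# Merge the baseline config and start from its pretrained checkpoint.
cfg.merge_from_file(model_zoo.get_config_file("faster_rcnn_fbnetv3a_C4.yaml"))
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url("faster_rcnn_fbnetv3a_C4.yaml")
cfg.MODEL.ROI_HEADS.NUM_CLASSES = 1  # single custom class
model = runner.build_model(cfg)
```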
Because I need to detect smaller objects, I switched to a D2Go FPN model (both the model and the config were updated accordingly). However, whenever I run this new model on Android, the app crashes with a fatal exception. The error points to a mismatch between the output channel sizes of the weight and bias in a quantized convolution. Below is the error log for reference:
FATAL EXCEPTION: Thread-2
Process: org.pytorch.demo.objectdetection, PID: 4275
com.facebook.jni.CppException: Output channel size of weight and bias must match.
Debug info for handle(s): debug_handles:{-1}, was not found.
Exception raised from apply_impl at /home/agunapal/pytorch/aten/src/ATen/native/quantized/cpu/qconv.cpp:899 (most recent call first):
(no backtrace available)
at org.pytorch.LiteNativePeer.forward(Native Method)
at org.pytorch.Module.forward(Module.java:52)
at org.pytorch.demo.objectdetection.MainActivity.run(MainActivity.java:332)
Any insight into what might be causing this error and how to resolve it would be greatly appreciated.
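For reference, my export flow follows the D2Go export tutorial and the PyTorch android-demo-app D2Go example. The sketch below shows its general shape; the FPN config name, input sizes, and file paths are placeholders rather than my exact script, the helper names may differ between d2go versions, and the demo's output-wrapping module is left out for brevity:

```python
# Sketch of the export flow (based on the D2Go export tutorial; the config
# name, sizes, and paths below are placeholders, not my exact script).
import copy
import torch
from d2go.model_zoo import model_zoo
from d2go.export.api import convert_and_export_predictor
from d2go.export.d2_meta_arch import patch_d2_meta_arch
from d2go.utils.testing.data_loader_helper import create_fake_detection_data_loader

patch_d2_meta_arch()

cfg_name = "faster_rcnn_fbnetv3g_fpn.yaml"  # placeholder FPN config name
pytorch_model = model_zoo.get(cfg_name, trained=True)
pytorch_model.cpu()

# convert_and_export_predictor handles the int8 quantization and tracing.
with create_fake_detection_data_loader(224, 320, is_train=False) as data_loader:
    predictor_path = convert_and_export_predictor(
        model_zoo.get_config(cfg_name),
        copy.deepcopy(pytorch_model),
        "torchscript_int8@tracing",
        "./",
        data_loader,
    )

# The exported model is then wrapped (as in the android-demo-app D2Go demo)
# and saved for the lite interpreter that LiteNativePeer loads on Android.
scripted = torch.jit.load(f"{predictor_path}/model.jit")
scripted._save_for_lite_interpreter("d2go_fpn.ptl")
```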