I'm trying to run a seq2seq NMT model, following the PyTorch demo application project.

I converted my own model to Lite format using this script:

```python
import torch
from torch.utils.mobile_optimizer import optimize_for_mobile

encoder_input = torch.tensor([[[429]]])
encoder_length = torch.tensor([1])
decoder_input = torch.tensor([[[123]]])
encoder_output = torch.zeros([1, 1, 512])
decoder_step = torch.tensor([0])
decoder_output = torch.zeros([1, 1, 256])

traced_encoder = torch.jit.trace(quant_encoder, (encoder_input, encoder_length))

args = [decoder_input, encoder_output, decoder_step, encoder_length]
kwargs = {"memory_lengths": encoder_length}
quant_decoder.init_state(encoder_input, None, None)
traced_decoder = torch.jit.trace(quant_decoder, args, strict=False)

traced_generator = torch.jit.trace(quant_generator, (decoder_output,))

traced_encoder_optimized = optimize_for_mobile(traced_encoder)
traced_encoder_optimized._save_for_lite_interpreter("optimized_encoder_150k.ptl")

traced_decoder_optimized = optimize_for_mobile(traced_decoder)
traced_decoder_optimized._save_for_lite_interpreter("optimized_decoder_150k.ptl")

traced_generator_optimized = optimize_for_mobile(traced_generator)
traced_generator_optimized._save_for_lite_interpreter("optimized_generator_150k.ptl")
```
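Before exporting, it can help to sanity-check that each traced module still produces the same outputs as the eager module for the example inputs. A minimal sketch of that check, using a hypothetical stand-in module (`DummyEncoder` is made up; substitute your `quant_encoder`):

```python
import torch

# Hypothetical stand-in for quant_encoder, only to illustrate the check
class DummyEncoder(torch.nn.Module):
    def forward(self, tokens, lengths):
        # pretend embedding: (1, 1, 1) -> (1, 1, 8)
        return tokens.float().expand(-1, -1, 8).contiguous(), lengths

encoder = DummyEncoder()
encoder_input = torch.tensor([[[429]]])
encoder_length = torch.tensor([1])

traced = torch.jit.trace(encoder, (encoder_input, encoder_length))

eager_out, _ = encoder(encoder_input, encoder_length)
traced_out, _ = traced(encoder_input, encoder_length)
assert torch.equal(eager_out, traced_out)
```

If eager and traced outputs already disagree here, the problem is in tracing rather than in the Android side.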

It seems to complete successfully, at least without reporting an error. Then I try to run my app on Android like this:

```java
int input_length = inputs.length;
final long[] inputShape = new long[]{1, input_length, 1};
final long[] inputlengthShape = new long[]{input_length};
final long[] outputsShape = new long[]{MAX_LENGTH, HIDDEN_SIZE};
final FloatBuffer outputsTensorBuffer = Tensor.allocateFloatBuffer(MAX_LENGTH * HIDDEN_SIZE);
LongBuffer inputlengthTensorBuffer = Tensor.allocateLongBuffer(1);
Tensor inputlengthTensor = Tensor.fromBlob(inputlengthTensorBuffer, inputlengthShape);
for (int i = 0; i < inputs.length; i++) {
    LongBuffer inputTensorBuffer = Tensor.allocateLongBuffer(input_length);
    inputTensorBuffer.put(inputs[i]);
    Tensor inputTensor = Tensor.fromBlob(inputTensorBuffer, inputShape);
    Log.i("INFO input tensor", Arrays.toString(inputTensor.getDataAsLongArray()));
    Log.i("INFO input tensor shape", Arrays.toString(inputTensor.shape()));
    Log.i("INFO input length tensor shape", Arrays.toString(inputlengthTensor.shape()));
    IValue temp = IValue.from(inputTensor);
    Log.i("INFO input tensor test", Arrays.toString(temp.toTensor().getDataAsLongArray()));
    final IValue[] outputTuple = mModuleEncoder.forward(
            IValue.from(inputTensor), IValue.from(inputlengthTensor)).toTuple();
}
```

The input is only one integer representing a character, but it always returns an error:

com.facebook.jni.CppException: Expected batch2_sizes[0] == bs && batch2_sizes[1] == contraction_size to be true, but got false. (Could this error message be improved? If so, please report an enhancement request to PyTorch.)
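For context, that check comes from a batched matmul (`torch.bmm`/`baddbmm`, commonly inside attention): the first two dimensions of the second operand must equal the batch size and contraction size of the first. A minimal sketch reproducing the same class of failure (these shapes are made up, not taken from the model):

```python
import torch

a = torch.zeros(1, 1, 512)  # (batch, n, contraction)
b = torch.zeros(2, 512, 4)  # batch dim 2 != 1 -> the bmm size check fails

try:
    torch.bmm(a, b)
except RuntimeError as e:
    print("bmm rejected mismatched shapes:", e)
```

Since `torch.jit.trace` specializes on the example inputs' shapes, one plausible cause is that the traced attention has the tracing-time shape baked in, and the tensors sent from Android (e.g. `inputShape = {1, input_length, 1}`) no longer match the `[1, 1, 1]` example used for tracing.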

It’s a little hard to debug this, as I can’t inspect the intermediate results. I’ve been struggling with this issue for two days…
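One way I know of to get intermediate results without going through the app is to load the saved `.ptl` back in Python with the lite interpreter and feed it the exact tensors Android would send. A self-contained sketch with a hypothetical module (note `_load_for_lite_interpreter` is an internal API, like the `_save_for_lite_interpreter` call above, and may vary by PyTorch version):

```python
import os
import tempfile
import torch
from torch.jit.mobile import _load_for_lite_interpreter

# Hypothetical module standing in for the real encoder
class DummyEncoder(torch.nn.Module):
    def forward(self, tokens, lengths):
        return tokens.float(), lengths

traced = torch.jit.trace(
    DummyEncoder(), (torch.tensor([[[429]]]), torch.tensor([1]))
)

path = os.path.join(tempfile.mkdtemp(), "encoder.ptl")
traced._save_for_lite_interpreter(path)

# Run the same artifact the app loads, and inspect shapes directly
lite = _load_for_lite_interpreter(path)
out, lengths = lite(torch.tensor([[[7]]]), torch.tensor([1]))
print(out.shape, lengths)
```

If the `.ptl` fails in Python with the Android-side shapes, the bug is in the export/shapes, not in the Java bindings.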

Any response is appreciated!