TorchScript trace size()

I want to add preprocessing to a wrapper class around my forward model and trace it with TorchScript so that it can be used from C++. The normalization of the input data etc. works fine, but I also want to seamlessly handle inputs with different numbers of channels (padding with 0's if there is only 1 channel). I tried doing this with if input.size()[0]==1, but I get a warning that makes me think the condition might be ignored:

create_torchscript_model.py:37: TracerWarning: torch.tensor results are registered as constants in the trace. You can safely ignore this warning if you use this function to create tensors out of constant variables that would be the same every time you call this function. In any other case, this might cause the trace to be incorrect.

I tried keeping everything in tensors:

        if torch.tensor(input.size())[0]==torch.tensor(1):
            input = F.pad(input,(0,0,0,0,0,0,1,0))

but get similar warnings. Is there a better way to do this, or should I just keep it in the C++ code?
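To make the failure mode concrete, here is a minimal sketch of the kind of wrapper described above (the Wrapper class and tensor shapes are illustrative, not the original code). Tracing with a 1-channel example records the pad unconditionally, so the trace then pads every input, even ones that already have more channels:

```python
import torch
import torch.nn.functional as F

# Hypothetical minimal wrapper with the data-dependent branch from the
# question; the names and shapes here are illustrative assumptions.
class Wrapper(torch.nn.Module):
    def forward(self, input: torch.Tensor) -> torch.Tensor:
        if input.size(0) == 1:
            # prepend one zero channel along dim 0 of a 4-D tensor
            input = F.pad(input, (0, 0, 0, 0, 0, 0, 1, 0))
        return input

one_channel = torch.randn(1, 3, 4, 4)
two_channel = torch.randn(2, 3, 4, 4)

# Tracing with the 1-channel example bakes the pad into the graph
# (and emits a TracerWarning about the data-dependent condition):
traced = torch.jit.trace(Wrapper(), one_channel)
print(traced(one_channel).shape)  # torch.Size([2, 3, 4, 4]) -- correct
print(traced(two_channel).shape)  # torch.Size([3, 3, 4, 4]) -- padded anyway
```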

Tracing uses the currently provided inputs and records them as constants, as the warning says.
Could you try to script the module instead?

Ah, I see it better now from your comment and from reading the documentation more closely (https://pytorch.org/docs/stable/jit.html): data-dependent control flow has to be handled with scripting, not tracing, because a trace depends entirely on the example input you give it. Thanks, I switched this over to scripting using:

traced_script_module = torch.jit.script(wrapper)

instead of

traced_script_module = torch.jit.trace(wrapper,example)
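For reference, a minimal sketch of scripting such a wrapper (the Wrapper class here is an illustrative stand-in, not the original code). torch.jit.script compiles the if statement itself, so the branch is evaluated at run time for each input instead of being frozen at trace time:

```python
import os
import tempfile

import torch
import torch.nn.functional as F

# Illustrative wrapper with the shape-dependent branch; scripting
# preserves the conditional instead of specializing on one example input.
class Wrapper(torch.nn.Module):
    def forward(self, input: torch.Tensor) -> torch.Tensor:
        if input.size(0) == 1:
            # prepend one zero channel along dim 0 of a 4-D tensor
            input = F.pad(input, (0, 0, 0, 0, 0, 0, 1, 0))
        return input

scripted = torch.jit.script(Wrapper())
print(scripted(torch.randn(1, 3, 4, 4)).shape)  # torch.Size([2, 3, 4, 4])
print(scripted(torch.randn(2, 3, 4, 4)).shape)  # torch.Size([2, 3, 4, 4])

# The saved module can then be loaded from C++ with torch::jit::load(...)
path = os.path.join(tempfile.gettempdir(), "wrapper.pt")
scripted.save(path)
```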

I have some related issues with mixing CPU/GPU code here: Mixing GPU/CPU code in TorchScript traced model. In general I'm seeing that scripting answers some of those questions (e.g. with torch.no_grad throws an error saying it's not allowed). I'm still not certain about mixing .cuda() and .cpu() in a scripted module, or whether I can keep some parameters on separate devices, but this answers this question, and I'll await answers to the others in the other post.