TorchScript trace size()

Ah, I see better now from your comment and from reading the documentation more closely (https://pytorch.org/docs/stable/jit.html): control flow has to be handled with scripting, not tracing, because a trace depends entirely on the input you give it at trace time. Thanks, I switched this over to scripting using:

traced_script_module = torch.jit.script(wrapper)

instead of

traced_script_module = torch.jit.trace(wrapper, example)
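
To illustrate the difference, here is a minimal sketch (the `Wrapper` module below is a hypothetical stand-in for my actual wrapper, not the real code): tracing records only the branch taken for the example input, while scripting compiles the `if` itself.

```python
import torch

# Hypothetical stand-in for the real wrapper module: its forward()
# contains data-dependent control flow, which tracing cannot capture.
class Wrapper(torch.nn.Module):
    def forward(self, x):
        if x.sum() > 0:   # branch depends on the input's values
            return x + 1
        else:
            return x - 1

wrapper = Wrapper()
example = torch.ones(3)

# Tracing freezes the branch taken for `example` (PyTorch emits a
# TracerWarning here); scripting preserves the control flow.
traced = torch.jit.trace(wrapper, example)
scripted = torch.jit.script(wrapper)

neg = -torch.ones(3)
print(traced(neg))    # wrong: still takes the x + 1 branch
print(scripted(neg))  # correct: takes the x - 1 branch
```

Running the traced module on a negative input silently gives the result of the branch recorded at trace time, which is exactly the pitfall the docs warn about.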

I have some related issues with mixing CPU/GPU here: Mixing GPU/CPU code in TorchScript traced model. In general, though, scripting is answering some of those questions (e.g. `with torch.no_grad` throws an error saying it's not allowed). I'm still not certain about mixing .cuda() and .cpu() in a scripted module, or whether I can keep some parameters on separate devices, but this answers this question, and I'll await answers to the others in the other post.