So can I just use `torch.jit.ScriptModule` for every model and get a speedup?

Here I have a simple test on noise.
With JIT: 47 s
https://colab.research.google.com/drive/1UafCVRxnTNyWFr79f4HGqg-a2VwuLHAp?usp=sharing
Without: 56 s
https://colab.research.google.com/drive/1dKLRgqsVAOrQSe8USSVfeg9a0xFYvCqw?usp=sharing
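For reference, a minimal sketch of the kind of comparison I'm running (the toy model, sizes, and iteration count here are made up for illustration, not the ones in the notebooks):

```python
import time
import torch
import torch.nn as nn

class TinyNet(nn.Module):
    # Hypothetical toy model; the notebooks use their own.
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, x):
        return self.net(x)

def bench(model, x, iters=100):
    # Warm-up runs so one-time costs (TorchScript compilation,
    # allocator growth) are excluded from the timing.
    with torch.no_grad():
        for _ in range(3):
            model(x)
        start = time.perf_counter()
        for _ in range(iters):
            model(x)
    return time.perf_counter() - start

x = torch.randn(256, 64)            # random "noise" input
eager = TinyNet().eval()
scripted = torch.jit.script(eager)  # TorchScript-compiled version

print(f"eager:    {bench(eager, x):.3f}s")
print(f"scripted: {bench(scripted, x):.3f}s")
```

Both versions should produce the same outputs; only the timing differs.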

Does it work the way it looks, or did I get something wrong?

Also, it seems like the JIT version consumes more memory.