PyTorch JIT Vision

I have been following the development of the JIT mode in PyTorch and using it in some of my own research. What I want is to be able to extract a graph (forward and backward) and run some custom passes on it. I have been using jit.trace until now, but it turns out that gradients for things like sigmoid are not traced properly because of ( ). I was also looking at the more recent JIT script, but it currently only supports a small set of operations. I just wanted to understand what the vision for the JIT in PyTorch is. Will the JIT script be expanded to cover all operations? Will the tracing mechanism still be supported and developed further?
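For context, this is roughly the kind of tracing workflow I mean (the module and shapes here are just placeholders, not my actual model):

```python
import torch

# Minimal sketch: trace a small module and inspect the recorded graph,
# which is what I then want to run custom passes over.
class Net(torch.nn.Module):
    def forward(self, x):
        return torch.sigmoid(x) * 2

net = Net()
example = torch.randn(3)
traced = torch.jit.trace(net, example)
print(traced.graph)  # the recorded forward graph
```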

Thanks and Regards
Tapan Chugh


Maybe @apaszke can help answer this. I’m interested in knowing as well :slight_smile:

The JIT is still a bit of an experimental project, so don’t take the answers I give here for granted, as they can change before the release. I also don’t want to get too deep into the details, but the general idea is that the JIT itself will support all kinds of PyTorch operations, while treating most of them as non-optimizable black boxes. On the other hand, the 20% of ops that account for 80% of the graphs will be well understood, and those are the ones that will be automatically specialized. The tracing mechanism is here to stay and will be developed further.

Can you please provide us with a case that causes the constants to appear in the backward graph? This sounds like a bug to me and should be fixed. NB, if I recall correctly, we handled this incorrectly for some time, but it should be fine in master.


Thanks for the reply. My code was stale, hence the constant bugs. I updated after your reply and they are resolved now.
Another question: given that tracing works, what is the use case for the JIT script?
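As I understand it, one motivation is data-dependent control flow: a trace records only the single path taken for the example input, while script preserves the loop itself. A hedged sketch (the function here is my own illustration, not from the thread):

```python
import torch

# Script compiles the loop itself, so the number of iterations can
# depend on the input; a trace would bake in the path taken for the
# one example input it was recorded with.
@torch.jit.script
def halve_until_small(x):
    while x.sum() > 1.0:
        x = x / 2
    return x

print(halve_until_small(torch.tensor([8.0])))  # loop runs 3 times here
```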